MichaelA

I’m Michael Aird, an incoming Summer Research Fellow with the Center on Long-Term Risk (though I don’t personally subscribe to suffering-focused views on ethics). During my fellowship, I’ll likely do research related to reducing long-term risks from malevolent actors.

Before that, I did existential risk research & writing for Convergence Analysis and grant writing for a sustainability accounting company. Before that, I was a high-school teacher for two years in the Teach For Australia program, ran an EA-based club and charity election at the school I taught at, published a peer-reviewed psychology paper, and won a stand-up comedy award which ~30 people in the entire world would've heard of (a Golden Doustie, if you must know).

Opinions expressed in my posts or comments should be assumed to be my own, unless indicated otherwise.

I want to continually improve along many dimensions, so I welcome feedback of all kinds. You can give me feedback anonymously here.

I also post to LessWrong.

If you think you or I could benefit from us talking, feel free to reach out or schedule a call.

Comments

A New X-Risk Factor: Brain-Computer Interfaces

I haven't had a chance to read this post yet, but just wanted to mention one paper I know of that does discuss brain-computer interfaces in the context of global catastrophic risks, which therefore might be interesting to you or other readers. (The paper doesn't use the term existential risk, but I think the basic points could be extrapolated to existential risks.) 

The paper is Assessing the Risks Posed by the Convergence of Artificial Intelligence and Biotechnology. I'll quote the most relevant section of text below, but table 4 is also relevant, and the paper is open-access and (in my view) insightful, so I'd recommend reading the whole thing.

Beyond the risks associated with medical device exploitation, it is possible that in the future computer systems will be integrated with human physiology and, therefore, pose novel vulnerabilities. Brain-computer interfaces (BCIs), traditionally used in medicine for motor-neurological disorders, are AI systems that allow for direct communication between the brain and an external computer. BCIs allow for a bidirectional flow of information, meaning the brain can receive signals from an external source and vice versa. 

The neurotechnology company, Neuralink, has recently claimed that a monkey was able to control a computer using one of their implants. This concept may seem farfetched, but in 2004 a paralyzed man with an implanted BCI was able to play computer games and check email using only his mind. Other studies have shown a "brain-brain" interface between mammals is possible. In 2013, one researcher at the University of Washington was able to send a brain signal captured by electroencephalography over the internet to control the hand movements of another by way of transcranial magnetic stimulation. Advances are occurring at a rapid pace and many previous technical bottlenecks that have prevented BCIs from widespread implementation are beginning to be overcome.

Research and development of BCIs have accelerated quickly in the past decade. Future directions seek to achieve a symbiosis of AI and the human brain for cognitive enhancement and rapid transfer of information between individuals or computer systems. Rather than having to spend time looking up a subject, performing a calculation, or even speaking to another individual, the transfer of information could be nearly instantaneous. There have already been numerous studies conducted researching the use of BCIs for cognitive enhancement in domains such as learning and memory, perception, attention, and risk aversion (one being able to incite riskier behavior). Additionally, studies have explored the military applications of BCIs, and the field receives a bulk of its funding from US Department of Defense sources such as the Defense Advanced Research Projects Agency. 

While the commercial implementation of BCIs may not occur until well into the future, it is still valuable to consider the risks that could arise in order to highlight the need for security-by-design thinking and avoid path dependency, which could result in vulnerabilities—like those seen with current medical devices—persisting in future implementations. Cyber vulnerabilities in current BCIs have already been identified, including those that could cause physical harm to the user and influence behavior. In a future where BCIs are commonplace alongside advanced understandings of neuroscience, it may be possible for a bad actor to achieve limited influence over the behavior of a population or cause potential harm to users. This issue highlights the need to have robust risk assessment prior to widespread technological adoption, allowing for regulation, governance, and security measures to take identified concerns into account.

What questions would you like to see forecasts on from the Metaculus community?

I'd also be interested in forecasts on these topics.

I think Metaculus could play a sort of sanity-checking, outside-view role for EA. Questions like 'Will EA see AI risk (climate change/bio-risk/etc.) as less pressing in 2030 than they do now?', or 'Will EA in 2030 believe that EA should've invested more and donated less over the 2020s?'

It seems to me that there'd be a risk of self-fulfilling prophecies. 

That is, we'd hope that what'd happen is: 

  1. a bunch of forecasters predict what the EA community would end up believing after a great deal of thought, debate, analysis, etc.
  2. then we can update ourselves closer to believing that thing already, which could help us get to better decisions faster.

...But what might instead happen is: 

  1. a relatively small group of forecasters makes relatively unfounded forecasts
  2. then the EA community - which is relatively small, unusually connected to Metaculus, and unusually interested in forecasts - updates overly strongly on those forecasts, thus believing something that they wouldn't otherwise have believed and don't have good reasons to believe

(Perhaps this is like a time-travelling information cascade?)

I'm not saying the latter scenario is more likely than the former, nor that this means we shouldn't solicit these forecasts. But the latter scenario seems likely enough that it's perhaps an argument against soliciting these forecasts, and that it's at least worth warning readers about clearly and repeatedly if these forecasts are indeed solicited.

Also, this might be especially bad if EAs start noticing that community beliefs are indeed moving towards the forecasted future beliefs, and don't account sufficiently well for the possibility that this is just a self-fulfilling prophecy, and thus increase the weight they assign to these forecasts. (There could perhaps be a feedback loop.)

I imagine there's always some possibility that forecasts will influence reality in a way that makes the forecasts more or less likely to come true than they would've been otherwise. But this seems more than usually likely when forecasting EA community beliefs (compared to e.g. forecasting geopolitical events).

Propose and vote on potential tags

Now vs Later, or Optimal Timing, or Optimal Timing for Altruists, or some other name.

This would be intended to capture posts relevant to the debate over "giving now vs later" and "patient vs urgent longtermism", as well as related debates like whether to do direct work now vs build career capital vs movement-build, and how much to give/work now vs later, and when to give/work if not now ("later" is a very large category!). 

This tag would overlap with Hinge of History, but seems meaningfully distinct from that.

Not sure what the best name would be. 

Quantifying the probability of existential catastrophe: A reply to Beard et al.

Great, thanks for sharing that link! I've now edited this post to link to the preprint rather than the paywalled version, and to mention the blog post.

(I'd already read the paywalled version myself.)

Collection of good 2012-2017 EA forum posts

Really enjoyed this collection.

One more long-term-future post from that era that I'd recommend is Beckstead's A proposed adjustment to the astronomical waste argument. I think that's been influential in a lot of people's thinking (I see it cited often), including mine.

Also, regarding Beckstead's Improving disaster shelters to increase the chances of recovery from a global catastrophe (which you link to): he wrote a good paper on the same topic as well.

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

Yeah, I share the view that that sort of research could be very useful and seems worth trying to do, despite the challenges. (Though I hold that view with relatively low confidence, due to having relatively little relevant expertise.)

Some potentially useful links: I discussed the importance and challenges of estimating existential risk in my EAGx lightning talk and Unconference talk, provide some other useful links (including to papers and to a database of all x-risk estimates I know of) in this post, and quote from and comment on a great recent paper here.

I think there are at least two approaches to investigating this topic: soliciting new forecasts about the future and then seeing how calibrated they are, or finding past forecasts and seeing how calibrated they were. The latter is what Muehlhauser did, and he found it very difficult to get useful results. But it still seems possible there'd be room for further work taking that general approach, so in a list of history topics that it might be very valuable to investigate, I mention:

6. The history of predictions (especially long-range predictions and predictions of things like extinction), millenarianism, and how often people have been right vs wrong about these and other things.

Hopefully some historically minded EA has a crack at researching that someday! (Though of course that depends on whether it'd be more valuable than other things they could be doing.)

(One could also perhaps solicit new forecasts about what’ll happen in some actual historical scenario, from people who don’t know what ended up happening. I seem to recall Tetlock discussing this idea on 80k, but I’m not sure.)
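(As a rough illustration of the "find past forecasts and see how calibrated they were" approach, here's a minimal sketch of the scoring step, using made-up forecast probabilities and outcomes rather than any real dataset. It computes a Brier score and a simple calibration table; the hard part in practice would of course be collecting and resolving the historical forecasts, not this step.)

```python
from collections import defaultdict

# Hypothetical data: each pair is (stated probability, whether the event happened).
forecasts = [
    (0.9, True), (0.8, True), (0.7, False), (0.6, True),
    (0.3, False), (0.2, False), (0.1, True), (0.05, False),
]

# Brier score: mean squared error of the probabilistic forecasts (lower is better).
brier = sum((p - float(outcome)) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration table: bucket forecasts by stated probability and compare the
# stated probability with the observed frequency of the event in each bucket.
buckets = defaultdict(list)
for p, outcome in forecasts:
    buckets[round(p, 1)].append(outcome)

for p in sorted(buckets):
    outcomes = buckets[p]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated ~{p:.1f}  observed {observed:.2f}  (n={len(outcomes)})")
```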

Addressing Global Poverty as a Strategy to Improve the Long-Term Future

Oh, ok. I knew of "gross national happiness" as (1) a thing the Bhutan government talked about, and (2) a thing some people mention as more important than GDP without talking precisely about how GNH is measured or what the consequences of more GNH vs more GDP would be. (Those people were primarily social science teachers and textbook authors, from when I taught high school social science.) 

I wasn't aware GNH had been conceptualised in a way that includes things quite distinct from happiness itself. I don't think the people I'd previously heard about it from were aware of that either. Knowing that makes me think GNH is more likely to be a useful metric for x-risk reduction, or at least that it's a step in the right direction, as you suggest. 

At the same time, I feel that, in that case, GNH is quite a misleading term. (I'd say similar about the Happy Planet Index.) But that's a bit of a tangent, and not your fault (assuming you didn't moonlight as the king of Bhutan in 1979).

A List of EA Donation Pledges (GWWC, etc)

Thanks for putting this together!

Isn’t the GWWC Pledge too simplistic to fit everyone’s specific situation?

Another thing that I think is worth mentioning here is that the GWWC pledge is already less one-size-fits-all than many people realise. To illustrate, here are some key points from their FAQ:

The pledge is of course just a minimum. Some members decide to go further than this and pledge to give a higher percentage, such as 20% or even 50%.

[...] 

What do you mean by income?

The goal here is to help members stick to their plan of taking significant action to benefit others. All guidelines about how to calculate income should be thought of as serving that goal. [Then there are more details on this.]

[...]

Students, unemployed people, and full-time parents

Many students, unemployed people and full-time parents have little or no income, but are largely supported by money from family members, the government or a student loan.

The Pledge does not require you to donate any of this funding (although it does commit you regarding any future income). However, in the interests of all of our members giving what they can, we feel that the spirit of the Pledge requires them to donate at least 1% of their spending money.

We define spending money as money received for the purpose of spending on items such as food, rent, travel, children, or personal items. It does not include spending on tuition fees. If a couple with shared finances both wish to join, then they can simply donate 10% of their combined earnings and not worry about spending money.

Of course, people who earn some income but depend on other help for their living expenses may choose to donate 10% of their earnings if they want to go above and beyond.

[...]

How often should members donate?

The spirit of the Pledge is to donate on an ongoing basis, rather than letting “donation debt” build up over many years. We check in with members every year and encourage them to log their donations. However, you don’t have to donate on a strictly annual basis. Members who consolidate donations into certain years (for example for tax advantages, or in case of temporary financial hardship) are welcome to do so.

Do donations have to be to registered charities?

The commitment is to donate to "the most effective organisations". These organisations could be charities, but could also be entities not officially registered as tax-deductible charities (for example, a charity in the early stages of getting registered, or an advocacy or lobbying group that is not a charity).

Addressing Global Poverty as a Strategy to Improve the Long-Term Future

A useful concept here might be that of an "environmental Kuznets curve":

The environmental Kuznets curve (EKC) is a hypothesized relationship between environmental quality and economic development[17]: various indicators of environmental degradation tend to get worse as modern economic growth occurs until average income reaches a certain point over the course of development.[18][19] The EKC suggests, in sum, that "the solution to pollution is economic growth."

There is both evidence for and against the EKC. I'm guessing the evidence varies for different aspects of environmental quality and between regions. I'm not an expert on this, but that Wikipedia section would probably be a good place for someone interested in the topic to start.

I already think technology is at a point where welfare does not have to depend on fossil fuel consumption.

I think I broadly agree, but it's also true that present-day welfare is cheaper to provide with fossil fuels than with low/no-carbon energy (if we're ignoring things like carbon taxes or renewables subsidies that were put in place specifically to address the externalities). I think carbon mitigation is well worth the price (including the price of enacting e.g. carbon taxes) when we consider future generations, and perhaps even when we consider only present generations' entire lifespans (though I haven't looked into that). But there are some real tensions there for people who are in practice focused on near-term effects.

Addressing Global Poverty as a Strategy to Improve the Long-Term Future

Epistemic status: I've only spent perhaps 15 minutes thinking about these specific matters, though I've thought more about related things.

I'd guess that happiness levels (while of course intrinsically important) wouldn't be especially valuable as a metric of how well a global health/development intervention is reducing existential risks. I don't see a strong reason to believe increased happiness (at least from the current margin) leads to better handling of AI risk and biorisk. Happiness may correlate with x-risk reduction, but if so, that'd probably be due to other variables affecting both happiness and x-risk.

Metrics that seem more useful to me might be things like:

  • quality of reasoning and evidence used for key political and corporate decision-making
    • Though operationalising this is of course difficult
  • willingness to consider not just risks but also benefits of technological and economic development
    • This is tricky because I think people often overestimate or overweight the risks from various developments (e.g., GMO crops), especially if our focus is on just the coming years or decades. So we'd want to somehow target this metric to the "actually" risky developments, or to "considering" risks in a reasonable way rather than just in general.
  • levels of emissions
  • levels of corruption

The last two of those metrics might be "directly" important for existential risk reduction, but also might serve as a proxy for things like the first two metrics or other things we care about.
