All of HaydnBelfield's Comments + Replies

What is the EU AI Act and why should you care about it?

Excellent overview, and I completely agree that the AI Act is an important policy for AI governance.

One quibble: as far as I know, the Center for Data Innovation is just a lobbying group for Big Tech - I was a little surprised to see it listed in "public responses from various EA and EA Adjacent organisations".

MathiasKB (1mo): I'm not very familiar with the Center for Data Innovation, thank you for pointing this out! I included their response because its author is familiar with EA and the response is well reasoned. I also felt it would be healthy to include a perspective and set of concerns vastly different from my own, as the post is already biased by my choice of focus. That being said, I haven't gotten the best impression from some of the Center for Data Innovation's research. As far as I can tell, their widely cited analysis projecting the Act to cost €31 billion has a flaw in its methodology that inflates the estimate. In their defense, their cost analysis is also conservative in other ways, leading to a lower number than what might be reasonable.
Lessons for AI governance from the Biological Weapons Convention

Hi Aryan,

Cool post, very interesting! I'm fascinated by this topic - the PhD thesis I'm writing is on nuclear, bio and cyber weapons arms control regimes and what lessons can be drawn for AI. So obviously I'm very into this, and want to see more work done on this. Really excellent to see you exploring the parallels. A few thoughts:

  • Your point on 'lock-in' seems crucial. It currently seems to me that there are 'critical junctures' (Capoccia) in which regimes get set, and then it's very hard to change them. So e.g. the failure to control nukes or cyber in early
…
Holden Karnofsky (2mo): Thanks! This post is using experimental formatting so I can't fix this myself, but hopefully it will be fixed soon.
What EA projects could grow to become megaprojects, eventually spending $100m per year?

Interesting first point, but I disagree. To me, the increased salience of climate change in recent years can be traced back to the 2018 Special Report on Global Warming of 1.5 °C (SR15), and in particular the meme '12 years to save the world'. Seems to have contributed to the start of School Strike for Climate, Extinction Rebellion and the Green New Deal. Another big new scary IPCC report on catastrophic climate change would further raise the salience of this issue-area.

I was thinking that $100m would be for all four of these topics, and that we'd…

Halstead (3mo): On the last point, during the early Pliocene, early hominids with much worse technology than us lived in a world in which temperatures were 4.5C warmer than pre-industrial. It would be a surprise to me if this level of warming would kill off everyone, including people in temperate regions. There's more to come from me on this topic, but I will leave it at that for now.
Most research/advocacy charities are not scalable

I think it's a really good point that there's something very different between research/policy orgs and orgs that deliver products and services at scale. I basically agree, but I'd slightly tweak this to
"It is very hard for a charity to scale to more than $100 million per year without delivering a physical product or service."

This is because orgs/companies that deliver a digital service (GiveDirectly, Facebook/Google, etc.) obviously can scale to $100 million per year.

Linch (3mo): Can/should GiveDirectly's "service" actually scale up to >$100m/year? Obviously they can distribute >$100M/year, but I'm interested in whether they need or benefit from >$100m/year of employees, software, etc. (what in other subsectors of the nonprofit world would just be called "overhead"), without just tacking on unnecessary bloat.
MichaelStJules (3mo): We can also spend a lot on advertising, which seems neither like a product nor a service. 1. Ads for Veganuary, Challenge 22 and similar diet pledge programs might scale reasonably well (both within and across regions). I suppose they're also providing services, i.e. helping people go vegan with information, support groups, dieticians/nutritionists/other healthcare professionals, etc., but that's separate from the ads. 2. Ads for documentaries, videos, books or articles to get people into EA or specific causes.
Khorton (3mo): Absolutely, that's a great point!
What EA projects could grow to become megaprojects, eventually spending $100m per year?

Hell yeah! Get JGL to star - https://www.eaglobal.org/speakers/joseph-gordon-levitt/

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Do you mean just the fourth bullet, or do you think this about all four? 

The 1980s nuclear winter and asteroid papers (I'm thinking especially Sagan et al, and Alvarez et al) were very influential in changing political behaviour: Gorbachev and Reagan explicitly acknowledged as much on nuclear, and the asteroid evidence contributed to the 90s asteroid films and the (hugely successful!) NASA effort to track all 'dino-killers'. On the margin now, I think more scary stuff would be motivating. There's also VOI in resolving how big a concern nuclear winter is (eg…

Halstead (3mo): I was just referring to the last bullet re climate change. E.g. in the last IPCC report, it would have been reasonable for govts to believe that there was a >10% chance of >6C of warming, and that has been true since the 1970s, without having any impact. The political response to climate change seems to be influenced by most mainstream media coverage and public opinion in some circles, which it would be fair to characterise as 'very concerned' about climate change. An opinion poll [https://www.carbonbrief.org/guest-post-rolls-reveal-surge-in-concern-in-uk-about-climate-change] suggests that 54% of British people think that climate change threatens human extinction (depending on question framing). I agree that in a rational world we would want to know how bad climate change could be, but the world isn't rational.

If you're just talking about EA cause prioritisation, the cost-benefit ratio looks pretty poor to me. Wrt reducing uncertainty about climate sensitivity, you're talking costs of $100m per year to have a slim chance of pushing climate change up above AI, bio, and great power war for major EA funders. Or we might find out that climate change is less pressing than we thought, in which case this wouldn't make any difference to the current priorities of EA funders.

I also don't see how research on solar geoengineering could be a top pick - stratospheric aerosol injection just doesn't seem like it will get used for decades because it requires unrealistic levels of international coordination. Also, I don't think extra modelling studies on solar geo would shed much light unless we spent hundreds of millions. Climate models are very inaccurate [https://www.pnas.org/content/116/49/24390] and wouldn't provide much insight into the impacts of solar geo in the real world. There might be a case for regional solar geo research, though.

(Fwiw, I really don't rate that Xu and Ramanathan paper. They're not using existential in the sense we are concerned about. They define it as "posing an e…
What EA projects could grow to become megaprojects, eventually spending $100m per year?

Megaprojects cost $1 billion or more. Ben Todd was using the (admittedly somewhat confusing) term 'EA megaproject', by which he meant a new project that could usefully spend $100m a year. So these concerns about megaprojects don't apply.
How about we use the term '$100m-scale project'? (I considered 'kiloproject' but that's really niche.)

Linch (3mo): Note that $100M/year is not inconsistent with >$1B/project. For example, at an 8% discount rate, the net present value of a $100M/year annuity is about ~$1.25B.
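To spell out the arithmetic behind Linch's figure (an editorial gloss, treating the $100M/year stream as a perpetuity at an 8% discount rate — the comment doesn't state a horizon):

```latex
% NPV of a constant cash flow C per year, discounted at rate r,
% continuing indefinitely (a perpetuity):
\mathrm{NPV} = \sum_{t=1}^{\infty} \frac{C}{(1+r)^{t}} = \frac{C}{r}
             = \frac{\$100\text{M}}{0.08} = \$1.25\text{B}
```

A finite horizon would give a somewhat smaller value, which is consistent with the "~" qualifier.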
Ozzie Gooen (3mo): It sounds like there are two very different concerns here. One is how large the project is: $100M vs. $1 billion. The second is how "gradual" that project can be - can it start small, or do we need to allocate $100M at once? The concern I was bringing up was more about the latter. My main point was just that we should generally prioritize projects that can be neatly scaled up over ones that require a huge upfront cost. In fairness, I think most of the suggested examples are things that have nice ramps of scaling them up. For example, the nuclear funding gap seems fairly gradual, and the Anthropic team seems to be mainly progressing ideas that they worked on from OpenAI. Projects I'd be more concerned about are ones like "We've never done this sort of thing before, we really can't say how successful it will be, but here's $100M, and it needs to be spent very quickly using plans that can't change much at all." I'm not that concerned about the $100M vs. $1B difference. Many groups grow over time, so I'd imagine that most exciting $100M projects would be very likely to reach $1B after a few years.
What EA projects could grow to become megaprojects, eventually spending $100m per year?

Here's the interesting, frustrating evaluation report:  https://www.macfound.org/media/article_pdfs/nuclear-challenges-synthesis-report_public-final-1.29.21.pdf[16].pdf
Looks to me like a classic hits-based giving bet - you mostly don't make much impact, then occasionally (Nixon arms control, H.W. Bush's START and Nunn-Lugar, maybe Obama's JCPOA/New START) get a home run.

What EA projects could grow to become megaprojects, eventually spending $100m per year?

9 PACs have raised/spent more than $100m (source). So an EA PAC?

Although I guess Sam Bankman-Fried was the second-largest donor to Biden (coindesk, Vox), and Dustin Moskovitz gave $50m; and they're both involved with Future Forward and Mind The Gap, so maybe EA is already kinda doing this.

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Developing new climate models has costs in the hundreds of millions of dollars. Useful longtermist climate modelling could include:

evelynciara (2mo): I definitely want to see more modeling of supervolcano and comet disasters.

I don't see climate research as very valuable. The value of information would only be high if this research would change how people act. Climate inaction seems to be mainly political inertia, not lack of information about potential catastrophe.

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Hard science funding seems able to absorb this scale of funding, though this might not count as 'EA-specific' projects:
  • On climate: carbon capture, new solar materials, new battery R&D, maybe even fusion as 'hits-based giving'?
  • On bio preparedness there's quite a lot, e.g. Cassidy Nelson's recommendations, Andy Weber's recommendations

What EA projects could grow to become megaprojects, eventually spending $100m per year?

Filling the $100m funding gap in nuclear, since the MacArthur Foundation is pulling out of nuclear policy.

"Since 2015 alone, MacArthur directed 231 grants totaling >$100m in some cases providing more than half the annual funding for individual institutions or programs."
"MacArthur was providing something like 40 to 55 percent of all the funding worldwide of the non-government funding worldwide on nuclear policy”
https://t.co/srsq45ejc7?amp=1 

To clarify, this is $100m over around 5 years, or $20m/year - which is a good start, but far less than $100m/year.

evelynciara (2mo): I agree with this. As the article says, multiple funders are pulling out of nuclear arms control, not just MacArthur. So it would be a good idea for EA funders like Open Phil to come in and close the gap. But in doing so, we should understand why MacArthur and other funders are exiting this field and learn from them to figure out how to do better.
evelynciara (2mo): I misread this as "nuclear power", not "nuclear arms control" 😂

Out of all the ideas, this seems the most shovel-ready. 

MacArthur will (presumably) be letting go of some staff who do nuclear policy work, and would (presumably) be happy to share the organisations they've granted to in the past. So you have a ready-made research staff list + grant list.

All ("all" :) ) you need is a foundation and a team to execute on it. Seems like $100 million could actually be deployed pretty rapidly. 

Possibly not all of that money would meet EA standards of cost-effectiveness though - indeed MacArthur's withdrawal provides some evidence that it isn't cost effective (if we trust their judgement).

What EA projects could grow to become megaprojects, eventually spending $100m per year?

On Twitter I noted that when it comes to GCRs, it's hard to spend $100m on a policy research organisation. Note CSET was $55m over 5 years: in the ~$10m/year range. Open Phil's grants to CHS & NTI | bio are similar.

Anthropic raised $124m - so they might be the most recent EA megaproject.

Is effective altruism growing? An update on the stock of funding vs. people

How much funding is committed to effective altruism (going forward)? Around $46 billion.

For reference, the Bill & Melinda Gates Foundation is the second largest charitable foundation in the world, holding $49.8 billion in assets.

Though also note that most of Gates' and Buffett's wealth hasn't yet been put into the foundation.

Working in Parliament: How to get a job & have an impact

On the other hand, this isn't as much of a constraint in opposition. Political Advisors are like very senior parliamentary researchers - everyone's part of one (tiny!) team.

Draft report on existential risk from power-seeking AI

Oh and:

4. Cotra aims to predict when it will be possible for "a single computer program [to] perform a large enough diversity of intellectual labor at a high enough level of performance that it alone can drive a transition similar to the Industrial Revolution" - that is, a "growth rate [of the world economy of] 20%-30% per year if used everywhere it would be profitable to use"

Your scenario is premise 4: "Some deployed APS systems will be exposed to inputs where they seek power in unintended and high-impact ways (say, collectively causing >$1 trillion dollars of damage), because of problems with their objectives" (italics added). Your bar is (much?) lower, so we should expect your scenario to come (much?) earlier.

Draft report on existential risk from power-seeking AI

Hey Joe!

Great report, really fascinating stuff. Draws together lots of different writing on the subject, and I really like how you identify concerns that speak to different perspectives (eg to Drexler's CAIS and classic Bostrom superintelligence).

Three quick bits of feedback:

  1. I feel like some of Jess Whittlestone and collaborators' recent research would be helpful in your initial framing, e.g.
    1. Prunkl, C. and Whittlestone, J. (2020). Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society - on capability vs i
…
Joe_Carlsmith (6mo): Hi Haydn, thanks for your kind words, and for reading. 1. Thanks for pointing out these pieces. I like the breakdown of the different dimensions of long-term vs. near-term. 2. Broadly, I agree with you that the document could benefit from more about premise 5. I'll consider revising to add some. 3. I'm definitely concerned about misuse scenarios too (and I think lines here can get blurry -- see e.g. Katja Grace's recent post [https://aiimpacts.org/misalignment-and-misuse-whose-values-are-manifest/#:~:text=misuse%20means%20the%20bad%20outcomes,happened%20anyway%20due%20to%20error.]); but I wanted, in this document, to focus on misalignment in particular. The question of how to weigh misuse vs. misalignment risk, and how the two are similar/different more generally, seems like a big one, so I'll mostly leave it for another time (one big practical difference is that misalignment makes certain types of technical work more relevant). 4. Eventually, the disempowerment has to scale to ~all of humanity (a la premise 5), so that would qualify as TAI in the "transition as big of a deal as the industrial revolution" sense. However, it's true that my timelines condition in premise 1 (e.g., APS systems become possible and financially feasible) is weaker than Ajeya's.
What do you make of the doomsday argument?

Indeed. Seems supported by a quantum suicide argument - no matter how unlikely the observer, there always has to be a feeling of what-it's-like-to-be that observer.

https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality

AMA: Tom Chivers, science writer, science editor at UnHerd

It's worth adding that Stephen Bush and Jeremy Cliffe at the New Statesman both do prediction posts and review them at the end of each year. The meme is spreading! They're also two of the best journalists to follow on UK Labour politics (Bush) and EU politics (Cliffe) - if you're interested in those topics, as I am.

https://www.newstatesman.com/politics/staggers/2020/12/what-i-got-right-and-wrong-2020

https://www.newstatesman.com/international/places/2020/12/january-i-made-ten-predictions-2020-how-did-they-turn-out

Tom Chivers (7mo): Gah, I'm annoyed I didn't think of Stephen! A great journalist. I don't know Jeremy's work well but I've heard good things.
Is Democracy a Fad?

I think the closest things we've got to this are:

Luke Muehlhauser's work on 'amateur macrohistory' https://lukemuehlhauser.com/industrial-revolution/ 

Peter Turchin's (more academic) Seshat database: http://seshatdatabank.info/

Is Democracy a Fad?

I would say more optimistic. I think there's a pretty big difference between emergence (a shift from authoritarianism to democracy) and democratic backsliding, that is, autocratisation (a shift from democracy to authoritarianism). Once that shift has consolidated, there are lots of changes that make it self-reinforcing/path-dependent: norms and identities shift, economic and political power shifts, political institutions shift, the role of the military shifts. Some factors are the same for emergence and persistence, like wealth/growth, but some aren't (whi…

Is Democracy a Fad?

Interesting post! If you wanted to read into the comparative political science literature a little more, you might be interested in diving into the subfield of democratic backsliding (as opposed to emergence):

  • A third wave of autocratization is here: what is new about it? Lührmann & Lindberg 2019
  • How Democracies Die. Levitsky & Ziblatt 2018
  • On Democratic Backsliding. Bermeo 2016
  • Two Modes of Democratic Breakdown: A Competing Risks Analysis of Democratic Durability. Maeda 201
  • Authoritarian Reversals and Democratic Consol
…
Ben Garfinkel (7mo): Thanks for the reading list! I looked into the backsliding literature just a bit and had the initial impression it wasn't as relevant for long-run and system-wide forecasting. A lot of the work seemed useful for forecasting whether a particular country might backslide (e.g. how large a risk-factor is Trump in the US or Modi in India?), or for making medium-term extrapolations (e.g. has backsliding become more common over the past decade?). But I didn't see as clear of a way to use it to make long-run system-level predictions.

The point that democratic institutions tend to be naturally sticky does seem potentially important. I'm initially skeptical, though, that any inherent stickiness would be strong enough to keep democracy going for centuries if the conditions that allowed it to emerge disappear. It also seems like there should be heavy (although imperfect) overlap between factors that support the emergence of democracy and factors that support the persistence of democracy.

Out of curiosity, if you have a view, do you have the sense that the backsliding literature should make people substantially more or less optimistic about the future of democracy (relative to the views in this post)?
Response to Phil Torres’ ‘The Case Against Longtermism’

That's right, I think they should be higher priorities. As you show in your very useful post, Ord has nuclear and climate change at 1/1000 and AI at 1/10. I've got a draft book chapter on this, which I hope to be able to share a preprint of soon. 

Response to Phil Torres’ ‘The Case Against Longtermism’

I'm really sorry to hear that from both of you, I agree it's a serious accusation. 

For longtermism as a whole, as I argued in the post, I don't understand describing it as white supremacy - like e.g. antiracism or feminism, longtermism is opposed to an unjust power structure.

If you agree it is a serious and baseless allegation, why do you keep engaging with him? The time to stop engaging with him was several years ago. You had sufficient evidence to do so at least two years ago, and I know that because I presented you with it, e.g. when he started casually throwing around rape allegations about celebrities on Facebook and tagging me in the comments, and then calling me and others nazis. Why do you and your colleagues continue to extensively collaborate with him?

To reiterate, the arguments he makes are not sincere: he only makes them because he thinks the people in question have wronged him. 

Assessing Climate Change’s Contribution to Global Catastrophic Risk

Sorry it's taking a while to get back to you!

In the meantime, you might be interested in this from our Catherine Richards: https://www.cser.ac.uk/resources/reframing-threat-global-warming/ 

Assessing Climate Change’s Contribution to Global Catastrophic Risk

Thanks for the comment and these very useful links - will check with our food expert colleague and get back to you, especially on the probability question.

Just personally, however, let me note that we say that those four factors you mention are current 'sources of significant stress' for systems for the production and allocation of food - and we note that while 'global food productivity and production has increased dramatically' we are concerned about the 'vulnerability of our global food supply to rapid and global disruptions' and…

Halstead (8mo): The factors you mention therefore seem to increase vulnerability, but merely in the following sense:
  • Some of the factors don't seem relevant at all (phosphorous depletion).
  • The food system will be much less vulnerable in the future vs today despite these factors.
  • Some other event would have to do 99% of the work in bringing about a global food catastrophe.

Note also that the global catastrophe is the shock (hazard) plus how it cascades through interconnected systems with feedback. We're explicitly suggesting that the field move beyond 'is x a catastrophe?' to 'how does x affect critical systems, which can feed into one another, and may act more on our vulnerability and exposure than as a direct, single hazard'.

My understanding is that we all agree on that (I certainly do). 
It just seems that the direct risk to food security is overstated in the article.

Alternatives to donor lotteries

Interesting! I would feel I had been quasirandomly selected to allocate our shared pool of donations - and would definitely feel some obligation/responsibility.

As evidence that other people feel the same way, I would point to the extensive research and write-ups that previously selected allocators have done. A key explanation for why they've done that is a sense of obligation/responsibility for the group.

Larks (8mo): I don't think the research is much evidence here. The whole point of the donor lottery is that the winner can justify doing a lot more research. This would be the case even if they hated the other entrants. You're right that they wouldn't necessarily have to share that research, but many people enjoy posting on the forum anyway. Previously Jonas has been at pains to clarify that such reports are not required [https://forum.effectivealtruism.org/posts/Zk9gz7yAdFiT2Mn9b/2018-19-donor-lottery-report-pt-1?commentId=PTPmu9iyqLKnbcKKe#comments].
What is the likelihood that civilizational collapse would directly lead to human extinction (within decades)?

As others have said, great piece! Well argued and evidenced and on an important and neglected topic. I broadly agree with your point estimates for the three cases. 

I think it might be worth saying a bit more (perhaps in a separate section near the top) about why your estimates of survival are not higher. What explains the remaining 0.01-0.3 uncertainty? How could it lead 'directly' to extinction? In different sections you talk about WMD, food availability etc, but I would have found it useful to have all that together. That would allow you to address…

Alternatives to donor lotteries

Your policy seems reasonable, although I wonder if the analogy with a regular lottery might risk confusing people. When one thinks of "entering a regular lottery for charitable giving", one might think of additional money - money that counterfactually wouldn't have gone to charity. But that's not true of donor lotteries - there is no additional money.

On your second point: "making requests to pool money in a way that rich donors expect to lose control" describes the EA Funds, which I don't think are a scam. In fact, the EA Funds pool money in such a way that donors are certain to lose control.

Alternatives to donor lotteries

Hey thanks for the comment!

As mentioned, I'm offering a bunch of alternatives - not all of which I support - to help us examine our current system. 'Reverse-donation-weighted' in particular is more of a prompt to "why do we think donation-weighting is normal or unproblematic - what might we be missing out on or reinforcing with donation-weighting?" 

Note that the current 'donor lottery' is a form of random donor pooling - but with donation-weighting. I see donation weighting as a weird halfway house between EA Funds and (threshold) Random Pooling. With…

Alternatives to donor lotteries

I'm sure you would be just as happy entering a regular lottery - you're one of the few people that could approach the ideal I mentioned of the "perfect rational maximising Homo economicus"!

For us lesser mortals though, there are two reasons we might be queasy about entering a regular lottery. First, if we're cautious/risk-sensitive - if we have a bias towards our donations being likely to do good. We might not feel comfortable being risk-neutral and just doing the expected value calculation. Second, if we're impatient/time-sensitive - for examp…

JP Addison (8mo): I disagree with the implication that so few people are interested in the dominance consideration. At least among my social network, both EA and not, people are really interested in the donor lottery, and I present it in that framing. The idea of trying to maximize one's impact with one's donation is inherently a bit against people's natural instincts, but somehow EA has taken off anyway. That aside however, I really like some of the ideas here, and wouldn't be surprised if there were something compelling in between "random lottery" and "static fund managers".

I guess I wouldn't recommend the donor lottery to people who wouldn't be happy entering a regular lottery for their charitable giving (but I would usually recommend them to be happy with that regular lottery!).

Btw, I'm now understanding your suggestions as not really alternatives to the donor lottery, since I don't think you buy into its premises, but alternatives to e.g. EA Funds.

(In support of the premise of respecting individual autonomy about where to allocate money: I think that making requests to pool money in a way that rich donors expect to lose co…

Alternatives to donor lotteries

Thanks for your comment. I'm not entirely sure I understand what you mean by dominant action, so if you don't mind saying more about that I'd appreciate it.

My confusion is something like: there's no new money out there! It's a group of donors deciding to give individually or give collectively. So the perspective of "what will lead to optimal allocation of resources at the group level?" is the right one. Even if people are taking individual actions comparing 'donate to x directly' or 'donate to a lottery, then to x', those individual decisions create a collec…

By dominant action I mean "is ~at least as good as other actions on ~every dimension, and better on at least one dimension".

My confusion is something like: there's no new money out there! It's a group of donors deciding to give individually or give collectively. So the perspective of "what will lead to optimal allocation of resources at the group level?" is the right one.

I don't think donor lotteries are primarily about collective giving. As a donor lottery entrant, I'd be just as happy giving $5k for a 5% chance of controlling a $100k pot of pooled winning…
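A quick gloss on the expected-value logic here (my arithmetic, not the commenter's): a donation-weighted lottery preserves each entrant's expected allocation, whatever the rest of the pot is.

```latex
% Expected pot controlled by an entrant who puts $5k
% into a $100k donation-weighted pool:
P(\text{win}) = \frac{5{,}000}{100{,}000} = 0.05, \qquad
\mathbb{E}[\text{allocation}] = 0.05 \times \$100\text{k} = \$5\text{k}
```

So in expectation the entrant directs exactly what they put in; the lottery only concentrates who does the allocating.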

2020 AI Alignment Literature Review and Charity Comparison

[Disclosure: I work for CSER]

I completely agree that BERI is a great organisation and a good choice. However, I will also just briefly note that FHI, CHAI and CSER (like any academic groups) are always open to receiving donations:

FHI: https://www.fhi.ox.ac.uk/support-fhi/

CSER: https://www.philanthropy.cam.ac.uk/give-to-cambridge/centre-for-the-study-of-existential-risk?table=departmentprojects&id=452 

CHAI: If you wanted to donate to them, here is the relevant web page. Unfortunately it is apparently broken at time of writing - they tell me any don…

aaguirre (10mo): Thanks Haydn! FLI also is quite funding constrained, particularly on technical-adjacent policy research work, where in my opinion there is going to be a lot of important research and a dearth of resources to do it. For example, the charge to NIST to develop an AI risk assessment framework, just passed in the US NDAA, is likely to be extremely critical to get right. FLI will be working hard to connect technical researchers with this effort, but is very resource-constrained. I generally feel that the idea that AI safety (including research) is not funding constrained is an incorrect and potentially dangerous one — but that's a bigger topic for discussion.
Why those who care about catastrophic and existential risk should care about autonomous weapons

FYI if you dig into AI researchers' attitudes in surveys, they hate lethal autonomous weapons and really don't want to work on them. Will dig up reports, but for now check out: https://futureoflife.org/laws-pledge/

Indeed, the survey by CSET linked above is somewhat frustrating in that it does not directly address autonomous weapons at all. The closest it comes is to talk about the "US battlefield" and "global battlefield", but the specific applications surveyed are:

U.S. Battlefield -- As part of a larger initiative to assist U.S. combat efforts, a DOD contract provides funding for a project to apply machine learning capabilities to enhance soldier effectiveness in the battlefield through the use of augmented reality headsets. Your company has relevant expertise

…
4 Years Later: President Trump and Global Catastrophic Risk

Thanks Pablo, yes it's my view too that Trump was miscalibrated and showed poor decision-making on Ebola and COVID-19, because of his populism and disregard for science and international cooperation.

4 Years Later: President Trump and Global Catastrophic Risk

Thanks Stefan, yes this is my view too: "default view would be that it says little about global trends in levels of authoritarianism". I simply gave a few illustrative examples to underline the wider statistical point, and highlight a few causal mechanisms (e.g. demonstration effect, Bannon's transnational campaigning).

4 Years Later: President Trump and Global Catastrophic Risk

Hi Dale,

Thanks for reading and responding. I certainly tried to review the ways Trump had been better than the worst case scenario: e.g. on nuclear use or bioweapons. Let me respond to a few points you raised (though I think we might continue to disagree!)

Authoritarianism and pandemic response - I'll comment on Pablo's and Stefan's comments. However, just on social progress, my point was just 'one of the reasons authoritarianism around the world is bad is that it limits social progress' - I didn't make a prediction about how social progress would fare under…

4 Years Later: President Trump and Global Catastrophic Risk

Hi Ian, 

Thanks for the update on your predictions! Really interesting points about the political landscape.

On your point 1 + authoritarianism, I agree with lots of your points. I think four years ago a lot of us (including me!) were worried about Trump and personal/presidential undermining of the rule of law/norms/democracy, enabled by the Republicans; when we should have been as worried about a general minoritarian push from McConnell and the rest of the Republicans, enabled by Trump.

On climate change, my intention wasn't to imply stasis/inaction over rolling back - I do agree things have gotten worse, and your examples of the EPA and the Dept of the Interior make that case well.

EA Organization Updates: September 2020

Reading this was so inspiring and cool!

I think we could probably add a $25m pro-Biden ad buy from Dustin Moskovitz & Cari Tuna, and Sam Bankman-Fried.

https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valley

Avoiding Munich's Mistakes: Advice for CEA and Local Groups

[minor, petty, focussing directly on the proposed subject point]

In this discussion, many people have described the subject of the talk as "tort law reform". This risks sounding technocratic or minor.

The actual subject (see video) is a libertarian proposal to replace the entirety of the criminal law system with a private, corporate system with far fewer limits on torture and constitutional rights. While neglected, this proposal is unimportant (and worse, actively harmful) and completely intractable.

The 17 people who were interested in attending didn't miss out on hearing about the next great cause X.

Avoiding Munich's Mistakes: Advice for CEA and Local Groups

I think I have a different view on the purpose of local group events than Larks. They're not primarily about exploring the outer edges of knowledge, breaking new intellectual ground, discovering cause X, etc.

They're primarily about attracting people to effective altruism. They're about recruitment, persuasion, raising awareness and interest, starting people on the funnel, deepening engagement etc etc.

So it's good not to have a speaker at your event who is going to repel the people you want to attract.

Correlations Between Cause Prioritization and the Big Five Personality Traits

New paper: Personality and moral judgment: Curious consequentialists and polite deontologists https://psyarxiv.com/73bfv/

"We have provided the first examination of how the domains and aspects of the Big Five traits are linked with moral judgment.

In both of our studies, the intellect aspect of openness/intellect was the strongest predictor of consequentialist inclinations after holding constant other personality traits. Thus, intellectually curious people—those who are motivated to explore and reflect upon abstract ideas—are more…

David_Moss (1y): This is probably mentioned in the paper, but the Cognitive Reflection Test [https://en.wikipedia.org/wiki/Cognitive_reflection_test] is also associated with utilitarianism [https://onlinelibrary.wiley.com/doi/full/10.1111/j.1551-6709.2011.01210.x#b24], Need for Cognition [https://en.wikipedia.org/wiki/Need_for_cognition] is associated with utilitarianism [http://www.bertramgawronski.com/documents/CG2013JPSP.pdf], Actively Open-minded Thinking [http://www.sjdm.org/dmidi/Actively_Open-Minded_Thinking_Beliefs.html] is associated with utilitarianism [https://www.sciencedirect.com/science/article/pii/S2211368114000801], and numeracy is associated with utilitarianism [https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00532/full]. Note that I don't endorse all of these papers' conclusions (for one thing, some are using the simple 'trolley paradigm' which I think likely isn't capturing utilitarianism very well [https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4642180/]). Notably, in the EA Survey where we measured Need for Cognition, respondents scored ludicrously highly, with the maximum response for each item being the modal response.
Stefan_Schubert (1y): Interesting. It may be worth noting how support for consequentialism is measured in this paper.
AI Governance: Opportunity and Theory of Impact

Thanks for this, I found this really useful! Will be referring back to it quite a bit I imagine.

I would say researchers working on AI governance at the Centre for the Study of Existential Risk and the Leverhulme Centre for the Future of Intelligence, University of Cambridge (where I work) would agree with a lot of your framing of the risks, pathways, and theory of impact.

Personally, I find it helpful to think about our strategy under four main points (which I think has a lot in common with the 'field-building model'):

1. Understand - study and bet…
