Disclaimer: This is a short post.

 

I was recently asked this on Twitter and realised I didn't have good answers.

In x-risk I'll include AI risk, biorisk and nuclear risk. I'll count both technical and governance work, the other forms of work that indirectly help produce these, and any other work that helps reduce x-risk. I'll also include work that reduces s-risk.

 

The answers I can think of are increased field-building and field-building capacity, (some) increased conceptual clarity of research, and increased capital available. None of these are very legible; there is a sense in which they feel like precursors to real work rather than real work itself. I can't point to actual percentage-point reductions in x-risk today that EA is responsible for. I can't point to projects that exist in the real world where a layman can look at them for ten seconds and realise they're significant and useful results. Whereas I can do that for work in global health, for instance. And many non-EA movements also have achievements they can point to.

(This is not to discount the value of such work, or to claim EA is currently acting suboptimally. It is possible that the optimal thing to do is to spend years on illegible work before obtaining legible results.)

 

Am I correct in my view of current achievements, or is there something I'm missing? I would also love to be linked to other resources.


5 Answers

Daniel Kirmani

Jun 14, 2022


I might've slightly decreased nuclear risk. I worked on an Air Force contract where I trained neural networks to distinguish between earthquakes and clandestine nuclear tests given readings from seismometers.
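For readers curious what this kind of work looks like in practice, below is a minimal sketch of such a waveform classifier. It is not the contract code, and the architecture, sample rate, window length and synthetic data are all illustrative assumptions.

```python
# Minimal sketch of a seismic-event classifier (earthquake vs. explosion).
# Not the contract code; the 40 Hz sample rate, 60 s windows, architecture
# and random stand-in data are illustrative assumptions only.
import torch
import torch.nn as nn

SAMPLE_RATE_HZ = 40           # assumed seismometer sample rate
WINDOW_SECONDS = 60           # assumed length of each waveform window
N_SAMPLES = SAMPLE_RATE_HZ * WINDOW_SECONDS

class SeismicClassifier(nn.Module):
    """1-D CNN mapping a single-channel waveform window to a logit for 'explosion'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=15, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):  # x: (batch, 1, N_SAMPLES)
        return self.net(x)

# Toy training loop on random stand-in data; real work would use labelled
# waveforms from monitoring networks, with careful preprocessing.
model = SeismicClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

waveforms = torch.randn(64, 1, N_SAMPLES)       # fake waveform windows
labels = torch.randint(0, 2, (64, 1)).float()   # fake labels: 1 = explosion

for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(waveforms), labels)
    loss.backward()
    optimizer.step()
```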

The point of this contract was to aid in the detection (by the Air Force and the UN) of secret nuclear weapon development by signatories to the UN's Comprehensive Test Ban Treaty and the Nuclear Non-Proliferation Treaty. (So basically, Iran.) The existence of such monitoring was intended to discourage "rogue nations" (Iran) from developing nukes.

That being said, I don't think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war. Also, it's not clear that my contribution to the contract actually increased the strength of the deterrent to Iran. However, if (a descendant of) my model ends up being used by NATO, perhaps I helped out by decreasing the chance of a false positive.

Disclaimer: This was before I had ever heard of EA. Still, I've always been somewhat EA-minded, so maybe you can attribute this to proto-EA reasoning. When I was working on the project, I remember telling myself that even a very small reduction in the odds of a nuclear war happening meant a lot for the future of mankind.

"That being said, I don't think an Iran-Israel exchange would constitute an existential risk, unless it then triggered a global nuclear war."

I wouldn't sell yourself short. IMO, any nuclear exchange would dramatically increase the probability of a global nuclear war, even if the probability is still small by non-xrisk standards.

Thank you for your work!

[anonymous]

Thanks for this anecdote!

Given the scarcity of such successes, I think people here would be interested in hearing a longer-form version of this. Just wanted to suggest it!

Denkenberger

Jun 18, 2022


I'm not sure how legible they are, but there are indications that work so far on resilience to global catastrophic agricultural risks, and on resilience to catastrophes involving loss of electricity/infrastructure, has reduced x-risk.

Rohin Shah

Jun 19, 2022


The AI safety community has gotten people to do reinforcement learning from human feedback (rather than automated reward functions) sooner than it would otherwise have happened.

There are lots of subtleties about whether this reduced x-risk or not, but I think it did.
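To make the contrast with automated reward functions concrete, here is a minimal, generic sketch of the human-feedback ingredient: a reward model trained on pairwise human preferences, which then stands in for a hand-coded reward when fine-tuning a policy with RL. This is an illustration only, not any particular lab's implementation; the feature dimension and data below are made up.

```python
# Generic sketch of a reward model trained from pairwise human preferences,
# the core ingredient of RLHF. Feature dimension and data are placeholders.
import torch
import torch.nn as nn

FEATURE_DIM = 128  # assumed embedding size of a (prompt, response) pair

reward_model = nn.Sequential(nn.Linear(FEATURE_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Fake data: embeddings of two responses to the same prompt, where a human
# preferred "chosen" over "rejected".
chosen = torch.randn(32, FEATURE_DIM)
rejected = torch.randn(32, FEATURE_DIM)

for _ in range(5):
    optimizer.zero_grad()
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Pairwise (Bradley-Terry style) loss: push preferred responses to higher
    # reward. The learned reward model then replaces a hand-coded reward
    # function when fine-tuning a policy with RL (e.g. PPO).
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    loss.backward()
    optimizer.step()
```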

[anonymous]

Thanks for replying. I'm not sure this satisfies the criterion of "legible" as I was imagining it, since I buy most of the AI risk arguments and still feel poorly equipped to evaluate how important this was. But I do not have sufficient ML knowledge; perhaps it was legible to people who do.

P.S. If it doesn't take too much of your time, I would love to know if there's any discussion of why this was significant for x-risk, say on the Alignment Forum. I found the paper and the OpenAI blog post but couldn't find discussion. (If it would take time I totally understand; I will try finding it myself.)

Rohin Shah

I don't think there's great public discussion of that.

[anonymous]

I understand, thanks again.

Ulrik Horn

Jun 15, 2022


Not sure how relevant this is to the criterion of legibility, but within the AI domain FHI and FLI have in a short time become more influential than the likes of Chatham House and RAND Corporation (the world's 6th and 7th most influential think tanks overall, founded in 1920 and 1948). I had previously thought, naively and pessimistically, that our EA organisations did not have much traction in the policy-influence sphere, but this report from the University of Pennsylvania ranking think tanks by influence went some way towards changing my mind about the achievements of EA organizations.

[anonymous]

Thanks, this definitely seems helpful!

HaydnBelfield

Jun 14, 2022


There's a whole AI ethics and safety field that would have been much smaller and less influential.

From my paper Activism by the AI Community: Analysing Recent Achievements and Future Prospects:

"2.2 Ethics and safety 

There has been sustained activism from the AI community to emphasise that AI should be developed and deployed in a safe and beneficial manner. This has involved Open Letters, AI principles, the establishment of new centres, and influencing governments. 

The Puerto Rico Conference in January 2015 was a landmark event to promote the beneficial and safe development of AI. It led to an Open Letter signed by over 8,000 people calling for the safe and beneficial development of AI, and a research agenda to that end [21]. The Asilomar Conference in January 2017 led to the Asilomar AI Principles, signed by several thousand AI researchers [23]. Over a dozen sets of principles from a range of groups followed [61]. 

The AI community has established several research groups to understand and shape the societal impact of AI. AI conferences have also expanded their work to consider the impact of AI. New groups include: 

  • OpenAI (December 2015)
  • Centre for Human-Compatible AI (August 2016)
  • Leverhulme Centre for the Future of Intelligence (October 2016)
  • DeepMind Ethics and Society (October 2017)
  • UK Government’s Centre for Data Ethics and Innovation (November 2017)"
[anonymous]

Thanks for your reply! I'll see if I can convince people using this.

(Also, a very small point, but the PDF title says "Insert your title here", at least when viewed in Chrome.)

Comments
Linch

EAs have legible achievements in x-risk-adjacent domains (e.g. a highly cited covid paper in Science, and Reinforcement Learning from Human Feedback, which was used to power stuff like InstructGPT), and illegible achievements in stuff like field-building and disentanglement research.

However, the former doesn't have a clean connection to actually reducing x-risk, and the latter isn't very legible.

So I think it is basically correct that we have not done legible things to reduce object-level x-risk, like causing important treaties to be signed, banning gain-of-function research in some countries, engineering the relevant technical defenses, etc.

See this post by Owen Cotton-Barratt as well.

[anonymous]

Thanks for your reply!

This makes sense. The linked post at the end was useful and new to me.

Not an answer, just wanting to say thank you for asking this question! The same question had been percolating in my mind for some time, but I couldn't quite put it into words, and you did so perfectly. Thank you!

[anonymous]

No problem!