
Consider me a non-technical, AI-ignorant EA who has specialized in other areas. Consider that I want to do the most good, so I hear about AI and how all the funding and the community is now directed towards AI and how it is the most impactful thing. However, as an EA, I'm interested in evidence of impact before making any decisions. Can you link to any research paper (understandable by someone who is not skilled in ML), any forum post, or any book (that would be the best option!) to show me evidence that AI is indeed the most impactful thing to work on?

I listened to Tom Davidson's 80,000 Hours podcast episode, I know about Kat Woods's famous parable of the boy and the wolf, and I know about the 5 percent chance of human extinction. I read The Precipice. However, I still find myself reluctant to put AI as my priority despite knowing these things. As an EA raised in the Oxford tradition, I have an urge to defer, but rationally I am not convinced. Since community epistemics are considered strong on this subject, I would expect the arguments to be accessible to people who do not have the technical background to evaluate the evidence themselves.

PS: Sorry for the many reposts; my internet connection is acting up and I thought the question hadn't been sent!

Answers

[edit: Fixed the link for Stuart Russell's book; it initially pointed to a book by Brian Christian rather than Russell's Human Compatible.]

  1. Cold Takes is a generally good blog by Holden Karnofsky that lays out the argument for why AI could be transformative and the kinds of jobs that could help with that.
  2. For papers, I think Richard Ngo's paper is really good as an overview of the field from a deep learning perspective.
  3. For other posts, I found that Ajeya Cotra's posts on TAI timelines have been really important in shaping a lot of people's views on when it might happen.
  4. For books, Stuart Russell's Human Compatible is accessible to non-technical audiences.

Thanks! I'll check them all.

Comments

I'm not going to directly answer your question, but if you do want a suggestion I'd recommend Stuart Russell's book Human Compatible. It's very readable, covers AI history as well as the arguments for being concerned about risk, and Russell literally (co-)wrote the textbook on AI, so he has impeccable credentials.

so I hear about AI and how all the funding and the community is now directed towards AI and how it is the most impactful thing.

Can I ask where you heard this? The evidence we have suggests this is not true in terms of funding. AI safety has come to be seen as increasingly impactful, but there's plenty of disagreement in the community about how impactful it actually is.

As an EA raised in the Oxford tradition, I have an urge to defer, but rationally I am not convinced.

Don't defer!! If you've done some initial research and reading, and you're not convinced, then it's absolutely fine to not be convinced!

Given that you've said you're a non-technical EA who wants to do the most good but isn't inspired or convinced by AI, don't force yourself into the field of AI! What field would you like to work in, and what are your unique skills and experience? Then see whether you can apply them to any number of EA cause areas rather than technical AI safety.

Thank you very much for the evidence about the funding. Open Philanthropy has caught up remarkably, and I expect many more donors towards longtermism in the future; GiveWell is excellent, but it remains a single source, and the risk that its funding decreases or stops flowing as much as before remains, since it's more difficult to get funding when there is only one source of it.

I was indeed wrong to say that longtermism was the most funded area; however, I wouldn't be surprised if this data changed very fast and the trend reversed next year, given the current push from the top and the halo effect around longtermism right now.

I don't want to force myself, but as a community builder, I have to take the leap. Hence my need to understand better how I can get people on board with this.

I'm open to there being new evidence on funding, but I'd also want to make a distinction between existential risk and longtermism as reasons for funding. I could reject the 'Astronomical Waste' argument and still think that preventing the worst impacts of Nuclear War/Climate Change from affecting the current generation held massive moral value and deserved funding.

As for being a community builder, I don't have experience there, but I guess I'd make some suggestions/distinctions:

  • If you have a co-director for the community in question who is more AI-focused, perhaps split responsibilities along cause area lines
  • Be open about your personal position (i.e. being unpersuaded about the value of work on AI risk), but separate that from your role as a community builder, where you introduce the various major cause areas (including AI) and present the arguments for and against each

I don't think you should have to update or defer your own views in order to be a community builder at all, and I'd encourage you to hold on to that feeling of being unconvinced.

Hope that helps! :)

However, I still find myself reluctant to put AI as my priority despite knowing these things.

One way out is simply not to make AI your own, personal priority (versus, say, "the wider EA community's priority", which is a separate question altogether). 80,000 Hours' problem profiles page, for instance, explicitly says that their list of the most pressing world problems, where AI risk features at the top, is

ranked roughly by our guess at the expected impact of an additional person working on them, assuming your ability to contribute to solving each is similar

which is already an untrue assumption, as they clarify in their problem framework:

While personal fit is not assessed in our problem profiles, it is relevant to your personal decisions. If you enter an area that you find totally demotivating, then you’ll have almost no impact. 

Given the ostensible reluctance in your post, I'm not sure that you yourself should make AI safety work your top priority (although you can still e.g. donate to the Long-Term Future Fund, one of GWWC's top recommendations in this area, and read Holden's writing and discuss it with others, and so on, none of which require such drastic re-prioritization).  

Also, since other commenters / answerers will likely supply materials in support of prioritizing AI safety, for the sake of good epistemics I think it's worth signal-boosting a good critique of it, so consider checking out Nuno Sempere's My highly personal skepticism braindump on existential risk from artificial intelligence.
