
Regarding AI safety, of course.

The detailed version of this question is: "Is AI safety sufficiently talent-constrained, and on an imminent enough timeline, that I and any friends I can convince, all of us ordinary non-Ivy-League, non-top-tier computer scientists, mathematicians, and programmers (including students), should drop everything (including things you are personally uncomfortable telling people to drop out of) and apply for every grant out there?"

I would also accept "If you are not very talented, go do something else and let us handle it", or "If you're unsure of your talent, here is a link to an online test or open application thing that will give you good evidence one way or the other".

3 Answers

I do not think it is crunch time. I think people in the reference class you're describing should go with some "normal" plan such as getting into the best AI PhD program you can get into, learning how to do AI research, and then working on AI safety.

(There are a number of reasons you might do something different. Maybe you think academia is terrible and PhDs don't teach you anything, and so instead you immediately start to work independently on AI safety. That all seems fine. I'm just saying that you shouldn't make a change like this because of a supposed "crunch time" -- I would much prefer having significantly better help in 5 or 10 years, rather than not-very-good help now.)

That said, I feel confident that some other AI safety researchers would say it is crunch time or very close to it, though I expect they are a minority (i.e., < 50%).

I do think it is probably crunch time, but I agree with what Rohin said here about what you should do for now (and about my minority status). Skilling up (not just in technical specialist stuff, but also in your understanding of the problem we face, the literature, etc.) is what you should be doing. For what I think should be done by the community as a whole, see this comment.

kokotajlod:
Update: A friend of mine read this as me endorsing doing PhDs and was surprised. I do not generally endorse doing PhDs at this late hour. (However, there are exceptions.) What I meant to say is that skilling up / learning is what you should be doing, for now at least. Maybe a PhD is the best way to do that, but maybe not -- it depends on what you are trying to learn. I think working as a research assistant at an EA org would probably be a better way to learn than doing a PhD, for example. If you aren't trying to do research, but are instead trying to contribute by e.g. building a movement, maybe you should be out of academia entirely, gaining practical experience building movements or running political campaigns.

Yes, I think it's crunch time.

But I'd be very hesitant to advocate, in general, for people to sacrifice more in order to work hard on the most urgent problems. People vastly overestimate the stability of their motivation and mental life. If you plan your life on the assumption that you'll always be as motivated as you are right now, you'll probably achieve less than if you take some precautions.

I'd say plan for at least 20 years of productivity. This means building relationships with people who support you, investing in good down-time activities that keep you refreshed, and not burning yourself out. Be ambitious! Test your limits until you crash, but make sure you can recover and learn from it rather than taking permanent damage.

On average, do I think EAs would do better with more self-sacrifice or less? It varies, and it's important enough that I think advice should be more granular than just "do more".

When considering self-sacrifice, it is also important to weigh the effects on other people. That is, every person who "sacrifices something for the cause" strengthens the perception that "if you want to work on this, you need to give things up", which might in turn put people off joining the cause in the first place. So even if the sacrifice increases that one person's productivity, the total effect might still be negative.

"People vastly overestimate the stability of their motivation and mental life" -- even when you take into account Hofstadter's Law. Seems very likely in my case.

The rest was helpfully calibrating, thank you.

My answer to the detailed version of the question is "unsure... probably no?": I would be extremely wary of reputation effects and of how AI safety is perceived as a field. As a result, getting as many people as we can to work on this might prove not to be the right approach.

For one, getting AI to be safe is not only a technical problem: apart from figuring out how to make AI safe, we also need to get whoever builds it to adopt our solution. Second, classical academia might prove important for safety efforts. If we are being realistic, we need to admit that the prestige associated with a field affects which people get involved with it. Thus, there may be a point where the costs of bringing more people in on the problem outweigh the benefits.

Note that I am not saying anything like "anybody without an Ivy League degree should just forget about AI safety". Just that there are both costs and benefits associated with working on this, and everybody should weigh them before making major decisions (and in particular before doing outreach).

Comments

One way to collect some "answers" to this is to observe the behavior of organizations that do a lot of work on this problem. For example:

  • Open Philanthropy still makes grants in several different longtermist areas
  • 80,000 Hours lists biorisk as a priority on the same "level" as AI alignment
  • FLI is using their massive Vitalik Buterin donation to fund PhD and postdoctoral fellowships, which seems to imply that "drop everything" doesn't mean e.g. "drop out of school" or "start thinking exclusively about alignment even if it tanks your grades"

Biorisk needs people who are good with numbers and computers. EA community building needs people who are good with computers (there's a lot of good software to be built, websites to be designed, etc.)

To keep the scope of my analysis limited, I'm not even going to mention the dozens of other priorities that someone with the right skills + interests might be better off pursuing. But it at least does not seem to be the consensus that "crunch time" is here, even among people who think about the problem quite a lot.

That said, I would never turn away anyone who wants to work on alignment, and I think anyone with related skills should strongly consider it as an area to explore + take seriously. That's the pitch I'd be making if I were in college, alongside messages like (paraphrased a lot):

  • "This seems like a good way to end up working on something historically significant, in a way that probably won't happen if you join Facebook instead."
  • "If you want to do this, there's a good chance you'll have unbelievable access to mentorship and support from top people in the field, which... probably won't happen if you join Facebook instead." (As a non-programmer, I don't know whether this is true, but I'd guess that it is.)

Of course, some organizations and people that do a lot of work on this problem would say that it is, in fact, crunch time. If someone decides to explore the area, "it's crunch time" is a hypothesis they should consider. I just don't think it should be their default assumption, or your default pitch.
