Benjamin Hilton

Benjamin is a research analyst at 80,000 Hours. Before joining 80,000 Hours, he worked for the UK Government and did some economics and physics research.

Comments

Lifeguards

This is a great story! Good motivational content.

But I do think, in general, a mindset of "only I can do this" is inaccurate and has costs. There are plenty of other people, and other communities, in the world attempting to do good, and often succeeding. I think EAs have been a small fraction of the success in reducing global poverty over the last few decades, for example.

Here are a few costs that seem plausible to me:

  • Knowing when and why others will do things significantly changes estimates of the marginal value of acting. For example, if you are starting a new project, it's reasonably likely that even if your idea is completely new, other people are in a similar epistemic situation to you and will soon stumble upon the same idea. So to estimate your counterfactual impact, you might want to estimate how much earlier something will occur because you made it occur, rather than purely the impact of the thing occurring (see the toy sketch after this list). More generally, neglectedness is a key part of estimating your marginal impact - and estimating neglectedness relies heavily on an understanding of what others are focusing on, and usually at least a few people are doing things in a similar space to you.

  • Also, knowing when and why others will do things affects strategic considerations. The fact that few non-EAs are working in many of the places where we now try to do good is a result of our attempts to find neglected areas. But - especially in the case of x-risk - we can expect others to begin doing good work in these areas as time progresses (see e.g. AI discussions around warning shots). The extent to which this is the case affects what is valuable to do now.
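
As a rough illustration of the "how much earlier" framing, here's a minimal toy sketch. The numbers and function names are purely illustrative assumptions of mine, not estimates of anything real - the point is just how different the two framings can be when someone else would have done the project anyway, only later.

```python
# Toy comparison of two ways to count your counterfactual impact,
# assuming someone else would have started the same project later anyway.

def naive_impact(value_per_year: float, project_lifetime_years: float) -> float:
    """Impact if you treat the entire project as counterfactual."""
    return value_per_year * project_lifetime_years

def speedup_impact(value_per_year: float, years_brought_forward: float) -> float:
    """Impact if your contribution is making the project happen earlier."""
    return value_per_year * years_brought_forward

value_per_year = 100      # arbitrary units of good per year (made up)
project_lifetime = 20     # years the project delivers value (made up)
years_earlier = 2         # how much sooner it happens because of you (made up)

print(naive_impact(value_per_year, project_lifetime))   # 2000
print(speedup_impact(value_per_year, years_earlier))    # 200 - an order of magnitude smaller
```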

EA can sound less weird, if we want it to

This does seem to be an important dynamic.

Here are a couple of reasons this might be wrong (both sound vaguely plausible to me):

  1. If someone being convinced of a different, non-weird version of an argument makes it easier to convince them of the actual argument, you end up with more people working on the important stuff overall.
  2. If you can make things sound less weird without actually changing the content of what you're saying, you don't get this downside (though this might be pretty hard to do).

(1) is particularly important if you think this "non-weird to weird" approach will appeal to a set of people who wouldn't otherwise end up agreeing with your arguments. That would mean it has a high counterfactual impact - even if some of those people end up doing work that, while still good, is ultimately far less relevant to x-risk reduction. This is even more true if you think that few of the people who would have listened to your weirder-sounding arguments in the first place will get "stuck" at the non-weird stuff and, as a result, never do useful things.

Software engineering - Career review

That's not the intention, thanks for pointing this out!

To clarify, by "route" I mean gaining experience in this space through engineering roles directly related to AI. Where those roles are not specifically focused on safety, it's important to try to consider any downside risk that could result from advancing general AI capabilities (this will in general vary a lot across roles and can be very difficult to estimate).

Software engineering - Career review

A bit of both - but you're right, I primarily meant "secure" (as I expect this is where engineers have something specific to contribute).