All of Benjamin Hilton's Comments + Replies

Lifeguards

This is a great story! Good motivational content.

But I do think, in general, a mindset of "only I can do this" is inaccurate and has costs. There are plenty of other people, and other communities, in the world attempting to do good, and often succeeding. I think EAs have been a small fraction of the success in reducing global poverty over the last few decades, for example.

Here are a few costs that seem plausible to me:

  • Knowing when and why others will do things significantly changes estimates of the marginal value of acting. For example, if you are st

…

I really like these nuances. I think one of the problems with the drowning child parable, and early EA thinking more generally, was (and to a large extent still is) its narrow focus on the actions of the individual.

It's definitely easier and more accurate to model individual behavior, but I think we (as a community) could do more to improve our models of group behavior even though it's more difficult and costly to do so. 

EA can sound less weird, if we want it to

This does seem to be an important dynamic.

Here are a couple of reasons this might be wrong (both sound vaguely plausible to me):

  1. If someone being convinced of a different non-weird version of an argument makes it easier to convince them of the actual argument, you end up with more people working on the important stuff overall.
  2. If you can make things sound less weird without actually changing the content of what you're saying, you don't get this downside. (This might be pretty hard to do, though.)

(1) is particularly important if you think this "non-weird to weird" ap…

Rohin Shah:
I agree with both of those reasons in the abstract, and I definitely do (2) myself. I'd guess there are around 50 people total in the world who could do (2) in a way where I'd look at it and say that they succeeded (for AI risk in particular), of which I could name maybe 20 in advance. I would certainly not be telling a random EA to make our arguments sound less weird. I'd be happy about the version of (1) where the non-weird version was just an argument that people talked about, without any particular connection to EA / AI x-risk. I would not say "make EA sound less weird", I'd say "one instrumental strategy for EA is to talk about this other related stuff".
Software engineering - Career review

That's not the intention, thanks for pointing this out!

To clarify, by "route", I mean gaining experience in this space through working in engineering roles directly related to AI. Where those roles do not specifically involve safety work, it's important to consider any downside risk that could result from advancing general AI capabilities (this will in general vary a lot across roles and can be very difficult to estimate).

Software engineering - Career review

A bit of both - but you're right, I primarily meant "secure" (as I expect this is where engineers have something specific to contribute).