This does seem to be an important dynamic.
Here are two reasons this might be wrong (both sound vaguely plausible to me):
(1) is particularly important if you think this "non-weird to weird" ap...
That's not the intention, thanks for pointing this out!
To clarify, by "route", I mean gaining experience in this space through engineering roles directly related to AI. Where those roles are not specifically safety-focused, it's important to consider any downside risk from advancing general AI capabilities (this will vary a lot across roles and can be very difficult to estimate).
A bit of both - but you're right, I primarily meant "secure" (as I expect this is where engineers have something specific to contribute).
This is a great story! Good motivational content.
But I do think, in general, a mindset of "only I can do this" is inaccurate and has costs. There are plenty of other people, and other communities, in the world attempting to do good, and often succeeding. EAs have been responsible for only a small fraction of the progress in reducing global poverty over the last few decades, for example.
Here are a few costs that seem plausible to me:
Knowing when and why others will do things significantly changes estimates of the marginal value of acting. For example, if you are st
I really like these nuances. I think one of the problems with the drowning child parable, and early EA thinking more generally, was (and to a large extent still is) an excessive focus on the actions of the individual.
It's definitely easier and more accurate to model individual behavior, but I think we (as a community) could do more to improve our models of group behavior even though it's more difficult and costly to do so.