Crossposted from Otherwise
This is the story of how I started to care about AI risk. It’s far from an ideal decision-making process, but I wanted to try to spell out the untidy reality.
I first learned about the idea of AI risk by reading a lot of LessWrong in the summer of 2011. I didn’t like the idea of directing resources toward it. I didn’t spell out my reasons to myself at the time, but here’s what I think was going on under the surface:
- I was already really dedicated to global health as a cause area, and didn’t want competition with that.
- The concrete thing you could do about AI risk seemed to be “donate to MIRI,” and I didn’t understand what MIRI was doing or how it was going to help.
- These people all seemed to be California tech guys, and that wasn’t my culture.
My explicit thoughts were something like:
- Well yeah, I can see how misaligned AI might be the end of everything
- But maybe that wouldn’t be so bad; seems like there’s a lot of suffering in the world
- Anyway, I don’t know what we’re really going to do about it.
In 2017, a coworker/friend who had worked at an early version of MIRI talked to some of her old friends and got particularly worried about short AI timelines. And seeing her level of concern clicked with me. She wasn’t a California tech guy; she was a former civil rights lawyer from Detroit. She was a Giving What We Can member. She felt like My People.
And I started to take it seriously. I started to feel viscerally that it could be very bad for everything I cared about if we developed superhuman AI and we weren’t ready.
Once I started caring about this area a lot, I took a fresh look around at what might be done about it. In the time since I’d first encountered the idea, more people had also started taking it seriously. Now there were more projects like AI policy work that I found easier to comprehend.
Two other things that shifted over time:
- My concern about people and animals having net-negative lives has been related to what’s happening with my own depression. My concern is a lot stronger when I’m doing worse personally. [edited to add: I don't know which of these impressions is more accurate — just noting that my sense of the external world shifts depending on my internal state.]
- Once I had children, I had a gut-level feeling that it was extremely important that they have long and healthy lives.
Changing my beliefs didn’t mean there were especially good actions to take. Once I changed my view on AI safety I was more willing to donate to that area, but a lot of people had the same idea, and there wasn’t (and still isn’t) much obvious work going unfunded. So I’ve continued donating to a mix of global health (which I still really value) and EA community-building. I was already doing cause-general work and didn’t think I could be more useful in direct work, but I started to encourage other people to consider work on global catastrophic risks.
Reflections now:
- What subculture you belong to doesn’t mean much about how right you are about something. Subcultures / echo chambers develop ideas that differ from the mainstream, some of which will be valuable and many of which will be pointless or harmful. (LessWrong was also very into cryonics at the time, and I think it’s right for that idea to get a lot less attention than AI safety.)
- One downside of a homogeneous culture is that other people may bounce off for tribalistic reasons.
- Because you don’t share the same concerns, and don’t speak to the things they care about
- Because they’re put off in some basic social or demographic way, and never seriously listen to you in the first place
- When I think about what could have alerted me that my thinking was driven by group identity more than by logic, what comes to mind is the feeling of annoyance I had about “AI people.”
I think there's something quite interesting here. I feel like one of the main things I see in the post is sort of the opposite of the intended message.
(I realise this is an old post now, but I've only just read it. Full disclosure: I've ended up reading it now because I think my skepticism about AI risk arguments is higher than it's been for a long time, so I'm definitely coming at it from that point of view.)
If I may paraphrase a bit flippantly, I think one of the messages is supposed to be: 'just because the early AI risk crowd were very different from me and kind of irritating(!), it doesn't mean they were wrong', and so 'sometimes you need to pay attention to messages coming from outside your subculture'.
But actually what happens in the narrative is that you only start caring about AI risk when an old friend who 'felt like one of your own', and who was "worried", manages to make you "feel viscerally" about it. So it wasn't that, without direct intervention from either 'tribe', you actually sat down with the arguments/data and understood things logically. Nor was it that you, say, found a set of AI/technology/risk experts to defer to. It was that someone with whom you had more of an affinity made you feel that you should care more and take it seriously. This sounds sort of like the opposite of the intended message, does it not? i.e. it sounds like more attention was paid to an emotional appeal from an old friend than to whatever arguments were available at the time.
Yep, that's all true. I think what I'm pointing to is that de facto people do decide what to pay attention to and what arguments to dig into based on arbitrary factors and tribalism. Ideally I'd have had some less arbitrary way to decide where to focus my attention, but here we are.