What is the upshot of this? Is this for new audiences to read? It seems like the most straightforward application of it is futures betting, not positively influencing the future.
Perhaps you're indicating that the money will run out if frontier AI doesn't become self-sustaining by 2030? Maybe we can do something to make that more likely?
Because I do struggle to see how this helps.
When I learned more about eyestalk ablation while reviewing the Rethink Priorities report, I was surprised how little it seemed to bother the shrimp, and I did downgrade my concern about the welfare harm from that particular practice. However, I think what people are reacting to is more the barbarity of it than the actual level of harm. (After all, they already knew the shrimp get killed at the end.) I think it's just so bizarre and gross and exploitative-feeling that it shocks them out of complacency in how they view the shrimp. I think they helplessly imagine losing their own eye and empathize with the shrimp in a powerful, gut-level way, and that this is why it has been impactful to talk about.
I agree that not everyone already knows what they need to know. Our crux is probably "who needs to get it, and how will they learn it?" I think we more than have the evidence to teach the public and to set an example of taking the risk seriously. I think you think we need to make a very respectable and detailed case to convince elites. I think there are multiple routes to influencing elites, and that they will be more receptive when the reality of AI risk is a more popular view. I don't think timelines are a great tool for convincing either of these groups, because they create such a sense of panic and they invite quibbling with the forecasts instead of facing the thrust of the evidence.
Honestly, I wasn't thinking of you! Planning one's individual career is one of the better reasons to engage with timelines, imo. It's more the selection of interventions where I think the conversation is moot, not where and how individuals can connect to those interventions.
The hypothetical example of people abandoning projects that culminate in 2029 was actually inspired by PauseAI-- there is a contingent of people who think protesting and irl organizing take too long and that we should just be trying to go viral on social media. I think the irl protests and community are what make PauseAI a real force, and we have greater impact, including by drawing social media attention, all along that path-- not just once our protests are big.
That said, I do see a lot of people making the mistakes I mentioned about their career paths. I've had a number of people looking for career advice through PauseAI say things like, "well, obviously getting a PhD is ruled out", as if there is nothing they can do to have impact until they have the PhD. I think being a PhD student can be a great source of authority and a flexible job (with at least some income, often) where you have time to organize a willing population of students! (That's what I did with EA at Harvard.) The mistake here isn't even really a timelines issue; it's not modeling the impact distribution along a career path well. Seems like you've been covering this:
>I also agree many people should be on paths that build their leverage into the 2030s, even if there's a chance it's 'too late'. It's possible to get ~10x more leverage by investing in career capital / org building / movement building, and that can easily offset. I'll try to get this message across in the new 80k AI guide.
Yes, I agree. I think what we need to spend our effort on is convincing people that AI development is dangerous and needs to be handled very cautiously, if pursued at all-- not that superintelligence is imminent and there's NO TIME. I don't think the exact level of urgency or the exact level of risk matters much above something like p(doom) = 5%. The thing we need to convince people of is how to handle the risk.
A lot of AI Safety messages expect the audience to fill in most of the interpretive details-- "As you can see, this forecast is very well-researched. ASI is coming. You take it from here."-- when actually what they need to know is what those claims mean for them and what they can do.
I have to admit, I wouldn't have taken it to heart much if these studies hadn't found much effect (nor if they had found a huge effect). And I feel exposed here bc I know that looks bad, like I'm resisting actual evidence in favor of my vibes, but I really think my model is better and the evidence in these studies should only tweak it.
I'm just not that hopeful that you can control enough of the variables, with the few historical examples we have, to really settle that through this kind of analysis. I also think the definition of aims and impacts is too narrow-- Overton window pushing can manifest in many, many ways and still contribute to the desired solution.
I'm comfortable with pursuing protests through PauseAI US because they were a missing mood in the AI Safety discussion. They are a form of discussion and persuasion, and I approach them similarly to how I decide to address AI danger in writing or in interviews. They are also a form of showing up in force for the cause, in a way that genuinely signals commitment bc it is very hard to get people to do it, which is important to movement building even when the protests are small. The point of protests isn't only to achieve whatever the theme of that protest was (the theme of our protests is either shutting down your company or getting an international treaty lol)-- they feed back into the whole movement and community, which can have many unanticipated but directionally desirable impacts.
I don't think my approach to protests is perfect by any means, and I may have emphasized them too much and failed to do things I should have to grow them. But I make my calls about doing protests based on many considerations of how they will affect the rhetorical and emotional environment of the space. I wish there were studies that could tell me how to do this better, but there aren't, just like there aren't studies that tell me exactly what to write to change people's minds on AI danger in the right way. (Actually, a good comparison here would be "does persuasive writing work?" bc there we all have personal experiences of knowing it worked, but as a whole the evidence for it achieving its aims might be thin.)