Thank you for this! I'm hoping it will let me spend a lot less time on hiring in the future. This is a topic that could easily have taken me 3x the effort to understand without the very good resources in this post, so I will definitely check out the book. Again, awesome post!
Good post; interesting point that the impact of the founder effect is probably higher in longtermism, and I would tend to agree that starting a new field can have a big impact. (Such as wild animal suffering in space: NO FISH ON MARS!)
Not to be the guy who points something out, but I will be that guy: why not use the classic EA jargon of counterfactual impact instead of contingent impact?
Essentially, that the epistemics of EA are better than in previous longtermist movements. EA's frameworks are a lot more advanced, with things such as thinking about the tractability of a problem, not Goodharting on a metric, forecasting calibration, RCTs... and so on: techniques that other movements didn't have.
The ones who aimed at the distant future mostly failed. The longtermist label seems mostly unneeded and unhelpful, and I'm far from the first to think so.
Firstly, in my mind, you're trying to say something akin to "we shouldn't advertise longtermism because it hasn't worked in the past." Yet this is a claim about the tractability of the philosophy, not necessarily about the idea that future people matter.
Don't confuse the philosophy with the instrumentals: longtermism matters, but the implementation method is still up for debate.
But I don't view the effective altruist version of longtermism as particularly unique or unprecedented. I think the dismal record of (secular) longtermism speaks for itself.
Secondly, I think you're using the wrong outside view.
There is a problem with using historical precedents: you're assuming that the conditions in the EA community are similar to those in the earlier communities.
An example of this is HPMOR: its success would have looked wildly improbable if you had judged it against the average pre-existing Harry Potter fan fiction. The outside view is different because the underlying causal thinking is different.
As Nassim Nicholas Taleb would say, you're trying to predict a black swan, an event unprecedented in the history of humanity.
What is it that makes longtermism different?
There is a fundamental difference in how the EA community understands the world's causal models. There is no outside view for longtermism, as its causal mechanisms are too different from existing reference classes.
To make a final analogy: it is useless to predict an electric car's costs from gasoline prices, just as it is useless to predict the success of the longtermist movement from previous ones.
(Good post, though; interesting investigation, and I tend to agree that we should just say "holy shit, x-risk" instead.)
This is completely unrelated to the great point you made in the comment, but I felt I had to share a classic(?) EA tip that worked well for me (uncertain how much this counts as a classic). I hit the nice nihilistic bottom of realising that my moral system is essentially based on evolution, but I reversed that within a year by reading a bunch of Buddhist philosophy and by meditating. Now it's all nirvana over here! (Try it out now...)
https://www.lesswrong.com/posts/Mf2MCkYgSZSJRz5nM/a-non-mystical-explanation-of-insight-meditation-and-the
https://www.lesswrong.com/posts/WYmmC3W6ZNhEgAmWG/a-mechanistic-model-of-meditation
https://www.lesswrong.com/s/ZbmRyDN8TCpBTZSip
TL;DR: I totally agree with the general spirit of this post: we need people to solve alignment, and we're not on track. Go and work on alignment, but before you do, try to engage with the existing research; there are reasons why it exists. There are a lot of things not getting worked on within AI alignment research, and I can almost guarantee that within six months to a year you can find something that nobody has worked on.
So go and find these underexplored areas, in a way that engages with what people have done before you!
I also agree that Eliezer's style of doom seems uncalled for and that this is a solvable but difficult problem. My personal p(doom) is around 20%, which seems quite reasonable to me.
Now, I do want to push back on this claim, as I see it made by a lot of people who haven't fully engaged with the more theoretical side of the alignment landscape. There are only 300 people working on alignment, but those people are actually doing things, and most of them aren't doing blue-sky theory.
A note on the ARC claim:
This is essentially a claim about the methodology of science: working on existing systems gives more information and more breakthroughs than working on blue-sky theory. The current hypothesis for why is that real-world research is simply a lot more information-rich. This is, however, not the only way to get real-world feedback loops. Christiano is not working on blue-sky theory; he's using real-world feedback loops in a different way: he looks at the real world and mines the information that's already there.
A discovery of this type is, for example, the tragedy of the commons: whilst we could have built computer simulations to see the process in action, it's 10x easier to look at the world and watch the failures in real time. His research methodology is to tell stories about the future and see where they fail. This gives bits of information on where to run future experiments, just like how we can tell that humans would fail to stop overfishing without actually running an experiment on it.
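(To make concrete what such a simulation would even look like, here's a toy sketch I threw together; the growth model and all the numbers are made up, and the real world hands you the same conclusion for free.)

```python
# Toy tragedy-of-the-commons: a shared fish stock that regrows each year.
# Purely illustrative; parameters are invented for the example.
def simulate_commons(per_fisher_take, n_fishers=10, stock=100.0, growth=1.2, years=30):
    """Each fisher takes a fixed fraction of the stock; the remainder regrows."""
    for _ in range(years):
        remaining = max(0.0, stock * (1 - n_fishers * per_fisher_take))
        stock = remaining * growth
    return round(stock, 1)

print(simulate_commons(per_fisher_take=0.03))  # greedy fishers: the stock collapses
print(simulate_commons(per_fisher_take=0.01))  # restrained fishers: the stock keeps growing
```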
This is also what John Wentworth does with his research: he uses the real world as a reference frame that is quite rich in information. Now, a good question is why we haven't seen many empirical predictions from Agent Foundations. I believe it is because alignment is quite hard; specifically, it is hard to define agency in a satisfactory way due to some really fuzzy problems (boundaries, among others), and therefore hard to make predictions.
We don't want to mathematize things too early either, as doing so would put us into a predefined reference frame that might be hard to escape from. We want to find the right ballpark for agents first, since if we fail, we might base our evaluations on something that turns out to be false.
In general, there's a difference between the types of problems in alignment and in empirical ML; the reference class of a "sharp left turn" differs from something empirically verifiable because it is not clearly defined, so a good question is how we should turn one into the other. This question of how to turn recursive self-improvement, inner misalignment and agent foundations into empirically verifiable ML experiments is actually something that most of the people I know in AI alignment are currently actively working on.
This post from Alexander Turner is a great example of doing this, as they try "just retargeting the search".
Other people are trying other things, such as bounding the maximisation in RL via quantilisers. This would, in turn, make an AI more "content" with not maximising. (A fun parallel to how utilitarianism shouldn't be unbounded.)
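(For intuition, here's a toy quantiliser in Python; my own illustration with made-up actions and probabilities, not anyone's actual implementation. Instead of taking the argmax action, it samples from the top-q slice of a base distribution ranked by utility, which bounds how hard it optimises.)

```python
import numpy as np

def quantilize(actions, utilities, base_probs, q=0.1, rng=None):
    """Sample from the base distribution restricted to (roughly) the top-q utility
    quantile, instead of taking the argmax. In the exact version, expected cost from
    unmodelled harms is at most 1/q times the base policy's; this prefix version is
    a crude approximation of that."""
    rng = rng if rng is not None else np.random.default_rng()
    utilities = np.asarray(utilities, dtype=float)
    base_probs = np.asarray(base_probs, dtype=float)

    order = np.argsort(utilities)[::-1]                  # action indices, best first
    cum_mass = np.cumsum(base_probs[order])
    keep = order[: np.searchsorted(cum_mass, q) + 1]     # smallest top set with >= q base mass

    weights = base_probs[keep] / base_probs[keep].sum()  # renormalise over the kept set
    return actions[rng.choice(keep, p=weights)]

# Made-up example: the hard-maximising action is no longer picked with certainty.
actions = ["exploit the metric", "solid plan", "okay plan", "do nothing"]
utilities = [10.0, 6.0, 5.0, 1.0]
base_probs = [0.05, 0.35, 0.40, 0.20]
print(quantilize(actions, utilities, base_probs, q=0.5))
```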
I could go on with examples, but what I really want to say here is that alignment researchers are doing things; it's just hard to realise why they're doing things when you're not doing alignment research yourself. (If you want to start, book my calendly and I might be able to help you.)
So what does this mean for an average person? You can make a huge difference by going in and engaging with arguments and coming up with counter-examples, experiments and theories of what is actually going on.
I just want to say that it's most likely paramount to engage with the existing alignment research landscape before you start, as it's free information and it's easy to fall into traps if you don't. (A good resource for avoiding some traps is John's Why Not Just sequence.)
There's a couple of years' worth of research there; it is not worth rediscovering from the ground up. Still, this shouldn't stop you: go and do it; you don't need a hero licence.