Great tool; I've enjoyed it and used it for two years. I (a random EA) would recommend it.
Thank you for this! I'm hoping this enables me to spend a lot less time on hiring in the future. This is a topic that could easily have taken me 3x the effort to understand if I hadn't gotten some very good resources from this post, so I will definitely check out the book. Again, awesome post!
That makes sense, and I would tend to agree that the framing of contingency invokes more of a "what if I were to do this" feeling, which might be more conducive to people choosing to do more entrepreneurial thinking, which in turn seems to have higher impact.
Good post; interesting point that the impact of the founder effect is probably higher in longtermism, and I would tend to agree that starting a new field can have a big impact. (Such as wild animal suffering in space. NO FISH ON MARS!)
Not to be the guy who points something out, but I will be that guy: why not use the classic EA jargon of "counterfactual impact" instead of "contingent impact"?
Essentially, that the epistemics of EA are better than in previous longtermist movements. EA's frameworks are a lot more advanced, with things such as thinking about the tractability of a problem, not Goodharting on a metric, forecasting calibration, RCTs, and so on — techniques that other movements didn't have.
Thank you! I was looking for this one but couldn't find it.
The ones who aimed at the distant future mostly failed. The longtermist label seems mostly unneeded and unhelpful, and I'm far from the first to think so.
Firstly, in my mind, you're trying to say something akin to the claim that we shouldn't advertise longtermism because it hasn't worked in the past. Yet this is a claim about the tractability of the philosophy and not necessarily about the idea that future people matter.
Don't confuse the philosophy with its implementation: longtermism matters, but the implementation method is still up for debate.
But I don’t view the effective altruist version of longtermism as particularly unique or unprecedented. I think the dismal record of (secular) longtermism speaks for itself.
Secondly, I think you're using the wrong outside view.
There is a problem with using historical precedents: you assume that similar conditions exist in the EA community as existed in those other communities.
An example of this is HPMOR, and how unpredictable the success of that fan fiction would have been if you had looked at the average Harry Potter fan fiction beforehand. The underlying outside view is different because the underlying causal thinking is different.
As Nassim Nicholas Taleb would say, you're trying to predict a black swan, an unprecedented event in the history of humanity.
What is it that makes longtermism different?
There is a fundamental difference in understanding of the world's causal models in the EA community. There is no outside view for longtermism as its causal mechanisms are too different from existing reference classes.
To make a final analogy: it is useless to predict gasoline prices for an electric car, just as it is useless to predict the success of the longtermist movement from previous ones.
(Good post, though — interesting investigation, and I tend to agree that we should just say "holy shit, x-risk" instead.)
This is completely unrelated to the great point you made with the comment, but I felt I had to share a classic? EA tip that worked well for me. (Uncertain how much this counts as a classic.) I got to the nice nihilistic bottom of realising that my moral system is essentially based on evolution, but I reversed that within a year by reading a bunch of Buddhist philosophy and by meditating. Now it's all nirvana over here! (Try it out now...)
https://www.lesswrong.com/posts/Mf2MCkYgSZSJRz5nM/a-non-mystical-explanation-of-insight-meditation-and-the
https://www.lesswrong.com/posts/WYmmC3W6ZNhEgAmWG/a-mechanistic-model-of-meditation
https://www.lesswrong.com/s/ZbmRyDN8TCpBTZSip
If you feel like booking a meeting to help me out in some way, here's my calendly: https://calendly.com/jonas-hallgren
Also, it is more focused on easier projects that university groups and people new to EA can manage on their own. For example, the EA Hotel is a cool idea, but only a select few people could pull it off. It's more like running a conference in your city or helping with EA resources by creating a website.