As far as I can tell, there is a strong tendency for those who are worried about AI risk to also be longtermists.
However, many elements of the philosophical case for longtermism are independent of contingent facts about what will happen with AI in the coming decades.
If we have good community epistemic health, we should expect there to be people who object to longtermism on grounds like:
- person-affecting views
- supporting a non-zero pure discount rate
but who are still just as worried about AI as those with P(doom) > 90%.
Indeed, the proportion of "doomers" who hold those philosophical objections to longtermism should be just as high as the rate of such objections among those typically considered neartermists.
I'm interested in answers of either form:
- "hello, I'm both neartermist and have high P(doom) from AI risk..."; or
- "here's some relevant data, from, say, the EA survey, or whatever"
I'm a neartermist with 0.01 < P(doom from AI) < 0.05 on a 30-year horizon. I don't consider myself a doomer, but I think this qualifies as taking AI risk seriously (or at least not dismissing it entirely).
I think of my neartermism as the result of three questions:
As I said above, I think there's between a 1% and 5% chance of extinction from AI in the next 30 years. In my mind, this is high. If I were a longtermist, this would be sufficient to motivate me to work on AI safety.
I am sympathetic to person-affecting views, which to me means thinking of x-risk as primarily affecting people (& animals) alive today. I'm also sympathetic to the idea that it's somewhat good to create a positive life. However, I'd really rather not create negative lives, and I think there is uncertainty about the sign of all not-yet-existent lives. As an example of this uncertainty, consider that many people raised in excellent conditions (loving family, great education, good healthcare, good friends) still struggle with depression. Because of this uncertainty and my risk aversion, even the part of me that doesn't hold person-affecting views is roughly neutral on creating lives as an altruistic act.
I have a technical skillset and could directly do AI safety work. However, I think most technical AI safety work still accelerates AI and therefore may accelerate extinction. As an example, I believe (weakly! convince me otherwise please!) that RLHF and instruction-tuning led to the current LLM gold rush and that if LLMs were more toxic (aka less safe?) there would be less investment in them right now. Along these lines, I'm not sure that any technical AI safety work done thus far has decreased AI x-risk.
I think the best mechanism for lowering AI x-risk is to slow down AI development, both to give us more time in the current safe-ish technological world and perhaps to buy time to shift into a paradigm where we can develop clearly beneficial technical safety tools. I imagine this deceleration happening primarily through policy. Policy is outside my skillset, but I'd happily write a letter to my congressperson.
If I could lower AI x-risk by 0.0001 percentage points (lowering P(doom) from 0.020000 to 0.019999, or 1 part in 20,000 of the risk), I'd consider this worth 8 billion people * 1e-6 probability = 8e3 = 8,000 expected deaths averted. I think I have better options for adding this many QALYs over the course of my life, without the downside risk of potentially accelerating extinction!
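To make that back-of-the-envelope calculation explicit, here is a minimal sketch in Python (the population figure and risk levels are just the illustrative assumptions stated above, not precise estimates):

```python
# Back-of-the-envelope value of a tiny absolute reduction in AI x-risk,
# using the numbers from the paragraph above (population and risk levels
# are illustrative assumptions).
population = 8e9                 # people alive today (person-affecting framing)
baseline_p_doom = 0.02           # assumed current P(doom)
reduced_p_doom = 0.019999        # after the hypothetical intervention
risk_reduction = baseline_p_doom - reduced_p_doom   # 1e-6 absolute reduction

expected_deaths_averted = population * risk_reduction
print(expected_deaths_averted)              # ~8000
print(risk_reduction / baseline_p_doom)     # ~5e-05, i.e. 1 part in 20,000
```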
Other reasons I'm not a longtermist / I don't do technical AI safety work: