I hear two conflicting voices in my head, and in EA:
- Voice: it's highly uncertain whether deworming is effective, based on 20 years of research, randomized controlled trials, and lots of feedback. In fact, many development interventions have a small or negative impact.
- Same voice: we are confident that work for improving the far future is effective, based on <insert argument involving the number of stars in the universe>.
I believe that I could become convinced to work on artificial intelligence or extinction risk reduction. My main crux is that these problems seem intractable. I am worried that my work would have a negligible or a negative impact.
These questions are not sufficiently addressed yet, in my opinion. So far, I've seen mainly vague recommendations (e.g., "community building work does not increase risks" or "look at the success of nuclear disarmament"). Examples of existing work for improving the far future often feel very indirect (e.g., "build a tool to better estimate probabilities ⇒ make better decisions ⇒ facilitate better coordination ⇒ reduce the likelihood of conflict ⇒ prevent a global war ⇒ avoid extinction") and thus disconnected from actual benefits for humanity.
One could argue that uncertainty is not a problem: it is negligible when weighed against the huge potential benefit of work for the far future. Moreover, impact is fat-tailed, so the expected value is dominated by a few really impactful projects, and it is therefore worth trying projects even if they have a low probability of success[1]. This makes sense, but only if we can protect against large negative impacts. I doubt we really can; for example, a case can be made that even safety-focused AI researchers accelerate AI and thus increase its risks.[2]
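The fat-tailed argument can be sketched with a quick simulation. This is a toy model, not an estimate: the Pareto shape parameter and the 1% probability of a large harm are arbitrary assumptions chosen for illustration.

```python
import random

random.seed(0)

# Toy model: draw the "impact" of 10,000 hypothetical projects from a
# fat-tailed (Pareto) distribution, with a small assumed chance that a
# project also causes a harm of comparable scale. All numbers are
# illustrative assumptions, not empirical estimates.
N = 10_000
impacts = []
for _ in range(N):
    positive = random.paretovariate(1.5)  # heavy right tail
    # assumed 1% chance of a fat-tailed negative outcome
    negative = -random.paretovariate(1.5) if random.random() < 0.01 else 0.0
    impacts.append(positive + negative)

impacts.sort(reverse=True)
total = sum(impacts)
top_1_percent = sum(impacts[: N // 100])
print(f"Share of total impact from the top 1% of projects: {top_1_percent / total:.0%}")
```

Under these assumptions, a large share of the total comes from the top 1% of projects, which is the sense in which expected value is "dominated by a few really impactful projects". The same heaviness on the negative side is what makes the downside protection question bite.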
One could argue that community building or writing *What We Owe the Future* are concrete ways to do good for the future. Yet this seems to shift the problem rather than solve it. Consider a community builder who convinces 100 people to work on improving the far future. There are now 100 people doing work with uncertain, possibly-negative impact. The community builder's impact is some function of those 100 individual impacts, and is thus similarly uncertain and possibly negative. This is especially true if individual impact is fat-tailed, as the overall impact will be dominated by the most successful (or most destructive) people.
To summarize: how can we reliably improve the far future, given that even near-termist work like deworming, with plenty of available data and research, rapid feedback loops, and simple theories, so often fails? As someone who is eager to spend my work time well, who thinks that our moral circle should include the future, but who does not know ways to reliably improve it... what should I do?
Will MacAskill on fat-tailed impact distribution: https://youtu.be/olX_5WSnBwk?t=695 ↩︎
For examples on this forum, see When is AI safety research harmful? or What harm could AI safety do? ↩︎
Hi there Sjlver, thanks for engaging.
I wouldn't describe it as indifferent. More like enthusiastically embracing both the life we currently have, and the inevitable death we will experience. Happiness might be defined as such an embrace, and suffering as resistance to that which we can do little about, other than delay the inevitable a bit.
We know we're going to die.
It can be reasonably proposed that no one really knows what the result of that will be.
If true, then what we can do in the face of this unknown is manage our relationship with this situation so as to create the most positive possible experience of it.
Should someone provide compelling proof of what death is, then we might wish to align our relationship with death to what the facts reveal. But there are no facts (imho), and so the enterprise rationally shifts away from facts that cannot be obtained, toward our relationship with that which cannot currently be known.
Ok, let's talk practical implications. Everybody will have to find this for themselves, but here's how it works for me.
My mother died of Parkinson's after a very long tortured journey which I will not describe here. The point is that observing this tortured journey from a ring side seat filled me with fear. What if this happens to me? (It did happen to my sister)
To the degree I can liberate myself from fear of death, I can escape this fate. When the doctor says I'm going to experience a long painful death from a terminal case of Typoholic Madman Syndrome :-), I can go to the gun store and obtain a "get out of jail free" card. To the degree I can accept this solution, I don't need to be afraid of Parkinson's. Death embraced, life enhanced.
I don't have a secret formula which can relieve everyone from their fear of death. In my case, whatever freedom I have (exact degree unknown until the final moments) comes from factors like this:
I had great parents. Being so lucky so young tends to install in one a kind of faith that the universe is basically kind. How valid such a faith might be is unknown, but experiencing such a faith is helpful.
Next, I spend a TON of time in the North Florida woods. Way more than most. From such experience one can conclude that nature is cyclical, not linear as the formula born > live > die implies.
Anyway, the rational message here is, focus on controlling that which we can control, and that is our relationship with death, and thus with living.
I hope something in there is helpful, or interesting, or something. If this is a topic of interest to you, and you'd like to see me crash the server with excessive typing on the subject :-), it would be cool if maybe you started a post on the topic.