First of all, thank you for engaging with this post! Your kind words and thoughtful pushback mean a lot to me. Over the last couple of weeks, I have been taking a break from many things to regain motivation and courage, which is why I am only replying now. Fortunately, I am feeling much better again and ready to tackle the problem I am facing. Thank you once again, and I hope you have a great day!
Yes, this was the biggest reason why I was considering leaving AI safety. I grappled with this question for several months. Complex cluelessness triggered a small identity crisis for me, haha.
"If you can't predict the second and third order effects of your actions, what is the point of trying to do good in the first place?" Open Phil funding OpenAI is a classical example here.
But here is why I am still going:
I'm doing no one a favour by concluding that the risk of it being entirely intractable is too high and therefore not doing it at all. AGI is still going to happen. It will still be shaped by a relatively small number of people, and those people will, on average, both care less about humanity and have thought less rigorously about what's most tractable. So I'm not really doing anyone a favour by dropping out.
More concretely:
Even if object-level actions are not tractable, the EV of doing meta-research still seems to significantly exceed that of work in other cause areas. Positively steering the singularity remains, for me, the most important challenge of our time (assuming one subscribes to longtermism and acknowledges both the vast potential of the future and the severity of s-risks).
Even if we live in a world where there is a 99% chance of being entirely clueless about effective actions and only a 1% chance of identifying a few robust strategies, it is still highly worthwhile to focus on meta-research aimed at discovering those strategies.
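To make that concrete, here is a toy expected-value sketch (the numbers are purely illustrative assumptions, not estimates of anything): let $p = 0.01$ be the chance that meta-research surfaces at least one robust strategy, $V$ the value realised if it does, and $B$ the value of the best alternative cause area. Meta-research wins whenever

$$p \cdot V > B \quad\Longleftrightarrow\quad V > 100B.$$

If one takes the stakes of the singularity seriously, $V$ plausibly exceeds $100B$, so the 1% branch can dominate the calculation despite near-total cluelessness in the other 99%.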
The strongest reason for pausing and for AI safety that I can think of: in order to build a truth-seeking superintelligence, one that doesn't merely maximise paperclips but actually tries to understand the nature of the universe, you need to align it to that goal. We have not accomplished this yet, nor figured out how to do so. Hence, regardless of whether you believe in the inherent value of humanity, AI safety is still important, and pausing probably is too; otherwise we won't be able to create a truth-seeking ASI.
Hey Jim,
Thanks for chiming in, and you're spot on: our chat at EAGxVirtual definitely got the gears turning! No worries at all about the existential crisis; I see it as part of the journey (and I actively requested it) :) I actually think these moments of doubt are important for making progress on my mission within EA (similar to what JWS laid out in his post). I usually don't do this, but the post was a good way for me to vent, process some of the ideas, and get feedback.
You've broken down my jumbled thoughts really well. It is helpful to see the three points laid out like that. They each deserve their own space, and I appreciate you giving them that.
I think you're right that cluelessness is kind of its own beast, regardless of where one stands on suffering-focused ethics.
Anyway, thanks for the thoughtful response and for helping me untangle my thoughts.