I have been on a mission to do as much good as possible since I was quite young, and at around age 13 I decided to prioritize reducing x-risk and improving the long-term future. Toward this end, growing up I studied philosophy, psychology, social entrepreneurship, business, economics, the history of information technology, and futurism.
A few years ago I wrote a book draft I was calling “Ways to Save The World” or "Paths to Utopia," which imagined broad, innovative strategies for preventing existential risk and improving the long-term future.
Upon discovering Effective Altruism in January 2022, while preparing to start a Master's in Social Entrepreneurship at the University of Southern California, I did a deep dive into EA and rationality, decided to take a closer look at the possibility of AI-caused x-risk and lock-in, and moved to Berkeley to do longtermist research and community-building work.
I am now researching "Deep Reflection," processes for determining how to get to our best achievable future, including interventions such as "The Long Reflection," "Coherent Extrapolated Volition," and "Good Reflective Governance."
Thanks Kat! Couldn’t agree more: I think self-care is essential. I wish there were more posts on this, or, better yet, more high-quality, comprehensive, systematic health and mental health support for high-impact x-risk workers, along with a culture where this is acknowledged as important. I think this is an underrated crux for x-risk work.
It may seem like ‘fluff,’ but really I think the research is on our side! Would love to see what a quantitative case for exercise or other forms of self-care might look like.
I agree; one way to support this is that you can read work stuff while exercising. I actually like the “Milo” voice on Eleven Labs for reading work material even more than most audiobook narrators. It’s crazy how good text-to-speech has gotten; I get a lot of extra reading in this way and would highly recommend Eleven Labs.
I very much agree with this and have been struggling with a similar problem in terms of achieving high value futures, versus mediocre ones.
I think there may be some sort of a “Fragile Future Value Hypothesis,” somewhat related to Will MacAskill’s “No Easy Eutopia” (and the essay that follows it in the series) and somewhat isomorphic to “The Vulnerable World Hypothesis.” The idea is that there may be many path dependencies, potentially leading to many low- and medium-value future attractor states we could end up in, because in expectation we are somewhat clueless as to which crucial considerations matter, and if we act wrongly on any of those crucial considerations, we could potentially lose most or even nearly all future value.
I also agree that making the decision-makers working on AI highly aware of this could be an important solution. I’ve been thinking that the problem isn’t so much that people at the labs don’t care about future value; they are often quite explicitly utopian. It just seems to me that they don’t have much awareness that near-best futures might actually be highly contingent and very difficult to achieve, and the illegibility of this fact means that they are not really trying to be careful about which path they set us on.
I also agree that getting advanced AI working on these types of issues as soon as it is able to meaningfully assist could be an important solution, and I intend to make this one of my main objectives. That said, I’ve been a bit more focused on macrostrategy than philosophy, because I think macrostrategy might be more feasible for current or near-future AI, and if we get into the right strategic position, that could then position us to figure out the philosophy, which I think is going to be a lot harder for AI.
Thank you for sharing, Arden! I have similarly been thinking that longtermism is an important crux for making AI go well. I think it’s very possible that we could avoid x-risk and have really good outcomes in the short term, yet put ourselves on a path where we predictably miss out on nearly all value in the long term.
I really enjoyed this! It’s a very important crux for how well the future goes. You may be interested to know that Nick Bostrom discusses this; he calls such beings super-beneficiaries.
I have been thinking that one solution to this could be people self-organizing and spending more of their off-time and casual hours working on these issues in crowd-sourced ways. I would be really curious to hear your thoughts on such an approach. I feel like there is enough funding that if people were able to collectively produce something promising, then this could really go somewhere. I have thought a lot about what kind of organizational structures would allow this:
Something like a weekly group meeting where people bring their best ideas, discuss them, and iteratively develop project ideas, media, research, and anything else that could be high impact. Kind of like the EA Fellowship or other fellowships like Blue Dot and the Astra Fellowship, except more decentralized and project-focused, with a coordination mechanism between the different groups to funnel the best projects from all of the groups to the top.
I have a pretty elaborate mechanism I designed in the past for something else, but it seems like it could work well here too. I don’t really have time to work on this much right now myself, unless perhaps I could get funding, which, ironically, is the bottleneck I am primarily focused on right now.
But again, would be very curious to hear your thoughts on this kind of approach.
Interesting! I think I didn’t fully distinguish between two possibilities:
I think both types of AW are worth pursuing, but the second may be even more valuable, and I think this is the type I had in mind at least in scenario 3.
Hey Will, very excited to see you posting more on viatopia. I couldn't agree more that some conception of viatopia might be an ideal north star for navigating the intelligence explosion.
As crazy as this seems, just last night I wrote a draft of a piece on what I have been calling primary and secondary cruxes/crucial considerations (in previous work I also used a perhaps even more closely related concept of “robust viatopia proxy targets”), which seems closely related to your "societal version of Rawls' primary goods," though I had not previously been aware of this work by Rawls. I continue to be quite literally shocked at the convergence of our research, in this case profoundly so. (If you happen to be as incredulous as I am, I do by chance have my work on this time-stamped through a few separate modalities, which I’d be happy to share.)
I believe figuring out primary goods and primary cruxes should be a key priority of macrostrategy research. We don't need to figure out everything; we just need to get the right processes and intermediate conditions in place to move us progressively in the right direction.
I think what is ultimately most important is that we reach a state of what I have been calling “deep reflection”: a state in which we have comprehensively reflected on how to achieve a high-value future and, simultaneously, society is likely to act on that knowledge. This is not quite the same as viatopia, as it’s more of an end state that would occur right before we actualize our potential; hence I think it can act as another useful handle for the kind of thing we should hope viatopia is ultimately moving us toward.
I’m really looking forward to seeing more essays in your series!