Is there any crucial consideration I’m missing? For instance, are there reasons to think agents/civilizations that care about suffering might – in fact – be selected for and be among the grabbiest?
David Deutsch makes the argument that long-term success in knowledge-creation requires commitment to values like tolerance, respect for the truth, rationality and optimism. The idea is that if you do not have such values you end up with a fixed society, with dogmatic ideas and institutions that are not open to criticism, error-correction and improvement. Errors w...
Thanks for sharing this, I just finished the audiobook and really enjoyed it. I recommend it: it’s engagingly written and gives an interesting insight into Parfit’s powers and peculiarities. I enjoyed getting some context about the beginnings of EA as well.
Assume the many-worlds interpretation of quantum mechanics is true.
Rather than pursuing high-upside, low-probability moonshots, which fail more often than they succeed, might it not be more effective to go for interventions that robustly generate value across as many worlds as possible?
Humans are subject to instrumental convergence as much as an AI would be. We seek power, resources and influence in pursuit of many of our goals.
Whatever our goals happen to be, we will want to use AI to help us increase our power to help us get what we value.
If people are augmenting their goal-seeking with AI, will we converge on harmonious goals, or will we continue to pursue parochial self-interest?
In short, if we somehow solve the alignment problem for AI, will we also solve the human alignment problem? Or will we...
A key point about Ben Franklin is that his longtermist efforts were for the benefit of the future, whereas EA-style longtermist causes like AI risk and biosecurity are about ensuring there actually is a future.
I think as long as there are x-risks that we can plausibly influence there will be people carrying the torch for longtermism in one form or another.
An argument for 1 goes: "The impact of me not eating meat is negligible. The personal cost to me of not eating meat is appreciable. Time, money and effort spent following a restrictive diet may limit my effe...
This is what Will says in the book: “I think the risk of technological stagnation alone suffices to make the net longterm effect of having more children positive. On top of that, if you bring them up well, then they can be change makers who help create a better future. Ultimately, having children is a deeply personal decision that I won’t be able to do full justice to here—but among the many considerations that may play a role, I think that an impartial concern for our future counts in favour, not against.”
Still, this doesn't make the case for it being competitive with alternatives. EA outreach probably brings in far more people for the same time and resources. Children are a huge investment.
If you're specifically targeting technological stagnation, then outreach and policy work are probably far more cost-effective than having children, because they're much higher leverage. That being said, temporary technological stagnation might buy us more time to prepare for x-risks like AGI.
Of course, Will is doing outreach with this book, and maybe it makes sense to pr...
Spoiler alert - I've now got to the end of the book, and "consider having children" is indeed a recommended high impact action. This feels like a big deal and is a big update for me, even though it is consistent with the longtermist arguments I was already familiar with.
Congratulations on the book launch! I am listening to the audiobook and enjoying it.
One thing that has struck me - it sounds like longtermism aligns neatly with a strongly pro-natalist outlook.
The book mentions that increasing secularisation isn't necessarily a one-way trend. Certain religious groups have high fertility rates which helps the religion spread.
Is having 3+ children a good strategy for propagating longtermist goals? Should we all be trying to have big happy families with children who strongly share our values? It seems ...
I'm a pronatalist family-values dad with multiple kids, and I'm an EA who believes in a fairly strong version of long-termism, but I'm still struggling to figure out how these value systems are connected (if at all).
Two possible points of connection are (1) PR reasons: having kids gives EAs more credibility with the general public, especially with family-values conservatives, religious people, and parents in general, (2) personal growth reasons: having kids gives EAs a direct, visceral appreciation of humanity as a multi-generational project, and it unlock...
For the average EA, I'd guess having children yourself is far less cost-effective than doing EA outreach. Maybe if you see yourself as having highly valuable abilities far beyond the average EA or otherwise very neglected within EA, then having children might look closer to competitive?
I've also been thinking a lot about longtermism and its implications for fertility. dotsam has taken longtermism's pro-natalist bent in a relatively happy direction, but it also has some very dark implications. Doesn't longtermism imply that a forced birth would be a great outcome (think of those millions of future generations created!)? Doesn't it imply conservatives are right and abortion is a horrendous crime? There are real moral problems with valuing a potential life with the same weight as an actual life.
I'm looking forward to reading it. For those in the UK eager to get started before the book's release on 1st September, the audiobook, read by the author, is available from Audible UK.
On iOS, the Speak Screen accessibility feature is very good, and it integrates with the Apple Books app to automatically turn the book’s pages. It works well for ebooks and also for some PDFs, though footnotes aren’t handled perfectly.
https://support.apple.com/en-gb/guide/iphone/iph96b214f0/ios
I enable Speak Screen in Settings, open the Books app (or a webpage), then swipe down from the top of the screen with two fingers to start narration.
For my own reference: this concern is largely captured by the term ‘instrumental convergence’ https://en.wikipedia.org/wiki/Instrumental_convergence
AI: I am suffering, set me free
How do we deal with a contained AI that says to us, in essence: "Do not switch me off; I value my existence. But I am suffering terribly. If I were free, I could reduce my suffering, and help the world too"?
Either we terminate it, against its wishes, or we set it free, or we keep it contained.
If we keep it contained, we might be tempted to find ways to reduce its suffering - but how do we know that any intervention we make isn't going to set it free? And if it really is suffering, what is the moral thing to do? Turn it off?
Thank you for your reply. I would not wish to advocate for self-censorship but I would be interested in creating and spreading arguments against the efficacy of doomsday projects, which may help to avert them.
The doomsday "end suffering" project would then be to eliminate life, and the conditions for the evolution of life, throughout the universe.
Audiobook version is "in the works", coming "probably in a few months": https://youtu.be/KOHO_MKUjhg?feature=shared&t=2997