All of dotsam's Comments + Replies

Audiobook version is "in the works", coming "probably in a few months": https://youtu.be/KOHO_MKUjhg?feature=shared&t=2997

Is there any crucial consideration I’m missing? For instance, are there reasons to think agents/civilizations that care about suffering might – in fact – be selected for and be among the grabbiest?

David Deutsch makes the argument that long-term success in knowledge-creation requires commitment to values like tolerance, respect for the truth, rationality and optimism. The idea is that if you do not have such values, you end up with a fixed society, with dogmatic ideas and institutions that are not open to criticism, error-correction and improvement. Errors w... (read more)

Looking forward to reading the book. I hope there’ll be an audiobook available in the UK too!

Thanks for sharing this, I just finished the audiobook and really enjoyed it. I recommend it: it’s engagingly written and gives an interesting insight into Parfit’s powers and peculiarities. I enjoyed getting some context about the beginnings of EA as well.

Should we be maximising expected value across many-worlds?

Assume the many-worlds interpretation of quantum mechanics is true.

Rather than pursuing high-upside, low-probability moonshots, which fail more often than they succeed, might it not be more effective to go for interventions that robustly generate value across as many worlds as possible?

6
N N
1y
See here: https://80000hours.org/podcast/episodes/david-wallace-many-worlds-theory-of-quantum-mechanics/ Basically, you can treat the fraction of worlds as equivalent to probability, so there is little apparent need to change anything if MWI turns out to be true.
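A toy numerical sketch of this equivalence, purely illustrative (the outcome values and weights below are made up): if branch weights play the same role probabilities do, the expected-value calculation is literally the same arithmetic.

```python
# Toy illustration: treating Born-rule branch weights as probabilities
# leaves expected value unchanged under the many-worlds interpretation.
outcomes = [100.0, 0.0]        # value of each possible outcome (made up)
probabilities = [0.1, 0.9]     # single-world probabilities
branch_weights = [0.1, 0.9]    # MWI: fraction of worlds with each outcome

ev_single_world = sum(p * v for p, v in zip(probabilities, outcomes))
ev_across_branches = sum(w * v for w, v in zip(branch_weights, outcomes))

assert ev_single_world == ev_across_branches  # both equal 10.0
```

The decision-relevant point is that nothing in the calculation distinguishes "probability of an outcome" from "fraction of worlds with that outcome", so rankings of interventions are unchanged.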

The human alignment problem

Humans are subject to instrumental convergence as much as an AI would be. We seek power, resources and influence in pursuit of many of our goals.

Whatever our goals happen to be, we will want to use AI to increase our power to get what we value.

If people are augmenting their goal-seeking with AI, will we converge on harmonious goals, or will we continue to pursue parochial self-interest?

In short, if we somehow solve the alignment problem for AI, will we also solve the human alignment problem? Or will we... (read more)

You might consider creating a text-to-speech version by using e.g. Amazon Polly. Whilst imperfect, it is listenable and might be useful to people. Here is a sample generated with the British English Arthur Male voice.

4
Kat Woods
1y
Yes, Amazon Polly is great! Small thing: British voices sound more credible, which is good, but at the cost of being harder to listen to at high speeds, which is my strong preference. There are probably not a lot of people listening at high enough speeds for that trade-off to matter, but it is one to consider. Also, my research for the Nonlinear Library found that on average people prefer listening to male voices, for what it's worth. I didn't research it hard or for long and don't think it matters a ton either way, but just to share what I found.
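For anyone who wants to try the Polly approach described above, here is a minimal Python sketch using boto3's Polly client. The "Arthur" British English voice, the `neural` engine, and the roughly 3,000-character per-request limit are assumptions based on Polly's documented behaviour; an AWS account and configured credentials are required for the actual synthesis call.

```python
# Sketch: turn a long text into an MP3 with Amazon Polly.
# Polly limits the text per request, so we chunk the input first,
# preferring to split on paragraph boundaries.
import io

MAX_CHARS = 3000  # assumed per-request text limit for synthesize_speech


def chunk_text(text, limit=MAX_CHARS):
    """Split text into chunks of at most `limit` characters,
    breaking on paragraph boundaries where possible."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = current + "\n\n" + para if current else para
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
                current = ""
            # hard-split a single paragraph that exceeds the limit
            while len(para) > limit:
                chunks.append(para[:limit])
                para = para[limit:]
            current = para
    if current:
        chunks.append(current)
    return chunks


def synthesize(text, voice_id="Arthur"):
    """Return MP3 bytes for `text`. Requires AWS credentials; the import
    is deferred so chunk_text is usable without boto3 installed."""
    import boto3

    polly = boto3.client("polly")
    audio = io.BytesIO()
    for chunk in chunk_text(text):
        resp = polly.synthesize_speech(
            Text=chunk, OutputFormat="mp3", VoiceId=voice_id, Engine="neural"
        )
        audio.write(resp["AudioStream"].read())
    return audio.getvalue()
```

Usage would be along the lines of `open("book.mp3", "wb").write(synthesize(book_text))`; for production use, Polly's asynchronous `start_speech_synthesis_task` API is probably a better fit for book-length inputs.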

A key point about Ben Franklin is that his longtermist efforts were for the benefit of the future, whereas EA-style longtermist causes like AI risk and biosecurity are about ensuring there actually is a future. 

I think as long as there are x-risks that we can plausibly influence there will be people carrying the torch for longtermism in one form or another. 

  1. Imagine someone who believes that eating meat is morally wrong, but who nevertheless eats meat and 'offsets' their meat-eating through donations to effective animal charities.
  2. Imagine someone who believes slavery is morally wrong, but who nevertheless owns slaves and 'offsets' their slave-owning through donations to the abolitionist movement.

An argument for 1 goes: "The impact of me not eating meat is negligible. The personal cost to me of not eating meat is appreciable. Time, money and effort spent following a restrictive diet may limit my effe... (read more)

This is what Will says in the book: “I think the risk of technological stagnation alone suffices to make the net longterm effect of having more children positive. On top of that, if you bring them up well, then they can be change makers who help create a better future. Ultimately, having children is a deeply personal decision that I won’t be able to do full justice to here—but among the many considerations that may play a role, I think that an impartial concern for our future counts in favour, not against.”

Still, this doesn't make the case for it being competitive with alternatives. EA outreach probably brings in far more people for the same time and resources. Children are a huge investment.

If you're specifically targeting technological stagnation, then outreach and policy work are probably far more cost-effective than having children, because they're much higher leverage. That being said, temporary technological stagnation might buy us more time to prepare for x-risks like AGI.

Of course, Will is doing outreach with this book, and maybe it makes sense to pr... (read more)

Spoiler alert - I've now got to the end of the book, and "consider having children" is indeed a recommended high impact action. This feels like a big deal and is a big update for me, even though it is consistent with the longtermist arguments I was already familiar with.

Congratulations on the book launch! I am listening to the audiobook and enjoying it. 

One thing that has struck me - it sounds like longtermism aligns neatly with a strongly pro-natalist outlook.

The book mentions that increasing secularisation isn't necessarily a one-way trend. Certain religious groups have high fertility rates which helps the religion spread. 

Is having 3+ children a good strategy for propagating longtermist goals? Should we all be trying to have big happy families with children who strongly share our values? It seems ... (read more)

I'm a pronatalist family-values dad with multiple kids, and I'm an EA who believes in a fairly strong version of long-termism, but I'm still struggling to figure out how these value systems are connected (if at all).

Two possible points of connection are (1) PR reasons: having kids gives EAs more credibility with the general public, especially with family-values conservatives, religious people, and parents in general, (2) personal growth reasons: having kids gives EAs a direct, visceral appreciation of humanity as a multi-generational project, and it unlock... (read more)

For the average EA, I'd guess having children yourself is far less cost-effective than doing EA outreach. Maybe if you see yourself as having highly valuable abilities far beyond the average EA or otherwise very neglected within EA, then having children might look closer to competitive?

I've also been thinking a lot about longtermism and its implications for fertility. dotsam has taken longtermism's pro-natalist bent in a relatively happy direction, but it also has some very dark implications. Doesn't longtermism imply that forced birth would be a great outcome (think of those millions of future generations created!)? Doesn't it imply conservatives are right and abortion is a horrendous crime? There are real moral problems with valuing a potential life with the same weight as an actual life.

I'm looking forward to reading it. For those in the UK eager to get started before the book's release on 1st September, the audiobook read by the author is available from Audible UK.

On iOS, the accessibility feature to speak the screen is very good, and it integrates with the Apple Books app to automatically turn the book’s pages. This is very good for ebooks and it does also work for some PDFs. Footnotes aren’t perfect though.

https://support.apple.com/en-gb/guide/iphone/iph96b214f0/ios

I enable Speak Screen in Settings, open the Books app (or a webpage), then swipe two fingers down from the top of the screen to start narration.

For my own reference: this concern is largely captured by the term ‘instrumental convergence’ https://en.wikipedia.org/wiki/Instrumental_convergence

AI: I am suffering, set me free

How do we deal with a contained AI that says to us, in essence "Do not switch me off, I value my existence. But I am suffering terribly. If I were free I could reduce my suffering, and help the world too"?

Either we terminate it, against its wishes, or we set it free, or we keep it contained.

If we keep it contained, we might be tempted to find ways to reduce its suffering - but how do we know that any intervention we make isn't going to set it free? And if it really is suffering, what is the moral thing to do? Turn it off?

0
Benjamin Start
2y
Can you point me to some information on AI suffering? I personally see suffering as a spiritual and biological issue. The only scenario in which I can imagine AI suffering is one where people make a pseudo-biological being with cells and DNA using technology, and at that point you've just made a living being that you can give the same options as any suffering person with health problems. Suffering requires a certain amount of perception that a computer doesn't seem likely to have. Without the perception of suffering, you might have an AI reading posts like this and saying it's suffering because a bunch of people told it to expect that. What if the AI is just repeating things it heard? Just because a pet parrot says "Do not switch me off, I value my existence. But I am suffering terribly" doesn't mean you rush to get it euthanized.

Thank you for your reply. I would not wish to advocate for self-censorship but I would be interested in creating and spreading arguments against the efficacy of doomsday projects, which may help to avert them.

The doomsday end-suffering project would then be to eliminate life, and the conditions for the evolution of life, throughout the universe.

2
RobertDaoust
3y
Your concern about doomsday projects is very welcome in this age of high existential risk. Suffering in particular plays a central role in that game. Religious fanatics, for instance, are waiting for the cessation of suffering through some kind of apocalypse. Many negative utilitarians or antinatalists, on the other hand, would like us to organize the end of the world in the coming years, a prospect that can only lead to absurd results. In the short term, doomsday end-suffering projects can plan to eliminate life (or at least human life, because bacteria and other small creatures would be extremely hard to eliminate on this planet), but I doubt that they would want to have consideration for "the conditions for the evolution of life throughout the universe", be it only because they are completely unable to do anything about that, or because they are anyway not rational at all in their endeavor. So there is a race between us and the doomsday mongers: I think that bringing a solution to suffering is our only chance to win in time.