dotsam

286 · Joined Jun 2021

Comments (14)

Should we be maximising expected value across many-worlds?

Assume the many-worlds interpretation of quantum mechanics is true.

Rather than pursuing high-upside, low-probability moonshots, which fail more often than they succeed, might it not be more effective to go for interventions that robustly generate value across as many worlds as possible?

The human alignment problem

Humans are subject to instrumental convergence as much as an AI would be. We seek power, resources and influence in pursuit of many of our goals.

Whatever our goals happen to be, we will want to use AI to increase our power so we can get what we value.

If people are augmenting their goal-seeking with AI, will we converge on harmonious goals, or will we continue to pursue parochial self-interest?

In short, if we somehow solve the alignment problem for AI, will we also solve the human alignment problem? Or will we simply race to use AI to maximise our own power and our own values, even if these harm others? 

The best hope is that if we solve AI alignment, the AI will keep us in check in a benevolent and minimally impactful way. It will prevent us from pursuing zero-sum goals and guide us to be better versions of ourselves. 

But this kind of control may well appear misaligned from our current perspectives, in that some people's cherished goals and values may not be the ones the AI chooses to support.

So to talk of aligned AI is to gloss over the possibility that it is likely to be misaligned with a great many people's current goals and ambitions.

You might consider creating a text-to-speech version by using e.g. Amazon Polly. Whilst imperfect, it is listenable and might be useful to people. Here is a sample generated with the British English Arthur Male voice.
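For anyone wanting to automate this, here is a minimal sketch of what the Polly call might look like with the AWS `boto3` SDK, using the same British English "Arthur" voice mentioned above. This assumes `boto3` is installed and AWS credentials are configured in the environment; the helper names here are illustrative, not from any existing tool.

```python
def polly_request(text: str) -> dict:
    """Build the synthesize_speech keyword arguments.

    Note: Polly caps plain text at 3000 billed characters per request,
    so a full article would need to be split into chunks first.
    """
    return {
        "Text": text,
        "VoiceId": "Arthur",      # British English male, neural-only voice
        "Engine": "neural",
        "OutputFormat": "mp3",
    }


def synthesize(text: str, out_path: str = "speech.mp3") -> None:
    """Send one chunk of text to Polly and save the resulting MP3."""
    import boto3  # AWS SDK; needs credentials configured in the environment

    polly = boto3.client("polly")
    resp = polly.synthesize_speech(**polly_request(text))
    with open(out_path, "wb") as f:
        f.write(resp["AudioStream"].read())
```

Splitting at paragraph boundaries and concatenating the resulting MP3 files is a reasonable way to stay under the per-request character limit.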

A key point about Ben Franklin is that his longtermist efforts were for the benefit of the future, whereas EA-style longtermist causes like AI risk and biosecurity are about ensuring there actually is a future. 

I think as long as there are x-risks that we can plausibly influence there will be people carrying the torch for longtermism in one form or another. 

  1. Imagine someone who believes that eating meat is morally wrong, but who nevertheless eats meat and 'offsets' their meat-eating through donations to effective animal charities.
  2. Imagine someone who believes slavery is morally wrong, but who nevertheless owns slaves and 'offsets' their slave-owning through donations to the abolitionist movement.

An argument for 1 goes: "The impact of me not eating meat is negligible. The personal cost to me of not eating meat is appreciable. Time, money and effort spent following a restrictive diet may limit my effectiveness to do good elsewhere. My donation is the optimal path to reducing animal suffering".

And an argument for 2 goes: "My slave-owning is very modest, and is a drop in the ocean in the big picture. I can effectively use the economic surplus generated by my slaves to end slavery sooner. If I free my slaves I'll be poorer and will have less money to donate, and so I'd do less good overall."

Whilst the situations are not symmetric, they are similar enough that I feel like I want to say "If you care about animals, you should support animal charities AND go vegan" in the same way I want to say "If you care about slaves, you should support abolition AND free your slaves".

This is what Will says in the book: “I think the risk of technological stagnation alone suffices to make the net longterm effect of having more children positive. On top of that, if you bring them up well, then they can be change makers who help create a better future. Ultimately, having children is a deeply personal decision that I won’t be able to do full justice to here—but among the many considerations that may play a role, I think that an impartial concern for our future counts in favour, not against.”

Spoiler alert - I've now got to the end of the book, and "consider having children" is indeed a recommended high impact action. This feels like a big deal and is a big update for me, even though it is consistent with the longtermist arguments I was already familiar with.

Congratulations on the book launch! I am listening to the audiobook and enjoying it. 

One thing that has struck me - it sounds like longtermism aligns neatly with a strongly pro-natalist outlook.

The book mentions that increasing secularisation isn't necessarily a one-way trend. Certain religious groups have high fertility rates which helps the religion spread. 

Is having 3+ children a good strategy for propagating longtermist goals? Should we all be trying to have big happy families with children who strongly share our values? It seems like a clear path for effective multi-generational community building! Maybe even more impactful than what we do with our careers...

This would be a significant shift in thinking for me -- in my darkest hours I have wondered if having children is a moral crime (given the world we're leaving them). It is also slightly off-putting, as it sounds like it is out of the playbook for fundamentalist religions.

But if I buy the longtermist argument, and if I assume that I will be able to give my kids happy lives and that I will be able to influence their values, it seems like I should give more weight to the idea of having children than I currently do.

I see that the UK total fertility rate has been below replacement level since 1973 and has been decreasing year on year since 2012. I imagine that EAs / longtermists are also following a similar trend.

Should we shut up and multiply?!

I'm looking forward to reading it. For those in the UK eager to get started before the book's release on 1st September, the audiobook read by the author is available from Audible UK.

On iOS, the accessibility feature to speak the screen is very good, and it integrates with the Apple Books app to automatically turn the book’s pages. It works very well for ebooks and also for some PDFs. Footnotes aren’t perfect, though.

https://support.apple.com/en-gb/guide/iphone/iph96b214f0/ios

I enable Speak Screen in settings, open the Book app (or a webpage) then swipe two fingers down from the top of the screen to start narrating.
