Epistemic status: probably raving lunacy

Summary

  • EA discourse is mostly “here are some problems we need to solve”.
  • There isn’t much “here are some opportunities for good we can capitalise on”.
  • I don’t see any reason why problem-altruism would be a better strategy for doing good than opportunity-altruism. In some sense they seem like the same thing.
  • But problem-altruism feels anxious, and opportunity-altruism feels exciting.
  • So if there really isn’t any reason why problem-altruism is more effective, then EA is spreading anxiety for no reason. This is bad utilitarianism, bad for community health, and bad marketing.

Other people have discussed this before, but I think it’s still an under-rated point in EA.

Note: I have self-plagiarised from a couple of my previous posts.

Being an effective altruist can suck

The world has so many problems and there's so much you could be doing to help. Donating more. Advocating more. Researching more.

This can make me feel like:

  • I have to keep giving until there's nothing left for me
  • Even then, I can never live up to my moral obligations
  • I am therefore a bad person

When I ran a small survey of EAG attendees, two out of thirteen participants brought up “impact anxiety” (the feeling that you’re not doing enough to solve the world’s problems).

On top of that, EA discourse often implies or asserts:

  • The world is a terrible place full of suffering
  • Everyone is likely to die soon from AI or pandemics or something

Which makes me feel anxious and panicky.

Nowadays I don’t worry so much about moral obligations or existential risk, but in former years I gave myself some pretty hard spankings - both emotionally and financially - because from reading EA material I thought that was the right thing to do.

Is this utility loss necessary? I suspect not. In fact I think it’s counter-productive.

Opportunities vs Obligations

I started thinking about this after listening to “the most successful EA podcast of all time”, in which Will MacAskill convinces Sam Harris to start donating substantially to global poverty charities. He does this by reframing Peter Singer’s drowning child thought experiment: the fact that you can save a child’s life for a mere couple of thousand dollars by donating to global health charities doesn’t mean that you are obliged to donate. It means that you have an opportunity to do good very cheaply.

Here’s MacAskill on why this works:

It is notable that Peter Singer made these arguments around giving for almost four decades with comparatively little uptake, certainly compared to the last 10 years of the effective altruism movement. And my best hypothesis is that a framing that appeals to guilt lowers motivation. You don't often start doing things on the basis of guilt. We’ve moved to [messaging centred on] inspiration and say, “No, this is an amazing opportunity we have.”

And Harris:

If you're philosophically and ethically sensitive, and you go down this path with Singer, you'll probably still live more or less the way you want, but you'll periodically feel like a total hypocrite. Now Will's emphasis on the opportunity of giving cuts through this. Forget about measuring yourself against a standard of perfection and just realise that, by dint of sheer good luck, you get to do a tremendous amount of good in this world whenever you want.

In summary, pitching altruism as an opportunity rather than an obligation makes it more appealing and results in more uptake.

Can we generalise this lesson? Are there other opportunities to reframe an idea that feels negative as something that feels positive? Could this be an easy way to give EA ideas more traction?

Sidenote: MacAskill’s new book is called “What We Owe the Future”. “Owe” implies obligation. Does the opportunity-not-obligation policy not apply to longtermism?

Keep your status quo low

Most EA discussion looks something like this:

Oh no! A Bad Thing is happening or will happen! We need to stop it!

And not like this:

Hey, wouldn't it be amazing if Good Thing happened? How can we cause it?

Why? Is there a structural reason why "preventing bad" is a better altruism strategy than "causing good"?

There might be within certain philosophies (such as prioritarianism and deontology), but to me "preventing bad" and "causing good" look like they're actually the same thing. They are both situations where:

  • there are two possible futures
  • one is better than the other
  • we cause the better future to come about

At this level of analysis, the distinction between "preventing bad" and "causing good" is purely psychological: it depends on where you set your status quo.

If the status quo is the better future, then:

  • if the worse future happens then we have failed and things have gotten worse
  • if the better future happens then... nothing

If the status quo is the worse future, then:

  • if the worse future happens then... nothing
  • if the better future happens then we have achieved and things have gotten better

So by mere psychological framing, altruism can either feel like clawing your way out of a utility hole, or like blasting off in a utility rocket. Do you put the baseline (the ground) above you or below you?

In order to turn altruism from a horrible slog into an exciting adventure, always set your status quos as low as they will go.

This one weird trick creates infinite utility

The idea of setting your status quo as low as possible has an analogy in von Neumann-Morgenstern utility theory.

Because preferences over lotteries only determine utility functions up to positive affine transformation, you can add any number of utility points to your utility function "for free" without it impacting any actual decisions you make. You can arbitrarily declare that the universe starts out with a trillion trillion utils, and that the grand sum of good achieved by humanity is just the tiniest sliver of icing on the cosmic cake.

And if you're not doing time discounting, you can even declare that the universe automatically gets a billion new utils every day, so that no matter how much we screw up, the universe is always getting better and better.
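To see why this is "free", here's a minimal sketch (in Python, with made-up lotteries and utility numbers): an expected-utility maximiser picks the same option no matter how you positively scale or shift its utility function.

```python
# Minimal sketch: preferences over lotteries are unchanged by positive affine
# transformations of the utility function (u -> a*u + b, with a > 0).
# Exact fractions are used so the trillion-trillion-util head start doesn't
# get lost to floating-point rounding. All numbers are made up.
from fractions import Fraction as F

def expected_utility(lottery, u):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * u(outcome) for p, outcome in lottery)

base_utils = {"status quo": 0, "small win": 5, "big win": 20}

lottery_a = [(F(1), "small win")]                              # a guaranteed small win
lottery_b = [(F(3, 10), "big win"), (F(7, 10), "status quo")]  # a gamble on the big win

shifts = [(1, 0), (1, 10**24), (3, 10**24)]  # original; +a trillion trillion utils; scaled too
for a, b in shifts:
    u = lambda outcome: a * base_utils[outcome] + b
    best = max([lottery_a, lottery_b], key=lambda lot: expected_utility(lot, u))
    print(f"a={a}, b={b} -> prefers {'A' if best is lottery_a else 'B'}")
# The preferred lottery is B every time: the cosmic head start changes nothing
# about the decision, only about how big the running total feels.
```

The "billion new utils every day" version works the same way: adding a term that doesn't depend on your choices can't change which choice maximises expected utility.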

These mathematical tricks are fundamentally pointless indulgences. But I feel much better about the world if I imagine the cosmic utility counter with a very big number on it, and always clicking up.

Existential Opportunities

When we apply the status quo trick to global poverty, we go from “there is a crushing moral obligation to help the poor” to “utils are on sale at the util store for CRAZY low prices”.

What’s the equivalent transmutation for existential risk? How do we put a positive spin on the possibility of everyone dying soon?

One way of thinking about the Fermi Paradox is that technological civilisation isn’t supposed to exist in the universe. Of all the billions of planets in our galaxy that have been sitting around for billions of years, none of them have covered the Milky Way in Dyson spheres, or even emitted detectable radio signals.

And yet here we are, poised to take over the Milky Way within a mere million years or so. The blink of a cosmic eye.

We are an insane outlier. Every day that our species gets a step closer to galactic colonisation is a fresh miracle.

So I don’t think of the possibility of extinction as an unimaginable catastrophe in which we are deprived of our cosmic birthright. It’s totes bananas that we got this far at all, and I’m just excited to see how much further we can push our luck.

And suppose we do beat the odds and colonise the galaxy? Then what? What wild things could we do with this scale of resources? What possibilities get opened up? Should we start planning for them now?

Making drEAms come true

So far I’ve talked about psychological framing that doesn’t bear on actual policy. But there’s also the question of where you look for altruism opportunities. Do you look for bad things that could happen to people (or animals or whatever) and try to make these happen less in expectation, or do you look for good things that could happen to people and try to make these happen more in expectation?

EA seems to almost exclusively do the first thing and look for ways to prevent changes for the worse. I don’t understand why this is. It’s so much more inspiring and hopeful to aim for filling the world with smiles and making millions of people’s dreams come true. There are some reasons why it might be easier on average to prevent changes for the worse than cause changes for the better. But I hardly see anyone even looking for ambitious projects to bring joy to the masses.

For concreteness, let’s imagine a specific person called Jane.

Here are some bad things that could happen to Jane:

  • She could die.
  • She could get sick.
  • She could be run over by a self-driving car.

EA causes are largely trying to prevent global-scale versions of these bad things:

  • Death: existential risk reduction, and nuclear security.
  • Sickness: global health and pandemic prevention.
  • Self-driving car: AI alignment.

But there are also some profoundly positive things that could happen to Jane:

  • She could meet the love of her life.
  • She could experience a work of art that makes sense of her life.
  • She could land a dream job, and do what she loves for a living.

Are there any ways to make these things happen at the global scale? Has anyone ever tried? Has anyone even brainstormed strategies?

  • Could you leverage technology to solve human loneliness? Like, what if there was a dating app that actually tried to compute optimal global mate pairings rather than simply maximise profit? There was a startup that made an effort in this direction, but it wasn’t financially sustainable. (A sketch of what “optimal global pairings” could even mean follows this list.)
  • Or what if you analysed databases of book/movie/music/show/visual-art reviews to look for pieces that are really good but haven’t been widely experienced? There must be some mind blowing stuff out there that’s been drowned out by garbage with commercial or political interests behind it.
  • There’s that Levitt study which suggested that if you just told lots of people to quit the jobs they hate, you could add 5 points to their quality of life on a 10 point scale. Bonkers!
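
To make the first idea a bit more concrete, here's a minimal sketch of what "computing optimal global mate pairings" could mean if you treat it as an assignment problem. Everything here is hypothetical: the compatibility scores are random placeholders standing in for whatever predictive model you'd actually need, which is of course the hard part.

```python
# Hypothetical sketch: "optimal global mate pairing" framed as an assignment
# problem. compatibility[i, j] is a placeholder for a predicted compatibility
# score between person i in group A and person j in group B; producing good
# scores is the real problem and is not addressed here.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
compatibility = rng.random((6, 6))  # stand-in for a learned compatibility model

# Pick the pairing that maximises *total* compatibility across the whole pool,
# rather than greedily matching one user at a time the way an
# engagement-optimised app effectively does.
rows, cols = linear_sum_assignment(compatibility, maximize=True)
for i, j in zip(rows, cols):
    print(f"A{i} <-> B{j}  (score {compatibility[i, j]:.2f})")
print(f"total compatibility: {compatibility[rows, cols].sum():.2f}")
```

At a global scale you'd need something cleverer than the Hungarian algorithm (which is roughly cubic in the number of people), but the point stands: "maximise the total welfare of the whole pool" is a well-defined, computable objective, and it's a different one from "maximise engagement".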

How much utility could these schemes generate? How much of your life would you give up to find your soulmate, experience the best art, and escape the rat race? It’s gotta be years, right? And these projects could in principle scale to infinity, so the best-case scenario is adding whole QALYs of value to billions of people. Or even orders of magnitude more if the solutions can be continued far into the future.
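
As a toy back-of-envelope (both inputs are illustrative guesses, not estimates from any source):

$$\underbrace{10^9 \text{ people reached}}_{\text{guess}} \times \underbrace{1 \text{ QALY each}}_{\text{guess}} = 10^9 \text{ QALYs},$$

which is the "billions of QALYs" ballpark before counting anything that compounds into the future.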

“But none of that matters if humanity gets wiped out by a pandemic or AI”

Correct.

I agree that it’s imperative to protect humanity from existential risks.

But I think it’s also important to put some resources into strategically making the world a happy place. It’s good for morale. Good for optics. Robustly good even if we’re fundamentally wrong about a lot of stuff. It keeps the focus on the actual good stuff you can do with life, rather than getting lost in the abstraction of preserving human life per se.

The part where I talk about AI

Is there a positive/inspiring way to think about AI alignment?

Yes, I think so. But first…

the bittEAr lesson

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to. Time spent on one is time not spent on the other. There are psychological commitments to investment in one approach or the other. And the human-knowledge approach tends to complicate methods in ways that make them less suited to taking advantage of general methods leveraging computation.

This quote comes from The Bitter Lesson, a 2019 essay by AI researcher Richard Sutton. Sutton goes on to specify that search and machine learning, which can leverage arbitrary amounts of compute, will generally outcompete other approaches to AI. This trend has been seen in chess, go, computer vision, and speech recognition. So search and machine learning are the subfields that AI researchers should be focusing on.

But if we generalise Sutton’s argument a bit, any task which computers could conceivably do will eventually be dominated by ML. This includes artistic, commercial, scientific, engineering, geopolitical, and altruistic enterprises.

So the bad news is that unless Moore’s Law stops, reinforcement learners will eventually rule the world.

But the positive version is that if we can figure out how to turn compute into utility, then we automatically get exponentially increasing utility for constant cost over time.

AI alignment from a different angle

This is a bit of a tangent, but coming at AI alignment from the Bitter Lesson angle makes it seem much more grounded and tractable to me.

It doesn't rely on any weird sci-fi ideas like recursive self-improvement or spontaneous agency. The only assumptions are that Moore's Law will continue and that computers can do good or harm in a way that scales. And there’s no magic singularity point where AI qualitatively changes. We’re just looking at the asymptotic behaviour and speculating that a compute-based altruism strategy will eventually dominate.

But focusing on "finding strategies for turning compute into utility that scale to infinity" implies quite different kinds of work than what we normally see in AI alignment. Rather than thinking up clever ways to control and comprehend artificial super-agents, we instead look for concrete problems that people care about and try to set up infrastructure where a computer can run experiments and iteratively learn how to solve the problem better. Again, this feels much more grounded and like something that could concretely help people in the near to medium term.

Some of the project ideas already discussed might be good candidates for scalable compute-to-utility converters: a human mate matching system, a consumption recommendation system, or a job matching system.
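
To show the shape of such a system, here's a minimal sketch written as a toy epsilon-greedy bandit. Everything in it is hypothetical: candidate_interventions and measure_benefit are placeholders for whatever real interventions and real feedback signal a project like this would need, and getting the feedback signal right is most of the work.

```python
# Toy sketch of a "compute-to-utility converter": a loop that runs experiments
# and iteratively learns which intervention produces the most measured benefit.
# candidate_interventions and measure_benefit are hypothetical placeholders.
import random

candidate_interventions = ["recommendation_A", "recommendation_B", "do_nothing"]

def measure_benefit(intervention: str) -> float:
    """Placeholder for a real, noisy feedback signal (e.g. a wellbeing survey)."""
    true_effect = {"recommendation_A": 0.3, "recommendation_B": 0.5, "do_nothing": 0.0}
    return true_effect[intervention] + random.gauss(0, 0.2)

totals = {i: 0.0 for i in candidate_interventions}
counts = {i: 0 for i in candidate_interventions}

for _ in range(10_000):  # more compute = more experiments = better estimates
    if random.random() < 0.1:  # explore 10% of the time
        choice = random.choice(candidate_interventions)
    else:                      # otherwise exploit the current best estimate
        choice = max(candidate_interventions,
                     key=lambda i: totals[i] / counts[i] if counts[i] else float("inf"))
    totals[choice] += measure_benefit(choice)
    counts[choice] += 1

print({i: round(totals[i] / counts[i], 3) for i in candidate_interventions})
```

The Bitter Lesson framing is just that the more compute you pour into a loop like this (more experiments, more candidate interventions, richer models of the feedback signal), the better it gets, without anyone having to hand-design the answer.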

Objection: what if Moore's Law ends?

Wikipedia indicates that Moore's Law will end around 2025:

Most forecasters, including Gordon Moore, expect Moore's law will end by around 2025. Although Moore’s Law will reach a physical limitation, some forecasters are optimistic about the continuation of technological progress in a variety of other areas, including new chip architectures, quantum computing, and AI and machine learning.

However, the Bitter Lesson actually talked about a generalised Moore's Law (exponentially falling cost per unit of computation), whereas Wikipedia is just talking about Moore's Law per se (exponentially growing transistor counts on integrated circuits). So it's unclear whether this has any bearing on our discussion.

Basically it's unclear if and when (generalised) Moore's Law will end. As a first-pass analysis, we can use Lindy's Law and say "as a median estimate, we are in the middle of Moore's Law's lifespan". Moore's Law started ~50 years ago, so we should expect it to last another 50, taking us to ~2070. Given a doubling every two years, that's another 25 doublings, so our median estimate is a further 2^25 ≈ 34 million-fold increase in compute per dollar.

But presumably we should assign some probability to Moore's Law continuing past 2070, and indeed, arbitrarily far into the future. And it turns out that this means that a "compute-to-utility converter that scales to infinity" actually has infinite expected utility.

We can use Gott's formula to model the probability of Moore's Law surviving up to time $t$, which is hyperbolic:

$$P(\text{Moore's Law survives to } t) \propto \frac{1}{t}$$

I'm only interested in asymptotic behaviour, so have omitted a scaling constant. If our compute costs are fixed, then compute is increasing exponentially with time, $C(t) \propto e^{kt}$. And as long as utility is at least linear in compute, the expected utility at time $t$ is:

$$\mathbb{E}[U(t)] \propto \frac{e^{kt}}{t}$$

Therefore the total expected utility generated by time $T$ is

$$\int^{T} \frac{e^{kt}}{t}\,dt \;\sim\; \frac{e^{kT}}{kT}$$

So under highly plausible assumptions, the utility machine will generate on the order of $e^{kT}/(kT)$ expected utils by time $T$, which grows without bound as $T \to \infty$.
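
A quick numerical sanity check of that divergence, with an arbitrary illustrative growth constant (k = 0.05 per year) and the scaling constants still omitted:

```python
# Numerical sanity check: the expected utility rate exp(k*t)/t, integrated up
# to a horizon T, keeps growing without bound as T increases.
# k = 0.05/year is an arbitrary illustrative constant; scaling factors omitted.
from math import exp
from scipy.integrate import quad

k = 0.05
t0 = 50.0  # "today", in years since Moore's Law began

for T in [100, 200, 400, 800]:
    total, _ = quad(lambda t: exp(k * t) / t, t0, T)
    print(f"T = {T:4d}: expected utils ~ {total:.3e}")
# The totals grow roughly like exp(k*T)/(k*T): no finite horizon caps them.
```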

In conclusion…

Always try to make doing altruism more enjoyable. This is especially true if it starts feeling like a depressing slog, because depression will slow you down. Look for different angles to your altruistic thinking which feel more inspiring and positive. If there are two ways to do altruism, where one way is fun and the other way is crap, then utilitarianism compels you to do altruism the fun way to generate more utils.
