
Alexander de Vries

78 karma · Joined Nov 2021

Comments

So I did a fair bit more research on this subject for a post I'm writing, and from what I can tell, the Blanchflower study you mentioned makes exactly the mistake Bartram points out; if you use controls correctly, the U-shape only shows up in a few countries.

This study by Kratz and Brüderl is very interesting - it points out four potential causes of bias in the age-happiness literature and conducts its own analysis without those biases, finding a constant downward slope in happiness. I think it misses the second-biggest issue, though (after overcontrol bias): there is constant confusion between different happiness measures, as I described in my post above, and that really matters when studying a subject with effects this small.

If I ever have time, I'm planning on doing some kind of small meta-analysis, taking the five or so biggest unbiased studies in the field. I'd have to learn some more stats first, though :)

Thanks for doing the back-of-the-envelope calculation here! This made me view blood donation as significantly more effective than I did before. A few points:

  • Your second source doesn't exactly say that one third of blood is used during emergencies, but rather that one third is used in "surgery and emergencies including childbirth". Not all surgeries are emergencies, and not all emergencies are potentially fatal.
  • However, I think this is more than balanced out by the fact that, according to the same source, the other two thirds are used to treat "medical conditions including anemia, cancer, and blood disorders." A lot of those conditions are potentially fatal, so I think it probably ends up being more than one third of donated blood that goes to life-saving interventions.

I'd love to see someone do the full calculation sometime. Based on this, I expect that for a lot of people, donating blood is sufficiently effective that they should do it once in a while, even instead of an hour of effective work or earning-to-give.
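
To make that concrete, here's the rough shape of the calculation I have in mind, as a quick sketch. Every parameter except the one-third figure above is a made-up placeholder, so the output is only illustrative:

```python
# Very rough back-of-the-envelope sketch: expected lives saved per blood donation.
# All parameters are placeholders except the one-third figure from the source above.

fraction_lifesaving = 1 / 3    # share of donated blood going to potentially life-saving uses (plausibly higher, per the point above)
units_per_life_saved = 3.0     # placeholder: units of blood needed per life actually saved
counterfactual_weight = 0.5    # placeholder: chance your unit isn't simply replaced by another donor

lives_saved_per_donation = fraction_lifesaving * counterfactual_weight / units_per_life_saved
print(f"Expected lives saved per donation: {lives_saved_per_donation:.3f}")
# With these placeholders: roughly 0.056 lives per donation.
```

The real question is what reasonable values for the two placeholder parameters would be, which is exactly the full calculation I'd love to see.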

[I'll be assuming a consequentialist moral framework in this response, since most EAs are in fact consequentialists. I'm sure other moral systems have their own arguments for (de)prioritizing AI.]

Almost all the disputes over prioritizing AI safety are really epistemological rather than ethical, the two big exceptions being a disagreement about how to value future persons, and a disagreement about how to do ethics with very high numbers of people (Pascal's Mugging-adjacent situations).

I'll use the importance-tractability-neglectedness (ITN) framework to explain what I mean. The ITN framework is meant to figure out whether an extra marginal dollar to cause 1 will have more positive impact than a dollar to cause 2; in any consequentialist ethical system, that's enough reason to prefer cause 1. Importance is the scale of the problem, the (negative) expected value in the counterfactual world where nothing is done about it - I'll note this as CEV, for counterfactual expected value. Tractability is the share of the problem which can be solved with a given amount of resources; percent-solved-per-dollar, which I'll note as %/$. Neglectedness is comparing cause 1 to other causes with similar importance-times-tractability, and seeing which cause currently has more funding. In an equation:
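
$$\mathrm{CEV}_1 \times (\%/\$)_1 \;>\; \mathrm{CEV}_2 \times (\%/\$)_2$$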

Now let's do ITN for AI risk specifically:

Tractability - This is entirely an epistemological issue, and one which changes the result of any calculation by a lot. If AI safety is 15% more likely to be solved with a billion more dollars to hire more grad students (or maybe Terence Tao), few people who are really worried about AI risk would object to throwing that amount of money at it. But there are other models under which throwing an extra billion dollars at the problem would barely increase AI safety progress at all, and many are skeptical of spending vast amounts of money, which could otherwise help alleviate global poverty, on an issue with so much uncertainty.

Neglectedness - Just about everyone agrees that if AI safety is indeed as important and tractable as safety advocates say, it currently gets fewer resources than other issues on the same or smaller scales, like climate change and nuclear war prevention.

Importance - Essentially, importance is probability-of-doom [p(doom)] multiplied by how-bad-doom-actually-is [v(doom)], which gives us the expected (negative) value [CEV] in the counterfactual universe where we don't do anything about AI risk.

The obvious first issue here is an epistemological one: what is p(doom)? A 10% chance of everyone dying is a lot different from 1%, which is in turn very different from 0.1%. And some people think p(doom) is over 50%, or almost nonexistent! All of these numbers have very different implications for how much money we should put into AI safety.

Alright, let's briefly take a step back before looking at how to calculate v(doom). Our equation now looks like this:
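
$$p(\mathrm{doom}) \times v(\mathrm{doom}) \times (\%/\$)_{\mathrm{AI}} \;>\; \mathrm{CEV}_2 \times (\%/\$)_2$$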

Assuming that the right side of the equation is constant, we now have three variables that can move around: %/$, p(doom), and v(doom). I've shown that the first two have a lot of variability, which can lead to differences of multiple orders of magnitude in the results.

The 'longtermist' argument for AI risk is, plainly, that v(doom) is so unbelievably large that the variations in p(doom) and %/$ are too small to matter. This is based on an epistemological claim and two ethical claims.

Epistemological claim: the expected number of people (or other sentient beings) in the future is huge. An OWID article estimates it at between 800 trillion and 625 quadrillion given a stable population of 11 billion on Earth, while some longtermists, assuming things like space colonization and uploaded minds, go up to 10^53 or something like that. This is the Astronomical Value Thesis.

This claim, at its core, is based on an expectation that existential risk will effectively cease to exist soon (or at least drop to very very low levels), because of something like singularity-level technology. If x-risk stays at something like 1% per century, after all, it's very unlikely that we ever reach anything like 800 trillion people, let alone some of the higher numbers. This EA Forum post does a great job of explaining the math behind it.
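
(For a sense of how quickly that compounds: at a constant 1% risk per century, the chance of surviving N centuries is 0.99^N, which is already down to roughly 4 × 10^-5 after 1,000 centuries.)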

Moral claim 1: We should care about potential future sentient beings; 800 trillion humans existing is 100,000 times better than 8 billion, and the loss of 800 trillion future potential lives should be counted as 100,000 times as bad as the loss of today's 8 billion lives. This is a very non-intuitive moral claim, but many total utilitarians will agree with it.

If we combine the Astronomical Value Thesis with moral claim 1, we get to the conclusion that v(doom) is so massive that it overwhelms nearly everything else in the equation. To illustrate, I'll use the lowball estimate of 800 trillion lives:
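
$$p(\mathrm{doom}) \times 800{,}000{,}000{,}000{,}000\ \mathrm{lives} \times (\%/\$)_{\mathrm{AI}} \;>\; \mathrm{CEV}_2 \times (\%/\$)_2$$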

You don't need advanced math to know that the side with that many zeroes is probably larger. But valid math is not always valid philosophy, and it has often been said that ethics gets weird around very large numbers. Some people say that this is in fact invalid reasoning, and that it resembles Pascal's mugging, which infamously 'proves' things like that you should pay $10 for a one-in-a-quadrillion chance of getting 50 quadrillion dollars (after all, the expected value is $50).

So, to finish, moral claim 2: at least in this case, reasoning like this with very large numbers is ethically valid.

 

And there you have it! If you accept the Astronomical Value Thesis and both moral claims, just about any spending which decreases x-risk at all will be worth prioritizing. If you reject any of those three claims, it can still be entirely reasonable to prioritize AI risk, if your p(doom) and tractability estimates are high enough. Plugging in the current 8 billion people on the planet:
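
$$p(\mathrm{doom}) \times 8{,}000{,}000{,}000\ \mathrm{lives} \times (\%/\$)_{\mathrm{AI}} \;>\; \mathrm{CEV}_2 \times (\%/\$)_2$$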

That's still a lot of zeroes!

Context: there has recently been a new phase 1/2b RCT in The Lancet, confirming roughly 80% efficacy for the R21/MM malaria vaccine (and confirming that booster shots work).

Quoting https://www.bbc.com/news/health-62797776:

'Prof Hill said the vaccine - called R21 - could be made for "a few dollars" and "we really could be looking at a very substantial reduction in that horrendous burden of malaria".

He added: "We hope that this will be deployed and available and saving lives, certainly by the end of next year."'

If the vaccine makes it through phase III trials, this seems intuitively like a much more effective malaria intervention than bednets.

I don't quite get your right foot/left foot example. Cassie says that Utopi-doogle will soon 'likely' (I'll assume this is somewhere around 70%-90%) explode, killing everyone, and that the only solution is for everyone to stamp the same foot at once; if they guess correctly which foot to stamp, they survive, and otherwise they die.

To me, it seems that the politician who starts the Left Foot Movement is attempting to bring them from 'likely' death (again, 70-90%), the case if nothing is done or if the foot stamps aren't coordinated, to a new equilibrium with 50% chance of death; either he is correct and the world is saved, or he is wrong and the world is destroyed.

How is this a Pascal's Mugging? If the politician's movement succeeds, x-risk is reduced significantly, right?

Great post! You convinced me that the Astronomical Value Thesis is less likely than I thought. 

I'd like to point out, though, that of the risks for which you labeled space exploration "less helpful", by far the largest is AI risk. But it seems to me that any AI which is capable of wiping out humans on other planets would also, if aligned, be capable of strongly reducing existential risk, and would therefore make the Time of Perils hypothesis true.

Plus, if you can convince the bots on Omegle as well, that gives us a head start on the alignment problem!

I agree with a lot of these points, but it seems like you're arguing for reading fiction, while the people you refer to are arguing for podcasts over non-fiction books. It would be interesting to see someone fully articulate the case for reading a non-fiction book rather than a summary of its main points, or a podcast where the author explains the whole thing in far less time.

Thanks! Glad this works well as a summary.

I haven't looked into the age-happiness curve all that much, but from the studies I have read, I think it's a bit suspicious that none of them seem to control for ERS. If ERS really is U-shaped (which seems to be a slight majority opinion), then a lot or even all of the age-happiness curve could be explained by that. Then again, surely someone in the field would have found that out by now if it were true, right? I might look into it further in a future post.

Thanks for the link! I hadn't seen it before, definitely useful information.

No, I hadn't heard of qualia formalism yet, though it sounds kind of like the way I implicitly conceptualize qualia. Principia Qualia seems really intriguing and I'll definitely be reading it!
