Former software entrepreneur, current dabbler in derivatives trading, business advice, and non-fiction writing
OK, well reworking the numbers with a 2/10 neutral point (and Imperial's latest figures as noted below):
Death is now a fall from 5.17 to 2 points, i.e. by 3.17 points, though presumably out of 8 not 10, as we've compressed our scale. So 4.5 years = 4.5 x 3.17/8 = 1.78 WALYs lost per death. So 1.9 to 24 million deaths = 3.4 to 43 million WALYs lost.
Presumably the WALYs lost by the financial crisis is also out of 8 not 10, i.e. 0.2/8 per person = 194 million WALYs. Which is 4.5 to 57 times worse than the deaths.
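The arithmetic above can be sanity-checked with a minimal sketch. All inputs are the figures quoted in these comments; the ~7.76 billion person-years used for the recession is my back-calculation from the 194 million WALY figure (0.2/8 per person), not a number stated in the original sources:

```python
# Sanity check of the WALY (well-being-adjusted life year) arithmetic above.
SCALE = 8.0  # usable points on the 0-10 scale once the neutral point is 2/10

# Deaths: a fall from 5.17 to the 2-point neutral level, for 4.5 years each
fall = 5.17 - 2.0                      # 3.17 points
walys_per_death = 4.5 * fall / SCALE   # about 1.78 WALYs per death
deaths_low, deaths_high = 1.9e6, 24e6
death_walys_low = deaths_low * walys_per_death
death_walys_high = deaths_high * walys_per_death

# Recession: a 0.2-point drop per person; person-years back-calculated
# from the quoted 194 million WALY total (an assumption, see above)
person_years = 7.76e9
recession_walys = 0.2 / SCALE * person_years

print(f"WALYs lost per death: {walys_per_death:.2f}")
print(f"Pandemic deaths: {death_walys_low/1e6:.1f} to "
      f"{death_walys_high/1e6:.1f} million WALYs")
print(f"Recession: {recession_walys/1e6:.0f} million WALYs")
print(f"Recession / deaths: {recession_walys/death_walys_high:.1f} to "
      f"{recession_walys/death_walys_low:.1f} times worse")
```

This reproduces the quoted figures: roughly 1.78 WALYs per death, 3.4 to 43 million WALYs from deaths, 194 million from the recession, and a ratio of about 4.5 to 57.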
I've just updated the figures (in footnote 7) using Imperial College's latest global forecast of deaths. Previously a global recession like the last one came out as about 1 to 4 times as bad as pandemic deaths (in terms of impact on well-being); now it is 2.8-35 times as bad.
These are important topics IMO.
I'm not an expert, but I assume (from a glance at the second paper) this is because the 1%-59% is a cost (opportunity cost), not a value of a life year as such; i.e. in a very poor country you can extend a life by a year for as little as $3, maybe with a vaccine or micronutrient supplement. Actually that seems an order of magnitude too low to me; but nonetheless, it's a great deal!
Indeed, though if working from existing 0-10 life satisfaction scores I don't think it's plausible that those who responded below 5/10 thought they'd be better off dead. (Maybe those responding below say 2/10 would.) Otherwise suicide rates would be far higher.
(But indeed some kind of calibration of death and worse-than-death states is needed more generally. E.g. it concerns me that almost all the bad in the world may be located in extreme pain that is hugely underweighted, and so almost all efforts to improve the world may be missing the point.)
Thanks for this. (I hope my summary of value of life was mostly right!)
Yes, I haven't really given any thought to what the best way of handling the situation would be, or would have been. It's clearly complex, given that there are sociological/political constraints too (e.g. how would the public react if x% of them died in a new, dramatic way, as contrasted with dying routinely from seasonal flu or traffic accidents?).
It seems to me a global recession could reduce income/employment, and hence quality of life, without having much effect on life expectancy. Indeed, I'm not sure the last recession had much, if any, identifiable effect on it: growth in life expectancy has slowed since 2008/9, and when I asked Paul F about this in a comment below one of his recent articles (which I have indeed been following), he attributed it to other things. So I wonder if looking only at saving lives is going to miss most of the damage.
I think Paul F is effectively combining quality with quantity of life in his dollar numbers, and converting to whole lives lost as a convenient way to express it, though I'm not completely sure. After all, dollars can be spent on quality or quantity of life.
I look forward to reading your analysis in due course!
Belatedly - thanks. I'm not sure what to make of this. That survey is quite large (30-50,000 people p.a.), so much larger than Eurobarometer, though smaller than ONS (around 150,000). Eurobarometer shows a large rise 1996-2016 (7.19 to 7.74/10), and the later-starting ONS shows a smallish but non-negligible rise 2012-2016 (7.45 to 7.67/10). Possibly again the question wording might have an influence.
But 5.2 to 5.3 is a rise, even if (statistically?) insignificant. It's unfortunate that the paper cites other surveys (in other countries) which confirm its claim of no effect, but doesn't cite these other UK surveys which suggest the opposite.
Since the ONS survey is much the largest, and also kind of confirmed by its findings on happiness (i.e. positive emotions), perhaps the reality is that there has indeed been a substantial rise since 2012, but only a small rise, or perhaps none, before that.
On a separate small point, I think your probability estimate for ESP is too low, for two reasons:
Firstly, it is a taboo topic (like UFOs and the Loch Ness monster), which scientists are therefore far more likely to dismiss from a position of ignorance, or with weakish arguments (e.g. 'it lacks an explanatory mechanism', 'much of the research methodology is flawed', or 'some of the research has been on fraudsters' - hardly disproof). Few skeptics have domain expertise, i.e. of having conducted or investigated research in the area.
Secondly, ESP covers quite a range of rather distinct phenomena. Only one has to be right for ESP to be true. And I'm not sure that all would require completely novel scientific principles (e.g. unknown physical forces); and the fact that our understanding of physics has gaps, and our understanding of consciousness certainly does, may well leave room for some form of ESP to be compatible with current science (not that that is essential).
Great article. I'm very late to the party in reading it & commenting, but I hope not too late to be of use!
I have three further reasons for epistemic immodesty in some circumstances. They all involve experts, or those who follow their advice, being overconfident about the experts' relevant knowledge. (Though I note your comments about debunking experts; none of these arguments show an amateur is better than some other, probably small, set of experts who have taken these considerations into account.)
You mention that expert views aren't relevant in matters of taste, i.e. preference. However, expert views are often based on non-explicit preferences, which some experts may even be unaware of themselves.
To start with a clear situation where preferences are involved: If I'm looking for a house to buy and trying to decide which one to choose, I may well consult experts in the field, such as an estate agent (realtor), a mortgage advisor, and an architect (if it may need building work). They may advise that I can't afford a house more than $x, or it will cost $y to do up, etc. But even with all their expert advice, this won't necessarily settle the matter of which house to buy, because I also have to *like* the house in question, want to live in that area, etc. So my decision involves both expert factual opinion and my personal preference; and I am the sole expert on the latter.
Now to take a less clear situation, currently topical in the UK: Brexit. Despite years of debate about this, which often includes discussion of experts and whether they should be trusted, I don't think I've heard anyone state clearly that it too mixes expert opinion and preference. Most economists say Brexit will harm the economy, and most voters opposed to Brexit assume this simply entails Brexit is a bad thing. But of course the issue is not only about money - various other considerations are involved (e.g. self-determination) - and the trade-off between these is a matter of preference. Some people with unusual preferences may have coherent reasons to oppose Brexit (e.g. I spoke to someone who voted based on the fact that animal welfare is taken more seriously in the UK than most other EU countries, a consideration she regarded as more important than the economy). So this is an example of a 'semi-hidden' preference - one where many people assume expert opinion is a silver bullet - perhaps including the experts themselves - and overlook the element of preference.
A different example is government guidelines on alcohol consumption. In the UK men are advised by experts to drink no more than (I think) 14 units per week. However, this advice is based on a trade-off between health and pleasure: if you really enjoy alcohol you may be happy to exchange a risk of significantly reduced health or longevity for drinking much more than 14 units. This trade-off is a preference, which the experts have made for you. (And AFAIK the trade-off they chose is arbitrary, not even based on research into say average preferences.)
Other topics may include preferences so hidden that even the experts are hardly aware of them. An example in EA would be the use of DALYs and QALYs (disability/quality-adjusted life years) as human welfare metrics in assessing charities & interventions. Some who work with these metrics may overlook, or perhaps be unaware of, their shortcomings. DALYs and QALYs as currently defined assume that no condition is worse than death - which is inconsistent with the existence of suicide and euthanasia. When ordinary people are surveyed, their views on this vary widely - some taking the (perhaps religious) position that nothing is worse than death, and suicide/euthanasia should never be allowed, whereas others have no problem with the idea of suicide/euthanasia to escape prolonged untreatable agony, for example. So the mere use of these units involves tacitly taking a position on this, i.e. a hidden preference. A resulting expert view that X charity or intervention is better than Y is therefore partly objective and partly subjective; the expert themself may overlook this fact, or even (when involving technical philosophical issues) be unaware of it.
Other unstated assumptions are widespread in EA, e.g. that saving lives is a good thing (even though the world may be overpopulated), or that the prevention of merely potential future humans by mass extinction is a bad thing (even though contraception is fine).
In such cases, a non-expert who identifies such a hidden preference that they don't share may well have good reason to disregard the expert opinion.
Relatedly, there is the issue of core assumptions that are largely unquestioned within a scientific field. A classic example is induction: physics assumes that just because in the past things seem to have behaved in a regular fashion, they will continue to do so. This is the basis of the belief in physical laws (and other laws of nature). Philosophers have long questioned this assumption; there really may be no reason to assume the sun will rise tomorrow, or that the speed of light was the same yesterday, or a million years ago, as it is today; which undermines all kinds of experiments and models. I expect many physicists are only dimly aware of this, know little of the arguments involved, and perhaps regard it as a quasi-theological debate not worth serious attention.
As with DALYs and QALYs, core assumptions like induction are often shaky, and the shakiness is often only taken seriously (or even known about) by those outside the field, e.g. philosophers. Indeed, articles of faith are often left unquestioned by true believers, lest they turn out to be an Achilles heel and (mixing metaphors) the whole edifice prove to be built on sand. To question foundational beliefs may be heresy.
So an amateur outsider may well be more aware of such problems than an expert in the field; and may therefore be justified in using them to dismiss expert opinion, or at least, to take it with a big pinch of salt.
Many experts are only expert in an extremely narrow field, yet may be assumed to have a broader range of expertise (and some experts may also believe this themselves).
Apologies, but the clearest example I can think of is myself! At one time I was one of just a handful of world experts in an extremely narrow field - the music notation software industry. (As I owned a company in this field.) My knowledge was extremely in-depth - I had spent years coding this kind of software, knew endless obscure feature requirements, knew all about the market, wrote manuals and brochures, etc. Yet in other respects I knew less than many amateurs. I had never used (and hardly even seen) any music notation software other than my own company's. I knew even less about other types of music software (e.g. sequencers), used by millions of people, often my own customers. So I was a world expert in a very narrow field, yet an ignoramus both in aspects of my own field, and in very close fields.
The same is presumably true elsewhere. Amateurs may know as much as a world expert who is only slightly outside their very narrow field, or even on topics within their specialism. And at least occasionally, experts are unaware of their ignorance on these things. That is, they may make the same false assumptions as others do about the breadth & depth of their expertise.
(An example: the book The Oxford Companion to the Mind is an encyclopedia edited by the eminent psychologist Richard Gregory. Some of the entries in the original edition are by Gregory himself, despite dealing with philosophy of mind & metaphysics, topics evidently outside his expertise. They are amateurish, making confusions that would embarrass a philosophy undergraduate. Even the blurb on the cover jacket casually conflated 'brain' and 'mind' in ways only an ignoramus would do. When I was a philosophy student I was so astonished by this I almost wrote a letter to Gregory suggesting he get someone with domain expertise to rewrite his entries.)
Actually there has been one change in method: in 1998 it was made illegal to sell large quantities of paracetamol, to make casual suicide harder. The suicide rate has been falling since then, but there was no sudden drop, so I'm not sure we can attribute much effect to that.