Jack_S


Comments

"If we want to draw in more experienced people,  it'd be much easier to just spin up another brand, rather than try to rebrand something that already has particular connotations."

This strikes me as probably incorrect. Creating a new brand is really hard, whereas minor shifts in branding to de-emphasise students would be fairly simple. In my experience, the EA brand and EA ideas appeal to a fairly broad range of older people. The problem is that loads of older people are really interested in EA ideas- think Sam Harris' audience or the median owner of a Peter Singer book- but find that: a) it's socially weird being around uni students; b) few of the materials, from 80k to intro fellowships, seem targeted at them; and c) it's much harder for them to commit to a social movement.

I've facilitated EA intro programs with participants of diverse ages, and the 'next steps' stage at the end of an intro fellowship looks very different for 20-year-olds than for 40-year-olds. For a 20-year-old, "just go to your uni EA group and get more involved" is a good level of commitment, whereas a 40-year-old has to make far more difficult choices. But a 40-year-old willing to commit time to EA is sending a costlier signal than a student doing the same, so I'm often bullish about their career impact.

My preferred solutions are fairly marginal- just making it a bit easier and more comfortable for older people to get involved: 1) groups like 80k putting a bit more effort into advice for later-career people; 2) events targeting older high-impact professionals (and more 'normal' older people- EA for parents is a good idea); 3) highlighting a few 'role models' who've become high-impact EAs later in life (on the EA intro course, for example, or as 80k podcast guests).

The claim that we wouldn't see a similar evolution of moral reasoning a second time doesn't seem weird to me at all. The claim that we should assume we've been exceptionally lucky (top ~10%) might be a bit weird. Despite a few structural factors (more complex, more universal moral reasoning develops alongside economic complexity), I see loads of contingency and path dependence in the way human moral reasoning has evolved. If we re-ran the last few millennia 1000 times, I'm pretty convinced we'd see significant variation in norms and reasoning, including:

  1. Some worlds with very different moral foundations- think a more Confucian variety of philosophy emerging in classical Athens rather than the Socratic-Aristotelian tradition. (The emergence of analytical philosophy in classical Athens seems like a very contingent event with far-reaching moral consequences.)
  2. Some worlds in which 'dark ages' of decay or stagnation in moral reasoning persisted for longer or shorter periods, or where intellectual revolutions never happened, or happened earlier.
  3. Worlds where empires with very different moral foundations from the British/American ones dominated most of the world during the critical modernisation period.
  4. Worlds where seemingly small changes would have had huge ethical implications- imagine the pork taboo persisting in Christianity, for example.

The argument that we've been exceptionally lucky is more difficult to examine over a longer timeline: we can imagine much better and much worse scenarios, and I can't think of a strong reason to lean either way. But over a shorter timeline we can make some meaningful claims about things that could have gone better or worse. It does feel like there are many ways the last few hundred years could have led to much worse moral philosophies becoming globally prominent- particularly if other empires (Qing, Spanish, Ottoman, Japanese, Soviet, Nazi) had become more dominant.

I'm fairly uncertain about this latter claim, so I'd like to hear from people with more expertise in world history or the history of moral thought to see whether they agree with my intuitions about these potential counterfactuals.

Agree with this completely.  

The fact that the same statistical manoeuvre could be used to downplay nuclear war, vaccines for diseases like polio, climate change, or AI risk should also be particularly worrying.

Another angle is that the number of deaths is directly influenced by the amount of funding. The article says that "the scale of this issue differs greatly from pandemics", but it could plausibly be the case that terrorism isn't an inherently less significant or less deadly issue- rather, counterterrorism funding works extremely well, and that's why deaths are so low.

Thanks for writing this up. This question came up in a Precipice reading group I facilitated last year. We also used the idea that collapse would be 're-rolling the dice' on values, and I think it's the right framing.

I recall that the 'better values' argument was:

  • We should assume that our current values are 'average': if we re-ran the last three millennia of human history 1000 times (from 1000 BC), the current spectrum of values should sit somewhere near the average of the values held by whatever civilization(s) emerged in the early 21st century.
  • But if you believe in moral progress, the starting point matters. In all but the most extreme collapse scenarios, we'd expect some kind of ethical continuity and therefore a much better starting point than humans had in 1000 BC.
  • Therefore a society that evolved post-collapse would probably end up with better moral values.

The 'worse values' argument was:

  • We should not assume that our current values are average; rather, we should assume we've been uncommonly lucky (top ~10% of potential scenarios).
  • This is because most historical societies have had much worse values, and it is largely by chance that we have avoided more dystopian scenarios (multi-century global rule by fascist or communist dictatorships, or modern technologically dominant theocracies).
  • Any collapse worthy of the name would lead to a societal reset and a return to pre-industrial, even pre-agricultural, norms; we'd probably lose education and literacy. That would be very similar to re-rolling the dice from an earlier point in history, so it's more likely that we would end up in one of those dystopian worlds.

We also discussed the argument that, if you're a longtermist who is very concerned about x-risk and you're confident (~70+%) that we would develop better values post-collapse, this may lead to the uncomfortable conclusion that collapse might be morally okay or desirable.

If I had to put a number on my estimates, I'd probably go for 55% better, 45% worse, with very high variation (hence the lack of a 'similar' option). 

Yeah, it's misleading.  

For a bit of context, the pneumococcal vaccine costs $2-$3.50 per dose. It can only be that cheap because the AMC (Advance Market Commitment) agreed to pay top-up prices on the first 200 million doses to incentivise development, and least-developed, GAVI-eligible countries can effectively get doses for free. I'm still sceptical about the low estimates, as there are surely many other costs to getting vaccines into arms, especially in neglected areas and conflict zones, and I'd assume that the 'low-hanging fruit' (very poor but easy-to-reach) kids are already being vaccinated.

I couldn't find the study supporting this, but I'd assume the low-end estimates were simply about how many lives could be saved by adding pneumococcal vaccines, free to the recipient country, to an existing vaccine schedule.

As for the actual pneumococcal vaccine cost-effectiveness estimates, according to Kremer: "At initial program prices, the pneumococcal vaccine rollout avoided the loss of a disability adjusted life year (DALY) at cost of only $83."

I suspected that, but it didn't seem very logical. AI might swamp x-risk, but seems unlikely to swamp our chances of dying young, especially if we use the model in the piece. 

Although he says he's more pessimistic about AI than his model suggests, within the model his estimates are well inside the range where other catastrophic risks would seriously change the results.

I did a rough estimate of nuclear war vs. natural risk using his very useful spreadsheet, loosely based on Rodriguez's estimates: a 0.39% annual chance of US-Russia nuclear exchange and a 50% chance of a Brit dying in it (I know some EAs have made much lower estimates, but this seems in line with the general consensus). In this model, nuclear risk comes out a bit higher than 'natural' risk over 30 years.
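For what it's worth, here's a minimal sketch of the arithmetic involved, using only the figures quoted above (0.39% annual exchange probability, 50% conditional chance of death). The 30-year horizon is my assumption for illustration- this is not the spreadsheet itself, and the 'natural' comparison figure would come from the original model.

```python
# Rough sketch of the nuclear-risk estimate described above.
# The 0.39% annual exchange probability and 50% conditional death probability
# are the figures quoted in the comment; the 30-year horizon is an assumption.

annual_p_exchange = 0.0039       # annual chance of a US-Russia nuclear exchange
p_death_given_exchange = 0.5     # chance a given Brit dies in such an exchange
years = 30

annual_p_death = annual_p_exchange * p_death_given_exchange
p_death_over_horizon = 1 - (1 - annual_p_death) ** years
print(f"P(dying in a nuclear exchange over {years} years) ~ {p_death_over_horizon:.1%}")
# ~5.7%, to be compared against the model's 'natural' risk figure over the same horizon
```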

Even if you're particularly optimistic about other GCRs, adding all the other potential catastrophic/ speculative risks together (pandemics, non-existential AI risk, nuclear, nano, other) would surely shift the model.

Wow, lots of disagreement points- I'm curious what people disagree with.

Thanks for the post, this is definitely a valuable framing. 

But I'm a bit concerned that the post creates the misleading impression that the whole catastrophic/ speculative risk field is completely overwhelmed by AI x-risk.

Assuming you don't believe that other catastrophic risks are completely negligible compared to AI x-risk, I'd recommend adding a caveat that this is only comparing AI x-risk and existing/ non-speculative risks. If you do think AI x-risk overwhelms other catastrophic risks, you should probably mention that too. 

Although many IDev professors (I'd estimate ~70%) are likely just poorly calibrated and have no incentive to look into the cost-effectiveness of interventions, many who do know about CEAs might still underestimate the cost to save a life.

For "the cost to save the life of a child" question, an IDev policy expert might take a different perspective. In my IDev masters, one prof in his 70s explained  that, if you've already paid the fixed costs of getting into the decision making process, it's very often possible to find low-hanging fruit policy changes that save more lives and cost less money (bottom right quadrant in the picture below, taken from one of his classes). 

I expect most EAs would be self-critical enough to see both of these as frequently occurring flaws in the movement, but I'd dispute the claim that they're foundational. On the first criticism: some people track personal impact, and 80k talks a lot about your individual career impact, but people working for EA orgs are surely thinking of their collective impact as an org rather than anything individual. In the same way, 'core EAs' have the privilege of identifying with the movement enough that they can internalise the impact of the EA community as a whole.

As for measurability, I agree that it's a bias in the movement, albeit probably a necessary one. The ecosystem example is an interesting one- I'd argue that it's not that difficult to approach ecosystem conservation from an EA perspective. We generally understand how ecosystems work and how they provide measurable, valuable services to humans. A cost-effectiveness calculation could start from the human value of ecosystem services (which environmental economists already estimate) and, if you want to give inherent value to species diversity, add the number of species in a given area, the number of individuals of those species, their rarity or external value, and so on. Then apply weights to these criteria to produce something like an 'ecosystem value per square metre' that you could compare across ecosystems (a rough sketch of this kind of calculation is below). Calculate what it costs to conserve various ecosystems around the world and, voila, you have a cost-effectiveness analysis that would feel at home on an EA platform. The reason this process doesn't feel 100% EA is not that it's difficult to measure, but that it can include value judgements that aren't related to the welfare of conscious beings.
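Here's a minimal sketch of the weighted 'ecosystem value per square metre' idea; every function name, weight, and number is a hypothetical illustration, not an established environmental-economics method.

```python
# Hypothetical sketch of an 'ecosystem value per square metre' score.
# All weights and inputs below are illustrative assumptions, not real estimates.

def ecosystem_value_per_m2(services_usd, species_count, rarity_score, area_m2,
                           w_services=1.0, w_diversity=100.0, w_rarity=500.0):
    """Combine measurable service value with weighted diversity/rarity terms."""
    total_value = (w_services * services_usd        # ecosystem-services value ($/yr)
                   + w_diversity * species_count    # inherent value of diversity
                   + w_rarity * rarity_score)       # extra weight for rare species
    return total_value / area_m2

def value_per_dollar(value_per_m2, area_m2, annual_conservation_cost):
    """Cost-effectiveness: ecosystem value protected per dollar of conservation spend."""
    return value_per_m2 * area_m2 / annual_conservation_cost

# Example comparison of two candidate sites with made-up numbers:
site_a = ecosystem_value_per_m2(2_000_000, 350, 0.8, 5_000_000)
site_b = ecosystem_value_per_m2(500_000, 120, 0.2, 1_000_000)
print(value_per_dollar(site_a, 5_000_000, 200_000),
      value_per_dollar(site_b, 1_000_000, 80_000))
```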
