VG

Vasco Grilo

5323 karma · Joined Jul 2020 · Working (0-5 years) · Lisbon, Portugal
sites.google.com/view/vascogrilo?usp=sharing

Bio

Participation: 4

How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there, but you guess I might be underrating them?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering, and part-time or full-time paid work. In this case, I typically ask for 20 $/h, which is roughly equal to 2 times the global real GDP per capita.

Comments (1220)

Topic contributions (25)

Great post, titotal!

Why should a person focus on your issue in particular, rather than

It looks like you meant to write something after this.

One way to react to this would be to try and compensate for motivation gaps when evaluating the strengths of different arguments. Like, if you are evaluating a claim, and side A has a hundred full time advocates but side B doesn’t, don’t just weigh up current arguments, weigh up how strong side B’s position would be if they also had a hundred full time advocates. A tough mental exercise!

Relatedly, there is this post from Nuño Sempere.

Summary

This document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like 

  • selection effects at the level of which arguments are discovered and distributed
  • community epistemic problems, and 
  • increased uncertainty due to chains of reasoning with imperfect concepts 

as real and important. 

I still think that existential risk from AGI is important. But I don’t view it as certain or close to certain, and I think that something is going wrong when people see it as all but assured. 

Thanks for the follow up, Matthew! Strongly upvoted.

My best guess is also that additional GHG emissions are bad for wild animals, but it has very low resilience, so I do not want to advocate for conservationism. My views on the badness of the factory-farming of birds are much more resilient, so I am happy with people switching from poultry to beef, although I would rather have them switch to plant-based alternatives. Personally, I have been eating plant-based for 5 years.

Moreover, as Clare Palmer argues

Just flagging that this link seems broken.

I think you have misinterpreted what my article about discounting is recommending.

Sorry! It sounded so much like you were referring to Weitzman 1998 that I actually did not open the link. My bad! I have now changed "That paper says one should discount" to "One should discount".

a traditional justification for discounting is that if we didn’t, we’d be obliged to invest nearly all our income, since the number of future people could be so great.

I do not think this is a good argument for discounting. If it turns out we should invest nearly all our income to maximise welfare, then I would support it. In reality, I think the possibility of the number of future people being so great is more than offset by the rapid decay of how much we could affect such people, such that investing nearly all our income is not advisable.

I argue for discounting damages to those who would be much better off than we are at conventional rates, but giving sizable—even if not equal—weight to damages that would be suffered by everyone else, regardless of how far into the future they exist.

This rejects (perfect) impartiality, right? I strongly endorse expected total hedonistic utilitarianism, so I would rather maintain impartiality. At the same time, the above seems like a good heuristic for better outcomes even under fully impartial views.

Nice points, Matthew!

(a) It wasn't clear to me that the estimate of global heating damages was counting global heating damages to non-humans.

I have now clarified that my estimate of the harms of GHG emissions only accounts for humans. I have also added:

estimated the scale of the welfare of wild animals is 4.21 M times that of farmed animals. Nonetheless, I have neglected the impact of GHG emissions on wild animals due to their high uncertainty. According to Brian Tomasik:

“On balance, I’m extremely uncertain about the net impact of climate change on wild-animal suffering; my probabilities are basically 50% net good vs. 50% net bad when just considering animal suffering on Earth in the next few centuries (ignoring side effects on humanity's very long-term future).”

In particular, it is unclear whether wild animals have positive/negative welfare.

I have added your name to the Acknowledgements. Let me know if you would rather remain anonymous.

(b) It appears you were working with a study that employed a discount rate of 2%. That's going to discount damages in 100 years to 13% of their present value, and damages in 200 years to 1.9% of their present value--and it goes downhill from there. But that seems very hard to justify. Discounting is often defended on the ground that our descendants will be richer than we are.

Carleton 2022 presents results for various discount rates, but I used the ones for their preferred value of 2 %. I have a footnote saying:

“Our preferred estimates use a discount rate of 2 %”. This is 1.08 (= 0.02/0.0185) times the 1.85 % (= (17.5/9.72)^(1/(2022 - 1990)) - 1) annual growth rate of the global real GDP per capita from 1990 to 2022. The adequate growth rate may be higher due to transformative AI, or lower owing to stagnation. I did not want to go into these considerations, so I just used Carleton 2022’s mainstream value.
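As a quick check of this arithmetic, and of the present-value figures in your comment, here is a minimal sketch (the only inputs are the 9.72 and 17.5 k$ GDP per capita values quoted above):

```python
# Sanity check of the figures quoted above.
gdp_per_capita_1990 = 9.72  # global real GDP per capita in 1990 (k$), as quoted above
gdp_per_capita_2022 = 17.5  # global real GDP per capita in 2022 (k$), as quoted above

growth_rate = (gdp_per_capita_2022 / gdp_per_capita_1990) ** (1 / (2022 - 1990)) - 1
print(f"Annual growth rate: {growth_rate:.2%}")                      # ~1.85 %
print(f"2 % discount rate / growth rate: {0.02 / growth_rate:.2f}")  # ~1.08

# Present value of damages under a 2 % annual discount rate.
for years in (100, 200):
    print(f"Damages in {years} years: {1.02 ** -years:.1%} of present value")  # ~13.8 % and ~1.9 %
```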


But that rationale doesn’t apply to damages in worst-case scenarios. Because they could be so enduring, these damages are huge in expectation.

I used to think this was relevant, but mostly no longer do:

  • One should discount the future at the lowest possible rate, but it might still be the case that this is not much lower than 2 % (for reasons besides pure time discounting, which I agree should be 0).
  • I believe human extinction due to climate change is astronomically unlikely. I have a footnote with the following. "For donors interested in interventions explicitly targeting existential risk mitigation, I recommend donating to LTFF, which mainly supports AI safety. I guess existential risk from climate change is smaller than that from nuclear war (relatedly), and estimated the nearterm annual risk of human extinction from nuclear war is 5.93*10^-12, whereas I guess that from AI is 10^-6".
  • I guess human extinction is very unlikely to be an existential catastrophe. "For example, I think there would only be a 0.0513 % (= e^(-10^9/(132*10^6))) chance of a repetition of the last mass extinction 66 M years ago, the Cretaceous–Paleogene extinction event, to be existential". You can check the details of the Fermi estimate in the post (the arithmetic is sketched after this list).
  • If your worldview is such that very unlikely outcomes of climate change still have meaningful expected value, the same will tend to apply to our treatment of animals. For example, I assume you would have to consider effects on digital minds.
  • I am open to indirect longterm effects dominating the expected value, but I suppose maximising more empirically quantifiable and less uncertain effects on welfare is still a great heuristic.
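Here is a quick check of the arithmetic in the bullets above (a minimal sketch; the inputs are just the figures quoted there, and what they mean is explained in the linked posts):

```python
import math

# Chance of a repetition of the Cretaceous–Paleogene extinction event being
# existential, as per the Fermi estimate quoted above.
p_existential = math.exp(-1e9 / (132e6))
print(f"{p_existential:.4%}")  # ~0.0513 %

# Guessed nearterm annual risks of human extinction quoted above.
risk_nuclear_war = 5.93e-12
risk_ai = 1e-6
print(f"AI risk is {risk_ai / risk_nuclear_war:.2e} times that from nuclear war")
```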

Can you give an example of what might count as "spending to save lives in wars 1k times as deadly" in this context?

For example, if one was comparing wars involving 10 k or 10 M deaths, the latter would be more likely to involve multiple great powers, in which case it would make more sense to improve relationships between NATO, China and Russia.

Thinking about the amounts we might be willing to spend on interventions that save lives in 100-death wars vs 100k-death wars, it intuitively feels like 251x is a way better multiplier than 63,000. So where am I going wrong?

You may be right! Interventions to decrease war deaths may be better conceptualised as preventing deaths within a given severity range, in which case I should not have interpreted literally the example in Founders Pledge’s report Philanthropy to the Right of Boom. In general, I think one has to rely on cost-effectiveness analyses to decide what to prioritise.

When you are thinking about the PDF of …, are you forgetting that … is not proportional to …?

I am not sure I got the question. In my discussion of Founders Pledge's example about war deaths, I assumed the value of saving one life to be the same regardless of population size, because this is what they were doing. So I did not use the ratio between the initial and final population.

Thanks for tagging me, Johannes! I have not read the post, but in my mind one should overwhelmingly focus on minimising animal suffering in the context of food consumption. I estimate the harm caused to farmed animals by the annual food consumption of a random person is 159 times that caused to humans by their annual GHG emissions.

Fig. 4 of Kuruc 2023 is relevant to the question. A welfare weight of 0.05 means that one values 0.05 units of welfare in humans as much as 1 unit of welfare in animals. Even then, prioritising beef reductions over poultry reductions would require a social cost of carbon of over 7 k$/t, whereas the United States Environmental Protection Agency (EPA) proposes one of 190 $/t. If one values 1 unit of welfare the same regardless of species (i.e. if one rejects speciesism), there is basically no way it makes sense to go from beef to poultry (ignoring effects on wild animals; see discussion below).

[Image: Kuruc 2023, Extended Data Fig. 4]

Thanks for the comment, Stan!

Using PDF rather than CDF to compare the cost-effectiveness of preventing events of different magnitudes here seems off.

Technically speaking, the way I modelled the cost-effectiveness:

  • I am not comparing the cost-effectiveness of preventing events of different magnitudes.
  • Instead, I am comparing the cost-effectiveness of saving lives in periods of different population losses.

Using the CDF makes sense for the former, but the PDF is adequate for the latter.

You show that preventing (say) all potential wars next year with a death toll of 100 is 1000^1.6 = 63,000 times better in expectation than preventing all potential wars with a death toll of 100k.

I agree the above follows from using my tail index of 1.6. It is just worth noting that the wars have to involve exactly, not at least, 100 and 100 k deaths for the above to be correct.

More realistically, intervention A might decrease the probability of wars of magnitude 10-100 deaths and intervention B might decrease the probability of wars of magnitude 100,000 to 1,000,000 deaths. Suppose they decrease the probability of such wars over the next n years by the same amount. Which intervention is more valuable? We would use the same methodology as you did except we would use the CDF instead of the PDF. Intervention A would be only 1000^0.6 = 63 times as valuable.

This is not quite correct. The expected deaths from wars with D_1 to D_2 deaths is α*M^α/(α - 1)*(D_1^(1 - α) - D_2^(1 - α)), where M is the minimum war deaths and α the tail index. So, for a tail index of 1.6, intervention A would be 251 (= (10^-0.6 - 100^-0.6)/((10^5)^-0.6 - (10^6)^-0.6)) times as cost-effective as B. As the upper bounds of the severity ranges of A and B get increasingly close to their lower bounds, the cost-effectiveness of A tends to 63 k times that of B. In any case, the qualitative conclusion is the same. Preventing smaller wars averts more deaths in expectation assuming war deaths follow a power law.
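As a sketch of the calculation (assuming, as in the post, that war deaths follow a power law with tail index 1.6; the function name below is just for illustration):

```python
alpha = 1.6  # tail index of war deaths, as in the post

def expected_deaths(d_1, d_2):
    """Proportional to the expected deaths from wars with d_1 to d_2 deaths,
    for a power law with tail index alpha (the omitted constant does not
    affect ratios)."""
    return d_1 ** (1 - alpha) - d_2 ** (1 - alpha)

# Intervention A targets wars with 10 to 100 deaths, B wars with 100 k to 1 M deaths.
print(expected_deaths(10, 100) / expected_deaths(1e5, 1e6))  # ~251

# For wars with exactly 100 vs exactly 100 k deaths, the ratio of deaths times
# probability density is (100 k / 100)^alpha.
print(1_000 ** alpha)  # ~63,000
```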

As an intuition pump we might look at the distribution of military deaths in the 20th century. Should the League of Nations/UN have spent more effort preventing small wars and less effort preventing large ones?

I do not know. Instead of relying on past deaths alone, I would rather use cost-effectiveness analyses to figure out what is more cost-effective, as the Centre for Exploratory Altruism Research (CEARCH) does. I just think it is misleading to directly compare the scale of different events without accounting for their likelihood, as in the example from Founders Pledge’s report Philanthropy to the Right of Boom I mention in the post.

When it comes to things that could be even deadlier than WWII, like nuclear war or a pandemic, it's obvious to me that the uncertainty about the death toll of such events increases at least linearly with the expected toll, and hence the "100-1000 vs 100k-1M" framing is superior to the PDF approach.

I am also quite uncertain about the death toll of catastrophic events! I used the PDF to remain consistent with Founders Pledge's example, which compared discrete death tolls (not ranges).

By "pre- and post-catastrophe population", I meant the population at the start and end of a period of 1 year, which I now also refer to as the initial and final population.

I guess you are thinking that the period of 1 year I mention above is one over which there is a catastrophe, i.e. a large reduction in population. However, I meant a random unconditioned year. I have now updated "period of 1 year" to "any period of 1 year (e.g. a calendar year)". Population has been growing, so my ratio between the initial and final population will have a high chance of being lower than 1.

Oh, I didn't mean for you to define the period explicitly as a fixed interval period. I assume this can vary by catastrophe. Like maybe population declines over 5 years with massive crop failures. Or, an engineered pathogen causes massive population decline in a few months.

Hi @MichaelStJules, I am tagging you because I have updated the following sentence. If there is a period longer than 1 year over which population decreases, the power laws describing the ratio between the initial and final population of each of the years following the 1st could have different tail indices, with lower tail indices for years in which there is a larger population loss. I do not think the duration of the period is too relevant for my overall point. For short and long catastrophes, I expect the PDF of the ratio between the initial and final population to decay faster than the benefits of saving a life, such that the expected value density of the cost-effectiveness decreases with the severity of the catastrophe (at least for my assumption that the cost to save a life does not depend on the severity of the catastrophe).

I just wasn't sure what exactly you meant. Another interpretation would be that P_f is the total post-catastrophe population, summing over all future generations, and I just wanted to check that you meant the population at a given time, not aggregating over time.

I see! Yes, both P_i and P_f are population sizes at a given point in time.

I think that the risk of human extinction over 1 year is almost all driven by some powerful new technology (with residues for the wilder astrophysical disasters, and the rise of some powerful ideology which somehow leads there). But this is an important class! In general dragon kings operate via something which is mechanically different than the more tame parts of the distribution, and "new technology" could totally facilitate that.

To clarify, my estimates are supposed to account for unknown unknowns. Otherwise, they would be many orders of magnitude lower.

Unfortunately, for the relevant part of the curve (catastrophes large enough to wipe out large fractions of the population) we have no data, so we'll be relying on theory.

I found the "Unfortunately" funny!

My understanding (based significantly just on the "mechanisms" section of that wikipedia page) is that dragon kings tend to arise in cases where there's a qualitatively different mechanism which causes the very large events but doesn't show up in the distribution of smaller events. In some cases we might not have such a mechanism, and in others we might.

Makes sense. We may even have both cases in the same tail distribution. The tail distribution of the annual war deaths as a fraction of the global population is characteristic of a power law from 0.001 % to 0.01 %, then it seems to have a dragon king from around 0.01 % to 0.1 %, and then it decreases much faster than predicted by a power law. Since the tail distribution can decay slower and faster than a power law, I feel like this is still a decent assumption.

It certainly seems plausible to me when considering catastrophes (and this is enough to drive significant concern, because if we can't rule it out it's prudent to be concerned, and risk having wasted some resources if we turn out to be in a world where the total risk is extremely small), via the kind of mechanisms I allude to in the first half of this comment.

I agree we cannot rule out dragon kings (flatter sections of the tail distribution), but this is not enough for saving lives in catastrophes to be more valuable than in normal times. At least for the annual war deaths as a fraction of the global population, the tail distribution still ends up decaying faster than a power law despite the presence of a dragon king, so the expected value density of the cost-effectiveness of saving lives is still lower for larger wars (at least given my assumption that the cost to save a life does not vary with the severity of the catastrophe). I concluded the same holds for the famine deaths caused by the climatic effects of nuclear war.
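As a purely illustrative sketch (the breakpoints and tail indices below are made up, not my estimates, and I take the benefit as well as the cost of saving a life to be constant for simplicity, so the expected value density is proportional to the PDF), a flatter dragon-king section in the middle of the tail does not by itself make saving lives in larger catastrophes more valuable in expectation:

```python
# Made-up piecewise power-law tail distribution of annual war deaths as a fraction
# of the global population: tail index 1.6 up to 0.01 %, a flatter dragon-king
# section (0.5) up to 0.1 %, and a steeper decay (4) above 0.1 %.
breakpoints = [0.001, 0.01, 0.1]  # severity, in % of the global population
tail_indices = [1.6, 0.5, 4]

def tail(x):
    """P(severity > x), normalised to 1 at a severity of 0.001 %."""
    s = 1.0
    for lo, hi, a in zip(breakpoints, breakpoints[1:] + [float("inf")], tail_indices):
        if x <= hi:
            return s * (x / lo) ** -a
        s *= (hi / lo) ** -a
    return s

def value_density(x, dx=1e-6):
    """Proportional to the PDF, and therefore to the expected value density of
    saving lives under the simplifying assumption of constant benefit and cost."""
    return (tail(x) - tail(x + dx)) / dx

for x in [0.001, 0.01, 0.02, 0.1, 0.2]:
    print(f"severity {x} %: {value_density(x):.3g}")  # decreases with severity
```

In this illustration the value density jumps up where the tail steepens again after the dragon-king section, but it stays well below its value at the smallest severities.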

One could argue we should not only put decent weight on the existence of dragon kings, but also on the possibility that they will make the expected value density of saving lives higher than in normal times. However, this would be assuming the conclusion.

Thanks for the comment, David! I agree all those effects could be relevant. Accordingly, I assume that saving a life in catastrophes (periods over which there is a large reduction in population) is more valuable than saving a life in normal times (periods over which there is a minor increase in population). However, it looks like the probability of large population losses is sufficiently low to offset this, such that saving lives in normal time is more valuable in expectation.
