VG

Vasco Grilo

6021 karma · Joined · Working (0-5 years) · Lisbon, Portugal
sites.google.com/view/vascogrilo?usp=sharing

Bio

Participation
4

How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there but which you guess I might be underrating?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering, and to part-time or full-time paid work. For paid work, I typically ask for 20 $/h, which is roughly 2 times the global real GDP per capita.

Comments
1345

Topic contributions
25

Thanks for the analysis, Mikolaj. I would be curious to know your thoughts on my estimate that corporate campaigns for chicken welfare are 1.44 k times as cost-effective as GiveWell's top charities.

Hi Linch,

I would've thought that's where most people's intuitions differ, though maybe not Vasco's specific crux.

I think what mostly matters is how fast the difference between the PDF of the value of the future after and before an intervention decays with the value of the future, not the expected value of the future.

Thanks for the comment, Dan.

Claim 4: Given claims 1-3, and that the "some" civilizations described in claims 1-3 are not vanishingly rare (enough to balance out the very high value), the expected value of averting a random extinction event for a technologically advanced civilization is astronomically high.

I think such civilisations are indeed vanishingly rare. The argument you are making is the classical type of argument I used to endorse, but no longer do. Like the one Nick Bostrom makes in Existential Risk Prevention as Global Priority (see my post), it is scope sensitive to the astronomical expected value of the future, but scope insensitive to the infinitesimal increase in the probability of astronomically valuable worlds, which results from the astronomical cost of moving probability mass from the least valuable worlds to astronomically valuable ones. Consequently, interventions reducing the nearterm risk of human extinction need not be astronomically cost-effective, and I currently do not think they are.

Thanks for engaging, Larks!

10^-35 is such a short period of time that basically nothing can happen during it - even a laser couldn't cut through your body that quickly.

Right, even in a vacuum, light takes 10^-9 s (= 0.3/(3*10^8)) to travel 30 cm.

To explicitly do the calculation, let's assume a handgun bullet hits someone at around ~250 m/s, and decelerates somewhat, taking around 10^-3 seconds to pass through them. Assuming they were otherwise a normal person who didn't often get shot at, intervening to protect them for ~10^-3 seconds would give them about 50 years ~= 10^9 seconds of extra life, or 12 orders of magnitude of leverage.

I do not think your example is structurally analogous to mine:

  • My point was that decreasing the risk of death over a tiny fraction of one's life expectancy does not extend life expectancy much.
  • In your example, my understanding is that the life expectancy of the person about to be killed is 10^-3 s. So, for your example to be analogous to mine, your intervention would have to decrease the risk of death over a period astronomically shorter than 10^-3 s, in which case I would be super pessimistic about extending the life expectancy.

This example seems analogous to me because I believe that transformative AI basically is a one-time bullet and if we can catch it in our teeth we only need to do so once.

The mean person who is 10^-3 s away from being killed, e.g. with a bullet 25 cm (= 250*10^-3 m) away from their head travelling at 250 m/s, presumably has a very short life expectancy. If one thinks humanity is in a similar situation with respect to AI, then the expected value of the future is also arguably not astronomical, and therefore decreasing the nearterm risk of human extinction need not be astronomically cost-effective. Pushing the analogy to an extreme, decreasing deaths from shootings is not the most effective way to extend human life expectancy.
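A quick numeric restatement of the analogy above, taking the bullet speed, protection window and extra lifespan from the quoted comment; only the year-to-second conversion is my own assumption:

```python
import math

# Numbers from the bullet analogy above; only the year-to-second conversion is mine.
bullet_speed = 250          # m/s
protection_window = 1e-3    # s the intervention needs to cover
extra_life_years = 50       # years of life gained if the bullet is stopped

distance_to_head = bullet_speed * protection_window          # 0.25 m = 25 cm
extra_life_seconds = extra_life_years * 365.25 * 24 * 3600   # ~1.6*10^9 s
leverage = extra_life_seconds / protection_window            # ~1.6*10^12

print(f"Bullet distance: {distance_to_head:.2f} m")
print(f"Leverage: ~10^{math.log10(leverage):.0f}")           # ~12 orders of magnitude
```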

I respond to that by saying "ok I guess empirics aren't super helpful for the big picture question let's try to build mechanistic understanding of things grounded wherever possible in empirics, as well as priors about what types of distributions occur when various different generating mechanisms are at play", whereas it sounds like you're responding by saying something like "well as a prior we'll just use the parts of the distribution we can actually measure, and assume that generalizes unless we get contradictory data"?

Yes, that would be my reply. Thanks for clarifying.

Thanks for sharing, Lewis! By "farm animal welfare grantmaking", I guess you mean spending which in Open Philanthropy's (OP's) grants database falls under the focus areas "Alternatives to Animal Products", "Broiler Chicken Welfare", "Cage-Free Reforms" or "Farm Animal Welfare". Do you have data on how many M$ OP spent on these 4 areas together in China in 2023? How about the total philanthropic spending to help farmed animals in China including all sources (not just OP)?

I was not clear above, but I meant (posterior) counterfactual impact under expected total hedonistic utilitarianism. Even if a species is counterfactually preserved indefinitely due to actions now, which I think would be very hard, I do not see how it would permanently increase wellbeing. In addition, I meant to ask for actual empirical evidence as opposed to hypothetical examples (e.g. of one species being saved and making an immortal conservationist happy indefinitely).

Now it looks to me as though you're dogmatically sticking with the prior.

Are there any interventions whose estimates of (posterior) counterfactual impact do not decay to 0 in at most a few centuries? From my perspective, their absence establishes a strong prior against persistent longterm effects.

I do put a bunch of probability on "averting near-term extinction doesn't save astronomical value for some reason or another", though the reasons tend to be ones where we never actually had a shot of an astronomically big future in the first place, and I think that that's sort of the appropriate target for scepticism

This makes a lot of sense to me too.


Sorry, I don't find this is really speaking to my question?

I do not think the difficulty of decreasing a risk is independent of the value at stake. It is harder to decrease a risk when a larger value is at stake. So, in my mind, decreasing the nearterm risk of human extinction is astronomically easier than decreasing the risk of not achieving 10^50 lives of value, such that decreasing the former by e.g. 10^-10 leads to a relative reduction in the latter much smaller than 10^-10.

I also think that you're making some strong assumptions about things essentially cancelling out

Could you elaborate on why you think I am making a strong assumption, for example by questioning the following?

In light of the above, I expect what David Thorstad calls rapid diminution. I see the difference between the PDF after and before an intervention reducing the nearterm risk of human extinction as quickly decaying to 0, thus making the increase in the expected value of the astronomically valuable worlds negligible. For instance:

  • If the difference between the PDF after and before the intervention decays exponentially with the value of the future v, the increase in the value density caused by the intervention will be proportional to v*e^-v[4].
  • The above rapidly goes to 0 as v increases. For a value of the future equal to my expected value of 1.40*10^52 human lives, the increase in value density will be multiplied by a factor of 1.40*10^52*e^(-1.40*10^52) = 10^(log10(1.40) + 52 - log10(e)*1.40*10^52) = 10^(-6.08*10^51), i.e. it will be basically 0 (see the sketch after this list).
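Because e^(-1.40*10^52) underflows any floating-point type, a minimal sketch of the calculation in the last bullet has to work in log10 space. The decay shape v*e^-v and the value of v come from the bullets above; the rest is plain arithmetic:

```python
import math

# Assumed decay shape from the bullets above: increase in value density proportional to v * e^(-v).
# e^(-1.40*10^52) underflows floats, so work with base-10 logarithms instead.
v = 1.40e52  # expected value of the future in human lives

log10_increase = math.log10(v) - v * math.log10(math.e)
print(f"log10 of the increase in value density: {log10_increase:.2e}")
# Prints roughly -6.08e+51, i.e. the increase is basically 0.
```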

Do you think I am overestimating how fast the difference between the PDF after and before the intervention decays? As far as I can tell, the (posterior) counterfactual impact of interventions whose effects can be accurately measured, like ones in global health and development, decays to 0 as time goes by. I do not have a strong view on the particular shape of the difference, but exponential decay is quite typical in many contexts.
