Vasco Grilo

3573 karma · Joined Jul 2020 · Working (0-5 years) · Lisbon, Portugal


  • Organizer of EA Lisbon
  • Completed the Precipice Reading Group
  • Completed the In-Depth EA Virtual Program
  • Attended more than three meetings with a local EA group



Thanks for writing this, and mentioning my related post, Jeff!

The technological change is the continuing decrease in the knowledge, talent, motivation, and resources necessary to create a globally catastrophic pandemic.

I think this depends on how fast safety measures like the ones you mentioned are adopted, and on how the offense-defence balance evolves with technological progress. It would be great if Open Phil released the results of their efforts to quantify biorisk, one of whose aims was:

  • Enumerating possible ‘phase transitions’ that would cause a radical departure from relevant historical base rates, e.g. total collapse of the taboo on biological weapons, such that they become a normal part of military doctrine.

Thanks for the comment, Jeff!

How extraordinary does the evidence need to be?

I have not thought about this in any significant detail, but it is a good question! I think David Thorstad's series on exaggerating the risks has some relevant context.

You can easily get many orders of magnitude changes in probabilities given some evidence. For example, as of 1900 on priors the probability that >1B people would experience powered flight in the year 2000 would have been extremely low, but someone paying attention to technological developments would have been right to give it a higher probability.

Is that a fair comparison? I think the analogous comparison would involve replacing terrorist attack deaths per year by the number of different people travelling by plane per year. So we would have to assume, in the last 51.5 years, only:

  • On average, 9.63 k different people travelled by plane in a single calendar year.
  • At most 44.6 k different people travelled by plane in a single calendar year.

Then we would ask about the probability that every single human (or close) would travel by plane next year, which would a priori be astronomically unlikely given the above.
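To illustrate the kind of a priori extrapolation involved, here is a toy calculation; the Pareto tail model and the exponent are my assumptions for illustration, not figures from my post:

```python
# Toy Pareto tail extrapolation (assumed model and exponent, purely illustrative).
# Take the mean annual count of 9.63 k as the scale parameter, and suppose annual
# counts follow a Pareto distribution with tail exponent alpha.
x_min = 9.63e3  # mean annual count assumed above
x = 8e9         # every single human (or close)
alpha = 2.0     # assumed tail exponent
p = (x_min / x) ** alpha
print(p)  # roughly 1.4e-12: astronomically unlikely a priori
```

Heavier tails (smaller alpha) would give a larger probability, but one still many orders of magnitude below what a forecaster tracking aviation technology in 1900 should have assigned.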

I've written something up on why I think this is likely: Out-of-distribution Bioattacks. Short version: I expect a technological change which expands which actors would try to cause harm.

I am glad you did. It is a useful complement/follow-up to my post. I qualitatively agree with the points you make, although it is still unclear to me how much higher the risk will become.

(Thanks for sharing a draft with me in advance so I could post a full response at the same time instead of leaving "I disagree, and will say why soon!" comments while I waited for information hazard review!)

You are welcome, and thanks for letting me know about that too!

Thanks for helping organise the donation events, Lizka!

In agreement with my comment last year, I made 97 % of this year's donations a few months ago to the Long-Term Future Fund (LTFF). However, I am now significantly less confident that existential risk mitigation is the best way to improve the world:

  • David Thorstad's posts, namely the ones on mistakes in the moral mathematics of existential risk, epistemics and exaggerating the risks, increased my general level of scepticism towards deferring to thought leaders in effective altruism before having engaged deeply with the arguments. It is not so much that I came to know knock-down arguments against existential risk mitigation, but more that I became more willing to investigate the claims being made.
  • I noticed my tail risk estimates tend to go down as I investigate a topic. In the context of:
    • Climate risk, I was deferring to a mix between 80,000 Hours' upper bound of 0.01 % existential risk in the next 100 years, Toby Ord's best guess of 0.1 %, and John Halstead's best guess of 0.001 %. However, I looked a little more into John's report, and think it makes sense to put more weight on his estimate.
    • Nuclear risk, I was previously mostly deferring to Luisa's (great!) investigation for the effects on mortality, and to Toby Ord's 0.1 % existential risk in the next 100 years. However, I did an analysis suggesting both are quite pessimistic:
      • "My estimate of 12.9 M expected famine deaths due to the climatic effects of nuclear war before 2050 is 2.05 % the 630 M implied by Luisa Rodriguez’s results for nuclear exchanges between the United States and Russia, so I would say they are significantly pessimistic[3]".
      • "Mitigating starvation after a population loss of 50 % does not seem that different from saving a life now, and I estimate a probability of 3.29*10^-6 of such a loss due to the climatic effects of nuclear war before 2050[58]".
    • AI risk, I noted I am not confident superintelligent AI disempowering humanity would necessarily be bad, and wonder whether the vast majority of technological progress will happen in the longterm future.
    • AI and bio risk, I suspect the risk of a terrorist attack causing human extinction is exaggerated.
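As a quick arithmetic check of the 2.05 % figure quoted above:

```python
# Check the ratio quoted above: 12.9 M expected famine deaths
# versus the 630 M implied by Luisa Rodriguez's results.
ratio = 12.9e6 / 630e6
print(f"{ratio:.2%}")  # 2.05%
```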

I said 97 % above rather than 100 % because I have just made a small donation to the EA Forum Donation Fund[1], distributing my votes fairly similarly across the LTFF, Animal Welfare Fund, and Rethink Priorities. LTFF may still be my top option, so I could have put all my votes on LTFF (related dialogue). On the other hand:

  • I was more inclined to support Rethink's (great!) work on the CURVE sequence (whose 1st post went out about 1 month after I made my big year donation). I think it is stimulating some great discussion on cause prioritisation, and might (I hope!) eventually influence Open Phil's allocation.
  • I agree animal welfare should be receiving more resources, and wanted to signal my support. Also, even though I am all in for fanaticism in principle (not in practice), I also just feel like it is nice to donate to something reducing suffering in a surer way now and then!
  1. ^

    Side note. No donation icon showed up after my donation. Not sure whether one is supposed to. Update: you have to DM @EA Forum Team.

That makes a lot of sense, Keller! In addition, donation splitting seems to make the most sense within cause areas, but diminishing returns here can be mitigated by donating to funds (e.g. Animal Welfare Fund) instead of particular charities (e.g. The Humane League).

Nice post, Jack!

Relatedly, Benjamin Todd did an analysis which supports the importance of cause prioritisation. It concludes differences in cost-effectiveness within causes are smaller than previously thought:

Overall, my guess is that, in an at least somewhat data-rich area, using data to identify the best interventions can perhaps boost your impact in the area by 3–10 times compared to picking randomly, depending on the quality of your data.

This is still a big boost, and hugely underappreciated by the world at large. However, it’s far less than I’ve heard some people in the effective altruism community claim.

One limitation of Ben's analysis is that it only looked into nearterm human-focussed interventions, as there were no good data in other areas.

I also identified with many parts of the story[1]. Thanks for the post, Michael!

  1. ^

    Except I am still a hardcore moral realist!

Thanks for the post, Jack!

In Uncertainty over time and Bayesian updating, David Rhys Bernard estimates how quickly uncertainty about the impact of an intervention increases as the time horizon of the prediction increases. He shows that a Bayesian should put decreasing weight on longer-term estimates. Importantly, he uses data from various development economics randomized controlled trials, and it is unclear to me how much the conclusions might generalize to other interventions.

For me the following is the most questionable assumption:

Constant variance prior: We assume that the variance of the prior was the same for each time horizon whereas the variance of the signal increases with time horizon for simplicity.


If the variance of the prior grows at the same speed as the variance of the signal then the expected value of the posterior will not change with time horizon.

I think the rate of increase of the variance of the prior is a crucial consideration. Intuitively, I would say the variance of the prior grows at the same speed as the variance of the signal, in which case the signal would not be discounted.
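This can be made concrete with a minimal normal-normal updating sketch (an illustrative model, not David's actual estimation procedure): with a constant-variance prior the weight on the signal decays with the horizon, whereas if the prior variance grows at the same rate as the signal variance, the posterior mean is the same at every horizon.

```python
# Normal-normal Bayesian update (illustrative sketch).
# The posterior mean is a weighted average of the prior mean and the signal,
# with weight on the signal w = prior_var / (prior_var + signal_var).
def posterior_mean(prior_mean, prior_var, signal, signal_var):
    w = prior_var / (prior_var + signal_var)
    return (1 - w) * prior_mean + w * signal

prior, signal = 0.0, 1.0

# Constant prior variance: the signal's weight shrinks as its variance grows
# with the horizon, so longer-term estimates are discounted towards the prior.
for t in [1.0, 10.0, 100.0]:
    print(posterior_mean(prior, prior_var=1.0, signal=signal, signal_var=t))

# Prior variance growing at the same rate as signal variance: the weight,
# and hence the posterior mean, is constant across horizons.
for t in [1.0, 10.0, 100.0]:
    print(posterior_mean(prior, prior_var=t, signal=signal, signal_var=t))
```

The second loop prints 0.5 at every horizon, which is the sense in which the signal would not be discounted under my intuition above.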

Thanks for elaborating further!

I just don't think it makes any sense to have an aggregated total measure of "welfare". We can describe what is the distribution of welfare across the sentient beings of the universe, but to simply bunch it all up has essentially no meaning.

I find it hard to understand this. I think 10 billion happy people is better than no people. I guess you disagree with this?

It's moral because the terrorist is infringing the wishes of those people right now, and violating their self-determination. If the people decided to infect themselves, then it would be ok.

I think respecting people's preferences is a great heuristic to do good. However, I still endorse hedonic utilitarianism rather than preference utilitarianism, because it is possible for someone to have preferences which are not ideal for achieving one's goals. (As an aside, Peter Singer used to be a preference utilitarian, but is now a hedonistic utilitarian.)

Sure, the suffering is an additional evil, but the killing is an evil unto itself.

No killing is necessary given an ASI. The preferences of humans could be modified such that everyone is happy with the ASI taking over the universe. In addition, even if you think killing without suffering is bad in itself (and note an ASI may even make the killing pleasant to humans), do you think that badness would outweigh an arbitrarily large happiness?

Rocks aren't sentient, they don't count.

I think rocks are sentient in the sense that they have a non-null expected welfare range, but it does not matter because I have no idea how to make them happier.

What if you can instantly vaporize everyone with a thermonuclear bomb, as they are all concentrated within the radius of the fireball? Death would then be instantaneous. Would that make it acceptable? Very much doubt it.

No, it would not be acceptable. I am strongly against negative utilitarianism. Vaporising all beings without any suffering would prevent all future suffering, but it would also prevent all future happiness. I think the expected value of the future is positive, so I would rather not vaporise all beings.

Thanks for the clarifications!

Do you see ways for this sort of change to be decision relevant?

Never mind. I think the model as is makes sense because it is more general. One can always specify a smaller probability of the intervention having no effect, and then account for other factors in the distribution of the positive effect.

However, there are costs to greater configurability, and we opted for less configurability here. Though I could see a reasonable person having gone the other way.

Right. If it is not super easy to add, then I guess it is not worth it.

Thanks for clarifying, Michael! Your approach makes sense to me. On the other hand, the value of EA Funds' Global Health and Development Fund in its current form remains unclear to me.
