Vasco Grilo

5664 karma · Joined · Working (0-5 years) · Lisbon, Portugal



How others can help me

You can give me feedback here (anonymous or not). You are welcome to answer any of the following:

  • Do you have any thoughts on the value (or lack thereof) of my posts?
  • Do you have any ideas for posts you think I would like to write?
  • Are there any opportunities you think would be a good fit for me which are either not listed on 80,000 Hours' job board, or are listed there but you suspect I might be underrating?

How I can help others

Feel free to check my posts, and see if we can collaborate to contribute to a better world. I am open to part-time volunteering, and part-time or full-time paid work. For paid work, I typically ask for $20/h, which is roughly twice the global real GDP per capita.


Topic contributions

Thanks for the comment, Joseph!

Will this food be healthier?

I think so. According to the EAT-Lancet Commission, the global adoption of a predominantly plant-based healthy diet, with just 13.6 % (= (153 + 15 + 15 + 62 + 19 + 40 + 36)/2500; see Table 1) of calories coming from animals, would decrease premature deaths of adults by 21.7 % (= (0.19 + 0.224 + 0.236)/3; mean of the 3 estimates in Table 3). However, I assume this requires long-term dietary change, and I do not know whether School Plates leads to that, so I have not accounted for potential health benefits to humans. Likewise, I have neglected potential health benefits coming from corporate campaigns making chicken and eggs slightly more expensive.
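As a sanity check, the two percentages above can be reproduced directly from the quoted figures (the table values are taken from the comment as given, not independently verified):

```python
# Share of calories from animal-sourced foods in the EAT-Lancet reference
# diet (the Table 1 values quoted above, out of 2500 kcal/d).
animal_kcal = [153, 15, 15, 62, 19, 40, 36]
animal_share = sum(animal_kcal) / 2500
print(f"{animal_share:.1%}")  # 13.6%

# Mean of the 3 estimated reductions in premature adult deaths (Table 3).
reductions = [0.19, 0.224, 0.236]
mean_reduction = sum(reductions) / len(reductions)
print(f"{mean_reduction:.1%}")  # 21.7%
```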

Will they like the food?

I determined a suffering-adjusted animal living time per meal in the UK of 0.500 d, i.e. 12.0 h (= 0.500*24). The harms from this seem much larger than the potential increase/decrease in happiness due to eating a meat-free or plant-based meal.

Thanks, Mathias!

Do you have a spreadsheet with the calculations?

I did not, but I have just created one since you asked. I arrived at the same results (I could have arrived at different ones had there been errors in the manual calculations). I have not included the sources of the inputs, but you can check them in the post.

Thanks for the summary, Neil. Relatedly, I Fermi estimated corporate campaigns for chicken welfare are 124 times as cost-effective as School Plates, which is a program aiming to increase the number of plant-based meals at schools and universities in the United Kingdom.

Nice post, Stijn!

The average suffering of a farmed land animal, estimated by people, is equal in size to the positive welfare of an average human (in Belgium)

This is roughly in line with my estimate that the mean suffering per time of a broiler in a reformed scenario is 64.2 % of the mean happiness of a human[1] (globally). In the same analysis, I calculated the annual suffering of all farmed chickens is 1.74 times the annual happiness of all humans. Including all farmed animals, I got a ratio of 4.64. These values highlight the meat-eater problem, and suggest saving a random human life may well increase suffering in the near term, which is one reason to prioritise helping animals over humans, especially in high-income countries, where the consumption per capita of animals with bad lives is higher.

There is not so much to worry about in low-income countries. Without accounting for higher future consumption, I calculated the cost-effectiveness of GiveWell's top charities only decreases by 8.64 % after accounting for negative effects on farmed animals. On the other hand, I estimated corporate campaigns for chicken welfare, like the ones supported by The Humane League (THL), are 1.44 k times as cost-effective as GiveWell's top charities. So I still think the best animal welfare interventions are way more cost-effective than the most cost-effective ways to save human lives at the current margin (relatedly).

  1. ^

    Chickens are the most farmed land animal.

Thanks for the clarification, Erich! Strongly upvoted.

Let me see if I can rephrase your argument

I think your rephrasing was great.

Now I'm a bit unsure about whether you're saying that you find it extremely unlikely that any AI will be vastly better in the areas I mentioned than all humans, or that you find it extremely unlikely that any AI will be vastly better than all humans and all other AIs in those areas.

The latter.

If you mean 1-4 to suggest that no AI will be better than all humans and other AIs, I'm not sure whether 4 follows from 1-3, but that seems plausible at least. But if this is what you mean, I'm not sure what your original comment ("Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.") was meant to say in response to my original comment, which was meant as pushback against the view that AGI would be bad at taking over the planet since it wouldn't be intended for that purpose.

I think a single AI agent would have to be better than the vast majority of agents (including both human and AI agents) to gain control over the world, which I consider extremely unlikely given gains from specialisation.

If you mean 1-4 to suggest that no AI will be better than all humans, I don't think the analogy holds, because the underlying factor (IQ versus AI scale/algorithms) is different. Like, it seems possible that even unspecialized AIs could just sweep past the most intelligent and specialized humans, given enough time.

I agree.

I'd be curious to hear if you have thoughts about which specific abilities you expect an AGI would need to have to take control over humanity that it's unlikely to actually possess?

I believe the probability of a rogue (human or AI) agent gaining control over the world mostly depends on its level of capabilities relative to those of the other agents, not on the absolute level of capabilities of the rogue agent. So I mostly worry about concentration of capabilities rather than increases in capabilities per se. In theory, the capabilities of a given group of (human or AI) agents could increase a lot in a short period of time such that capabilities become so concentrated that the group would be in a position to gain control over the world. However, I think this is very unlikely in practice. I guess the annual probability of human extinction over the next 10 years is around 10^-6.

Hi titotal,

I think it makes sense to assess the annual risk of simulation shutdown based on the mean annual probability of simulation shutdown. However, I also believe the risk of simulation shutdown is much lower than the one guessed by the humble cosmologist.

The mean of a loguniform distribution ranging from a to 1 is (1 - a)/(-ln(a)), which is approximately -1/ln(a) for small a. If a = 10^-100, the risk is 0.434 % (= -1/ln(10^-100)). However, I assume there is no reason to set the minimum risk to 10^-100, so the cosmologist may actually have been overconfident. Since there is no obvious natural lower bound for the risk, because more or less by definition we do not have evidence about the simulators, I guess the lower bound can be arbitrarily close to 0. In this case, the mean of the loguniform distribution goes to 0 (as a tends to 0, -1/ln(a) tends to 0), so it looks like the humblest view corresponds to 0 risk of simulation shutdown.
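For reference, that mean follows from integrating x against the loguniform density 1/(x·(-ln a)) over [a, 1]. A minimal sketch checking it numerically (the Monte Carlo part is only a consistency check):

```python
import math
import random

def loguniform_mean(a):
    """Exact mean of a loguniform distribution on [a, 1]:
    E[X] = (1 - a) / (-ln a), which is approximately -1/ln(a)
    for small a, and tends to 0 as a -> 0."""
    return (1 - a) / (-math.log(a))

a = 1e-100
print(loguniform_mean(a))  # ~0.00434, i.e. 0.434 %

# Monte Carlo check: X = exp(U) with U uniform on [ln a, 0]
# is loguniform on [a, 1].
random.seed(0)
samples = [math.exp(random.uniform(math.log(a), 0)) for _ in range(10**6)]
print(sum(samples) / len(samples))  # close to the exact mean
```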

In addition, the probability of surviving an annual risk of simulation shutdown of 0.434 % over the estimated age of the universe of 13.8 billion years is only about 10^-26,000,000 (= (1 - 0.00434)^(13.8*10^9)), which is basically 0. So the universe would have needed to be super super lucky in order to have survived for so long with such high risk. One can try to counter this argument saying there are selection effects. However, it would be super strange to have an annual risk of simulation shutdown of 0.434 % without any partial shutdowns, given that tail risk usually follows something like a power law[1] without severe jumps in severity.
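The survival probability has to be computed in log space, since the number itself underflows any floating-point type. A quick sketch using the 0.434 % annual risk:

```python
import math

annual_risk = 0.00434  # 0.434 % annual probability of shutdown
years = 13.8e9         # approximate age of the universe

# Surviving each year independently: P = (1 - p)^T.
# Work with the base-10 logarithm, since P itself underflows a float.
log10_survival = years * math.log10(1 - annual_risk)
print(f"P(survive) ~ 10^{log10_survival:.3g}")  # exponent around -2.6e7
```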

  1. ^

    Although I think tail risk often decays faster than suggested by a power law.

Nice post, titotal!

This could be a whole post in itself, and in fact I’ve already explored it a bit here.

The link is private.

I would expect improvements on these types of tasks to be highly correlated in general-purpose AIs.

Higher IQ in humans is correlated with better performance in all sorts of tasks too, but the probability of finding a single human performing better than 99.9 % of (human or AI) workers in each of the areas you mentioned is still astronomically low. So I do not expect a single AI system to become better than 99.9 % of (human or AI) workers in each of the areas you mentioned. It can still be the case that the AI systems share a baseline common architecture, in the same way that humans share the same underlying biology, but I predict the top performers in each area will still be specialised systems.

I think we've seen that with GPT-3 to GPT-4, for example: GPT-4 got better pretty much across the board (excluding the tasks that neither of them can do, and the tasks that GPT-3 could already do perfectly). That is not the case for a human who will typically improve in just one domain or a few domains from one year to the next, depending on where they focus their effort.

Going from GPT-3 to GPT-4 seems more analogous to a human going from 10 to 20 years old. There are improvements across the board during this phase, but specialisation still matters among adults. Likewise, I assume specialisation will matter among frontier AI systems (although I am quite open to a single future AI system being better than all humans at any task). GPT-4 is still far from being better than 99.9 % of (human or AI) workers in the areas you mentioned.

For an agent to conquer the world, I think it would have to be close to the best across all those areas, which I consider super unlikely based on it being super unlikely for a human to be close to the best across all those areas.

Hi Erich,

Note humans are also trained on all those abilities, but no single human is trained to be a specialist in all those areas. Likewise for AIs.
