Ranking animal foods based on suffering and GHG emissions

That makes sense. The point I'm trying to make, though, is that the choice of how to do the conversion from CO2/kcal to hours/kcal is probably the most important bit that drives the results. I'd prefer to make that clearer to users, and get them to make their own assessment.

Instead, the WPM ends up coming up with an implicit conversion rate, which could be way different from what the person would say if asked. Given this, it seems like the results can't be trusted.

(I expect a WPM would be fine in domains where there are multiple difficult-to-compare criteria and we're not sure which criteria are most important – as in many daily decisions – but in this case, it could easily be that either CO2 or suffering should totally dominate your ranking, and it just depends on your worldview.)

Ranking animal foods based on suffering and GHG emissions

Cool idea!

I'm not sure I understand how it works, but isn't one of the most important parameters how someone would want to trade 1 tonne of CO2 for 1 h of suffering on a factory farm? I.e. I could imagine that ratio could vary by orders of magnitude, and could make either the suffering or the carbon effects dominate.

It seems like your current approach is to normalize both scales and then add them. This will be implicitly making some tradeoff between the two units, but that tradeoff is hidden from the user, which seems like a problem if it's going to be one of the main things driving the results.

Moreover (apologies if I've misunderstood), as far as I can see, the tradeoff is effectively made by setting whichever animal is worst to 100 on each dimension. This doesn't seem likely to give the right results to me.

For instance, perhaps I think:

  • Beef = 10 CO2, chicken = 1 CO2
  • Beef = 1 unit of suffering, chicken = 100 units of suffering

In your process, I would normalize both scales so the worst is '100 points', so I'd need to increase beef to 100 and chicken to 10 on the CO2 scale.

If I weight each at 50%, I end up with overall harm scores of:

Beef = 100 + 1 = 101
Chicken = 10 + 100 = 110

However, suppose my view is that 1 tonne of CO2 doesn't result in much animal suffering, so I think 1 unit of suffering = 100 CO2.

Then, my overall harm scores would be:

Beef = 10/100 + 1 = 1.1
Chicken = 1/100 + 100 = 100.01

So the picture is totally different.

(If instead I had a human-centric view that didn't put much weight on reducing animal suffering, the picture would be reversed.)
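The two scoring schemes above can be sketched in a few lines. This is only an illustration of the comment's hypothetical numbers (beef = 10 CO2 / 1 suffering unit, chicken = 1 CO2 / 100 suffering units), not real estimates, and `co2_per_suffering_unit` is a made-up name for the conversion rate being discussed:

```python
# Hypothetical values from the worked example above, not real data.
foods = {"beef": {"co2": 10, "suffering": 1},
         "chicken": {"co2": 1, "suffering": 100}}

def normalized_scores(foods):
    """Scale each dimension so the worst food scores 100, then sum (the hidden-tradeoff approach)."""
    max_co2 = max(f["co2"] for f in foods.values())
    max_suf = max(f["suffering"] for f in foods.values())
    return {name: 100 * f["co2"] / max_co2 + 100 * f["suffering"] / max_suf
            for name, f in foods.items()}

def converted_scores(foods, co2_per_suffering_unit):
    """Convert CO2 into suffering units with an explicit, user-chosen rate, then sum."""
    return {name: f["co2"] / co2_per_suffering_unit + f["suffering"]
            for name, f in foods.items()}

print(normalized_scores(foods))       # beef: 101.0, chicken: 110.0
print(converted_scores(foods, 100))   # beef: 1.1, chicken: 100.01 (suffering dominates)
print(converted_scores(foods, 0.01))  # beef: 1001.0, chicken: 200.0 (CO2 dominates; ranking flips)
```

The point the sketch makes concrete: the normalize-and-sum scheme bakes in one particular conversion rate, while varying the explicit rate by a few orders of magnitude changes not just the magnitudes but which food comes out worse.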

I could try to fix the results for myself by changing the relative weighting, but given that I'm not given any units, it's hard for me to know I'm doing this correctly.

Lessons from my time in Effective Altruism

What Michael says is closer to the message we're trying to get across, which I might summarise as:

  • Don't immediately rule out an area just because you're not currently interested in it, because you can develop new interests and become motivated if other conditions are present.
  • Personal fit is really important.
  • When predicting your fit in an area, lots of factors are relevant (including interest and motivation in the path).
  • It's hard to predict fit – be prepared to try several areas and refine your hypotheses over time.

We no longer mention 'don't follow your passion' prominently in our intro materials.

I think our pre-2015 materials didn't emphasise fit enough.

The message is a bit complicated, but hopefully we're doing better today. For further emphasis, I'm also planning to make personal fit more prominent on the key ideas page and to give more practical advice on how to assess it.

Everyday Longtermism

Agree - I think an interesting challenge is "when does this become better than donating 10% to the top marginal charity?"

What’s the low resolution version of effective altruism?

I'm sympathetic to the idea of trying to make the spread of impact the key idea. I think the problem in practice is that "do thousands of times more good" is too abstract to be sticky and easily understood, so it gets simplified to something more concrete.

What’s the low resolution version of effective altruism?

Unfortunately I think the importance of EA actually goes up as you focus on better and better things. My best guess is that the distribution of impact is lognormal, which means that going from, say, the 90th-percentile best thing to the 99th could easily be a bigger jump than going from, say, the 50th percentile to the 80th.
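A quick sanity check of that percentile claim, using only the standard library. The choice of spread (sigma = 2) is an assumption for illustration – the comment doesn't commit to a particular parameter – but the pattern holds for any lognormal with a reasonably heavy tail:

```python
from math import exp
from statistics import NormalDist

def lognormal_quantile(p, mu=0.0, sigma=2.0):
    """Quantile of a lognormal: exp of the matching normal quantile."""
    return exp(mu + sigma * NormalDist().inv_cdf(p))

# Jump in impact from the 50th to the 80th percentile...
jump_mid = lognormal_quantile(0.80) - lognormal_quantile(0.50)
# ...versus the jump from the 90th to the 99th percentile.
jump_top = lognormal_quantile(0.99) - lognormal_quantile(0.90)

print(jump_top > jump_mid)  # the 90th→99th jump is far larger
```

With sigma = 2 the 90th→99th jump comes out more than an order of magnitude larger than the 50th→80th jump, which is the sense in which gains concentrate in the far tail.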

You're right that at some point diminishing returns to more research must kick in and you should take action rather than do more research, but I think that point is well beyond "don't do something obviously bad", and more like "after you've thought really carefully about what the very top priority might be, including potentially unconventional and weird-seeming issues".

A new, cause-general career planning process

Makes sense! We've neglected those categories in the last few years - would be great to make the advice there a bunch more specific at some point.

Careers Questions Open Thread

Hi Brad,

Just a very quick comment: if you'd like to get involved in politics/policy, the standard route is to try to network your way directly into a job as a staffer – on a political campaign, in the executive branch, or at a think tank – though this often takes a few years (and is easier if you're in DC), so in the meantime people normally focus on building up relevant credentials and experience.

In the second category, grad school is seen as a useful step, especially if you want to be more on the technocrat side than the party politics side of things.

Note that an MPP or Masters in another relevant subject (e.g. Economics) is enough for most positions, and that only takes 1-2 years, rather than 3-6. (PhDs are only needed if you want to be a technical expert or researcher.) It could be at least worth applying to see if you can get into a top ~5 MPP programme, or having that as a goal to potentially work towards.

A little more info here and in the links:

"Patient vs urgent longtermism" has little direct bearing on giving now vs later

Overall I think I'd prefer to think about "how good are various opportunities as investments in the longtermist community?", as well as "how good are various opportunities at making progress towards other proxies-for-good that we've identified?". Activities can score well on either, both, or neither of these, rather than being classed as one type or the other.

That seems like a good way of putting it, and I think I was mainly thinking of it this way (e.g. I was imagining that an opportunity could further all three categories), though I didn't make that clear (e.g. I should call them 'goals' rather than 'categories').

Careers Questions Open Thread

I'd agree with the above. I also wanted to check you've seen our generic advice here – it's a pretty rough article, so many people haven't seen it:
