Quant Trader | help organize EA Chicago

Topic Contributions


Most students who would agree with EA ideas haven't heard of EA yet (results of a large-scale survey)

EAs do seem a bit overrepresented in the sample (sort of acknowledged here).

Possible reasons: (a) sharing was encouraged post-survey, with some forewarning; (b) EAs might be more likely than average to respond to a 'Student Values Survey'.

Increasing Demandingness in EA

I strongly agree with this comment, especially the last bit.

In line with the first two paragraphs, I think the primary constraint is plausibly founders [of orgs and mega-projects], rather than generically 'switching to direct work'.

Increasing Demandingness in EA

Re the footnote: the only public estimate I've seen is $400k-$4M here, so you're in the same ballpark.

Personally I think $3M/y is too high, though I too would like to see more opinions and discussion on this topic.

My bargain with the EA machine

I enjoyed this post and the novel framing, but I'm confused as to why you seem to want to lock in your current set of values—why is current you morally superior to future you?

Do I want my values changed to be more aligned with what’s good for the world? This is a hard philosophical question, but my tentative answer is: not inherently – only to the extent that it lets me do better according to my current values.

Speaking for myself personally, my values have changed quite a bit in the past ten years (by choice). Ten-years-ago-me would likely be doing something much different right now, but that's not a trade that the current version of myself would want to make. In other words, it seems like in the case where you opt for 'impactful toil', that label no longer applies (it is more like 'fun work' per your updated set of values).

The value of small donations from a longtermist perspective

Some of the comments here are suggesting that there is in fact tension between promoting donations and direct work. The implication seems to be that while donations are highly effective in absolute terms, we should intentionally downplay this fact for fear that too many people might 'settle' for earning to give.

Personally, I would much rather employ honest messaging and allow people to assess the tradeoffs for their individual situation. I also think it's important to bear in mind that downplaying cuts both ways—as Michael points out, the meme that direct work is overwhelmingly effective has done harm.

There may be some who 'settle' for earning to give when direct work could have been more impactful, and there may be some who take away that donations are trivial and do neither. Obviously I would expect the former to be hugely overrepresented on the EA Forum.

Some thoughts on vegetarianism and veganism

See also

Offsetting the carbon cost of going from an all-chicken diet to an all-beef diet would cost $22 per year, or about 5 cents per beef-based meal. Since you would be saving 60 chickens, this is three chickens saved per dollar, or one chicken per thirty cents. A factory farmed chicken lives about thirty days, usually in extreme suffering. So if you value preventing one day of suffering by one chicken at one cent, this is a good deal.
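The arithmetic in the quoted passage can be verified with a quick sketch (all figures come from the quote itself; the exact per-chicken cost works out to roughly 37 cents, which the quote rounds to thirty):

```python
# Figures taken from the quoted passage above.
annual_offset_cost = 22.00   # $/year to offset the chicken -> beef carbon swap
chickens_saved = 60          # chickens spared per year by the swap
days_of_suffering = 30       # approximate lifespan of a factory-farmed chicken

chickens_per_dollar = chickens_saved / annual_offset_cost
cost_per_chicken = annual_offset_cost / chickens_saved
cost_per_suffering_day = cost_per_chicken / days_of_suffering

print(f"{chickens_per_dollar:.1f} chickens saved per dollar")            # ~2.7 ("three" in the quote)
print(f"{cost_per_chicken * 100:.0f} cents per chicken")                 # ~37 ("thirty" in the quote)
print(f"{cost_per_suffering_day * 100:.1f} cents per chicken-day of suffering")  # ~1.2 ("one cent" in the quote)
```

So the quote's figures are rounded slightly in the reader's favor, but the order of magnitude holds.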

Future-proof ethics

I didn't read the goal here as literally being to score points with future people, though I agree the post is phrased in a way that implies future ethical views will be superior.

Rather, I think the aim is to construct a framework that can be applied consistently across time—avoiding the pitfalls of common-sense morality both past and future.

In other words, this could alternatively be framed as 'backtesting ethics' or something similar, but 'future-proofing' speaks to (a) concern about repeating past mistakes and (b) personal regret in the future.

doing more good vs. doing the most good possible

I was especially interested in a point/thread you mentioned about people perceiving many charities as having similar effectiveness, and how this may be an impediment to people getting interested in effective altruism.


See here

A recent survey of Oxford students found that they believed the most effective global health charity was only ~1.5x better than the average — in line with what the average American thinks — while EAs and global health experts estimated the ratio is ~100x. This suggests that even among Oxford students, where a lot of outreach has been done, the most central message of EA is not yet widely known.

Has anything in the EA global health sphere changed since the critiques of "randomista development" 1-2 years ago?
  1. As Jackson points out, those willing to go the 'high uncertainty/high upside' route tend to favor far future or animal welfare causes. Even if we think these folks should consider more medium-term causes, comparing cost-effectiveness to GiveWell top charities may be inapposite.
  2. It seems like there is support for hits-based policy interventions in general, and Open Phil has funded at least some of this.
  3. The case for growth was based on historical success of pro-growth policy. Not only is this now less neglected, but much of the low-hanging fruit has been taken.

The Explanatory Obstacle of EA

Thanks for this—I have often wished I had a better elevator pitch for EA.

One thing I might add is some mention of just how wide the disparity can be amongst possible interventions, since this seems to be one of the most overlooked key ideas.
