RandomEA

RandomEA's Comments

What will 80,000 Hours provide (and not provide) within the effective altruism community?

For those who are curious,

  • in April 2017, GiveWell had 18 full-time staff, while
  • 80,000 Hours currently has a CEO, a president, 11 core team members, and two freelancers, and it works with four CEA staff.
What will 80,000 Hours provide (and not provide) within the effective altruism community?

Hi Ben,

Thank you to you and the 80,000 Hours team for the excellent content. One issue I've noticed is that a relatively large number of pages (including several important ones) state that they are out of date. This makes me wonder why 80,000 Hours does not have substantially more employees. I'm aware that there are issues with hiring too quickly, but GiveWell was able to expand from 18 full-time staff (8 in research roles) in April 2017 to 37 staff today (13 in research roles and 5 in content roles). Is the reason 80,000 Hours cannot grow as rapidly that its research is more subjective in nature, so that good judgment matters more, and good judgment is quite difficult to assess?

A cause can be too neglected

It seems to me that there are two separate frameworks:

1) the informal Importance, Neglectedness, Tractability (INT) framework, best suited to ruling out causes (i.e. this cause isn't among the highest priority because it's not [insert one or more of the three]); and

2) the formal 80,000 Hours Scale, Crowdedness, Solvability framework, best used for quantitative comparison (by scoring causes on each of the three factors and then comparing the totals).

Treating the second as merely a formalization of the first can be unhelpful when thinking through them. For example, the 80,000 Hours framework justifies including the crowdedness factor by appeal to diminishing marginal returns, even though its scoring does not itself model diminishing marginal returns (see the sketch below).
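
To make the contrast concrete, here is a minimal sketch of the quantitative comparison in (2), assuming 80,000 Hours' convention of scoring each factor on a logarithmic scale, so that summing the scores approximates multiplying the underlying quantities. The cause names and score values below are invented for illustration:

```python
# Hypothetical illustration of an 80,000 Hours-style quantitative
# comparison. Each factor is scored on a log scale, so adding the
# three scores approximates multiplying the underlying quantities.
# The causes and numbers here are made up for illustration.

causes = {
    "Cause A": {"scale": 12, "crowdedness": 8, "solvability": 4},
    "Cause B": {"scale": 10, "crowdedness": 10, "solvability": 5},
}

def total_score(factors):
    """Sum the log-scale factor scores into one comparable total."""
    return factors["scale"] + factors["crowdedness"] + factors["solvability"]

# Rank causes by total score, highest first.
for name, factors in sorted(causes.items(), key=lambda kv: -total_score(kv[1])):
    print(f"{name}: {total_score(factors)}")
```

Note that crowdedness enters only as a static score: nothing in the total models how returns diminish as a cause receives more resources, which is the gap pointed out above.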

Notably, EA Concepts has separate pages for the informal INT framework and the 80,000 Hours framework.

Are selection forces selecting for or against altruism? Will people in the future be more altruistic, as altruistic, or less altruistic?

In his blog post "Why Might the Future Be Good," Paul Christiano writes:

What natural selection selects for is patience. In a thousand years, given efficient natural selection, the most influential people will be those who today cared what happens in a thousand years. Preferences about what happens to me (at least for a narrow conception of personal identity) will eventually die off, dominated by preferences about what society looks like on the longest timescales.

(Please read all of "How Much Altruism Do We Expect?" for the full context.)

AMA: Elie Hassenfeld, co-founder and CEO of GiveWell

Thanks Lucy! Readers should note that Elie's answer likely responds in part to Lucy's question.

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

What are your thoughts on the argument that the track record of robustly good actions is much better than that of actions contingent on highly uncertain arguments? (See here and here at 34:38 for pushback.)

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

Should non-suffering-focused altruists cooperate with suffering-focused altruists by giving more weight to suffering than they otherwise would given their worldview (or their worldview adjusted for moral uncertainty)?

AMA: Elie Hassenfeld, co-founder and CEO of GiveWell

Has your thinking about donor coordination evolved since 2016, and if so, how? (My main motivation for asking is that this issue is the focus of a chapter in a recent book on philosophical issues in effective altruism, though the chapter appears to be premised on this blog post, which has an update clarifying that it has not represented GiveWell's approach since 2016.)

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

How confident are you that the solution to infinite ethics is not discounting? How confident are you that the solution to the possibility of an infinitely positive/infinitely negative world automatically taking priority is not capping the amount of value we care about at a level low enough to undermine longtermism? If you're pretty confident about both of these, do you think additional research on infinities is relatively low priority?
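
To spell out the two moves the question refers to, here is a minimal formal sketch (my own illustration, not from the AMA; the symbols δ, M, and C are notation I am introducing). Exponential discounting makes an infinite sum of bounded utilities converge, while capping bounds total value directly:

```latex
% Discounting: with discount factor 0 < \delta < 1 and per-period
% utilities bounded by M, the infinite-horizon sum converges.
\[
  V_{\mathrm{discounted}} = \sum_{t=0}^{\infty} \delta^{t} u_{t},
  \qquad |u_{t}| \le M
  \;\Longrightarrow\;
  |V_{\mathrm{discounted}}| \le \frac{M}{1-\delta} < \infty.
\]
% Capping: total value is truncated at some finite ceiling C, so an
% infinitely positive (or negative) world cannot automatically dominate.
\[
  V_{\mathrm{capped}} = \min(V,\, C), \qquad C < \infty.
\]
```

Both moves buy finiteness at a cost: discounting down-weights far-future utilities, and a cap low enough to neutralize infinite worlds also limits how much the long-term future can matter, which is exactly the tension the question raises.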

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

What do you think is the strongest argument against working to improve the long-term future? What do you think is the strongest argument against working to reduce existential risk?
