RandomEA

What are the leading critiques of "longtermism" and related concepts

I've completed my draft (now at 47,620 words)! 

I've shared it via the EA Forum share feature with a number of GPI, FHI, and CLR people who have EA Forum accounts.

I'm sharing it in stages to limit the number of people who have to point out the same issue to me.

80,000 Hours user survey closes this Sunday

Thanks Howie.

Something else I hope you'll update is the claim in that section that GiveWell estimates that it costs the Against Malaria Foundation $7,500 to save a life.

The archived version of the GiveWell page you cite does not support that claim; it states that AMF's cost per life saved is $5,500. (Earlier archives of the same page do state $7,500 (e.g. here), so that number may have been current while the piece was being drafted.)

Additionally, the $5,500 number, which is based on GiveWell's Aug. 2017 estimates (click here and see B84), is unusually high. Here are GiveWell's estimates by year:

2017 (final version): $3,280 (click here and see B91)

2018 (final version): $4,104 (click here and see R109)

2019 (final version): $2,331 (click here and see B162) (downside adjustments seem to cancel with excluded effects)

2020 (Sep. 11th version): $4,450 (click here and see B219)

Once the AMF number is updated, the near-term existential risk number is less than five times as good as the AMF number. And if the existential risk number is adjusted for uncertainty (see here and here), it could end up worse than the AMF number. That's why I assumed the change on the page represented a shift in your views rather than an illustration: it puts the numbers so close together that it's not obvious the near-term existential risk number is better, and it makes it easier for factors like personal fit to outweigh the difference in impact.
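To make the comparison concrete, here is the arithmetic as a short sketch. The figures are the ones cited above ($1,000 per life is 80,000 Hours' updated $100 billion / 100 million lives, as quoted later in this thread); the ratio calculation itself is just illustrative:

```python
# Back-of-envelope comparison of the cost-per-life figures discussed above.
xrisk_cost_per_life = 1_000  # 80,000 Hours: $100B / 100M lives saved

amf_cost_per_life = {  # GiveWell estimates for AMF, USD per life saved
    2017: 3_280,
    2018: 4_104,
    2019: 2_331,
    2020: 4_450,
}

for year, amf in sorted(amf_cost_per_life.items()):
    ratio = amf / xrisk_cost_per_life
    print(f"{year}: x-risk spending is {ratio:.2f}x as cost-effective as AMF")

# With the old $7,500 AMF figure, the ratio would have been 7.5x;
# with every updated figure it falls below 5x.
```
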

80,000 Hours user survey closes this Sunday

Hi Arden and the 80,000 Hours team,

Thank you for the excellent content that you produce for the EA community, especially the podcasts.

There is one issue that I want to raise. I gave serious thought to raising this via your survey, but I think it is better raised publicly.

In your article "The case for reducing extinction risk" (which is linked to in your "Key ideas" article), you write:

Here are some very rough and simplified figures to show how this could be possible. It seems plausible to us that $100 billion spent on reducing extinction risk could reduce it by over 1% over the next century. A one percentage point reduction in the risk would be expected to save about 100 million lives among the present generation (1% of about 10 billion people alive today). This would mean the investment would save lives for only $1000 per person.

At the top of the page, it says the article was published in October 2017 and last updated in October 2017. There are no footnotes indicating any changes were made to that section.

However, an archived copy of the article from June 2018 shows that, at the time, the article read:

We roughly estimate that if $10 billion were spent intelligently on reducing these risks, it could reduce the chance of extinction by 1 percentage point over the century. In other words, if the risk is 4% now, it could be reduced to 3%.
A one percentage point reduction in the risk would be expected to save about 100 million lives (1% of 10 billion). This would mean it saves lives for only $100 each.
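The shift between the two versions reduces to simple arithmetic; both versions assume a 1 percentage point risk reduction saves 1% of roughly 10 billion people, and only the assumed spending changes:

```python
# Arithmetic behind the two versions of the 80,000 Hours estimate.
lives_saved = 0.01 * 10_000_000_000  # 1 percentage point of ~10 billion people

old_cost_per_life = 10e9 / lives_saved    # June 2018 archive: $10 billion spent
new_cost_per_life = 100e9 / lives_saved   # current version: $100 billion spent

print(old_cost_per_life)  # 100.0 dollars per life, as in the archived text
print(new_cost_per_life)  # 1000.0 dollars per life, as in the current text
```

The tenfold increase in assumed spending, with the same risk reduction, is exactly the order-of-magnitude shift in cost per life at issue here.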

I think it would be helpful to members of the community to indicate when and how an article has been substantively updated. There are many ways this can be done, including:

  • an article explaining how and why your views have changed (e.g. here, here/here, and here);
  • linking to an archived version of the article (as you do here) ideally with a change log; and
  • a footnote in the section indicating what it previously said and why your views have changed.

I understand that you have a large amount of content and limited staff capacity to review all of your old content. But what I'm talking about here is limited to changes you choose to make.

I'm sure it was just an oversight on the part of whoever made the change. You all have a lot on your plate, and it's most convenient for an article to just present your current views on the subject.

But when it comes to something as important as the effectiveness of spending to reduce existential risk and something as major as a shift of an order of magnitude, I really think it'd be helpful to note and explain any change in your thinking.

Thank you for reading, and keep up the good work.

What are the leading critiques of "longtermism" and related concepts

While I have made substantial progress on the draft, it is still not ready to be circulated for feedback.

I have shared the draft with Aaron Gertler to show that it is a genuine work in progress.

What are the leading critiques of "longtermism" and related concepts

Thanks Ben. There is actually at least one argument in the draft for each alternative you named. To be honest, I don't think you can get a good sense of my 26,000-word draft from my 570-word comment from two years ago. I'll send you my draft when I'm done, but until then, I don't think it's productive for us to go back and forth like this.

What are the leading critiques of "longtermism" and related concepts

Thanks Pablo and Ben. I already have tags below each argument indicating what I think it argues against. I do not plan on doing two separate posts, as some arguments are against both longtermism and the longtermist case for working to reduce existential risk. Each argument and its response are presented comprehensively, so the amount of space dedicated to each is based mostly on the amount of existing literature. And as noted in my comment above, I am excerpting responses to the arguments presented.

What are the leading critiques of "longtermism" and related concepts

As an update, I am working on a full post that will excerpt 20 arguments against working to improve the long-term future and/or to reduce existential risk, along with responses to those arguments. The post itself is currently at 26,000 words, and there are six planned comments (one of which will add 10 additional arguments) that together are currently at 11,000 words. There have been various delays in my writing process, but I now think that is for the best, because several new and important arguments have been developed in the past year. My goal is to begin circulating the draft for feedback within three months.

What will 80,000 Hours provide (and not provide) within the effective altruism community?

For those who are curious,

  • in April 2015, GiveWell had 18 full-time staff, while
  • 80,000 Hours currently has a CEO, a president, 11 core team members, and two freelancers, and works with four CEA staff.

What will 80,000 Hours provide (and not provide) within the effective altruism community?

Hi Ben,

Thank you to you and the 80,000 Hours team for the excellent content. One issue I've noticed is that a relatively large number of pages state that they are out of date (including several important ones). This makes me wonder why 80,000 Hours does not have substantially more employees. I'm aware that there are risks to hiring too quickly, but GiveWell was able to expand from 18 full-time staff (8 in research roles) in April 2017 to 37 staff today (13 in research roles and 5 in content roles). Is the reason that 80,000 Hours cannot grow as rapidly that its research is more subjective in nature, making good judgment more important, and that judgment is quite difficult to assess?

A cause can be too neglected

It seems to me that there are two separate frameworks:

1) the informal Importance, Neglectedness, Tractability framework best suited to ruling out causes (i.e. this cause isn't among the highest priority because it's not [insert one or more of the three]); and

2) the formal 80,000 Hours Scale, Crowdedness, Solvability framework best used for quantitative comparison (by scoring causes on each of the three factors and then comparing the total).
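As an illustration of the quantitative version, here is a toy sketch. The cause names and scores are entirely hypothetical; the scoring convention follows 80,000 Hours' approach of scoring each factor on a logarithmic scale, so that summing scores corresponds to multiplying the underlying quantities:

```python
# Toy illustration of the 80,000 Hours-style quantitative comparison.
# Scores are hypothetical, on a log scale (higher = better on that factor).
causes = {
    "Cause A": {"scale": 12, "crowdedness": 8, "solvability": 4},
    "Cause B": {"scale": 10, "crowdedness": 10, "solvability": 5},
}

# Total score per cause: sum the three factor scores.
totals = {name: sum(scores.values()) for name, scores in causes.items()}

# Rank causes by total, highest first.
for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, total)
```

Because the factors are scored in log units, a one-point difference in total score represents roughly an order-of-magnitude difference in estimated cost-effectiveness, which is what makes comparing totals meaningful.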

Treating the second as merely a formalization of the first can be unhelpful when thinking through them. For example, even though the 80,000 Hours framework does not itself account for diminishing marginal returns, it justifies including the crowdedness factor on the basis of diminishing marginal returns.

Notably, EA Concepts has separate pages for the informal INT framework and the 80,000 Hours framework.
