james.lucassen

Public Health Research

I agree that relatively small improvements in public health could potentially be highly beneficial. Research on this might be totally tractable. 

What I am concerned might be intractable is deploying results. Public health (and all health-relevant products) is a massive industry, with a lot of strong interests pushing in different directions. It seems entirely possible that all the answers are already out there, just drowned out by food, exercise, sexual health, self-help, and other industries.

With so much noise out there, it seems unlikely that a few EAs will be able to get a word in edgewise.

Promoting Simple Altruism

Thank you for posting! Many kudos for contributing to the frontpage discussion rather than lurking for years like many people (including me).

I agree with most of your assessment here. But I think rather than "simple altruism", it would be better to focus on "altruistic intent". Making this substitution doesn't change much; the major differences are just that it includes EA itself and excludes cynically motivated giving. The thing I think we care about is people trying to do good, not specifically doing non-EA things.

That said, increasing altruistic intent is, I think, included under the heading of broad longtermism. I don't have a source for this, but my impression is that not that much work goes towards broad longtermism because it seems really hard, not that urgent, and EAs tend to be bad at the key skills involved, like persuasion and politics.

Why "cause area" as the unit of analysis?

I think this definition of "cause area" is roughly how the EA community uses the term in practice, and it explains a lot of why and how the term is useful. It helps facilitate good discussion by pointing towards the best people to talk to, since others in my cause area will have common knowledge and interests with me and with each other. On this view, "cause area" is just EA-speak for a subcommunity.

That makes it a bit hard to justify the common EA practice of "cause prioritization", though, since causes aren't particularly homogeneous with regard to their impact. I think doing "intervention prioritization" would be a lot more useful, even though there are far more interventions than causes.

Metaculus Questions Suggest Money Will Do More Good in the Future

Is there some kind of up-to-date dashboard or central source for GiveWell's main "cost-per-expected-life" figure? 

  • The Metaculus question mentioned in this post cites values like $890 in 2016, $823 in 2017, $617 in 2018, and $592 in 2019, and I can't find the field they refer to in the resolution criteria (?!)
  • This 80K article lists the value as $2300 in 2020.
  • This GiveWell summary sheet from 2016 has a minimum value of $901
  • GiveWell's Top Charities page lists $3000-$5000 to save a life for Malaria Consortium, Against Malaria Foundation, New Incentives, and Helen Keller International.

If such a thing does not exist, I'll probably reach out to GiveWell and see what they think about implementing one. There are so many numbers floating around that are hard to verify and differ dramatically.

A proposal for a small inducement prize platform

I am pretty excited about the potential for this idea, but I am a bit concerned about the incentives it would create. For example, I'm not sure how much I would trust a bibliography, summary, or investigation produced via bounty. I would be worried about omissions that would conflict with the conclusions of the work, since it would be quite hard for even a paid arbitrator to check for such omissions without putting in a large amount of work. I think the reason this is not currently much of a concern is precisely because there is no external incentive to produce such works - as a result, you can pretty much assume that research on the Forum is done in good faith and is complete to the best of the author's ability.

Potential ways around this that come to mind:

  • Maybe linking user profiles on this platform to the EA Forum (kind of like the Alignment Forum and LessWrong sharing accounts) would provide sufficient trust in good intentions?
  • Maybe even without that, there's still such a strong self-selection effect anyway that we can still mostly rely on trust in good intentions?
  • Maybe this only slightly limits the scope of what the platform can be used for, and preserves most of its usefulness?

What key facts do you find are compelling when talking about effective altruism?

If it costs $4000 to prevent a death from malaria, malaria deaths happen at age 20 on average, and life expectancy in Africa is 62 years, then each death averted buys about 42 years of life, and the cost per hour of life saved is about $0.0109.

If you make the average US income of $15.35/hour, this means that every marginal hour you work to donate can be expected to save 1,412 hours of life, if you take the very thoroughly researched, very scalable, low-risk baseline option. If you can only donate 10% of your income, then your leverage is reduced to a mere 141.2. Just by virtue of having been born in a developed country, every hour of your time can be converted to days or weeks of additional life for someone else.
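The arithmetic above is simple enough to sketch in a few lines. This is a back-of-the-envelope reproduction using the figures quoted in the comment; all of the inputs (the $4000 cost, the age and life-expectancy figures, the $15.35/hour wage) are rough estimates taken from the text, not authoritative values.

```python
# Inputs, taken from the comment above (all rough estimates).
COST_PER_DEATH_AVERTED = 4000.0  # USD, GiveWell-style malaria figure
AVG_AGE_AT_DEATH = 20            # years, average age of malaria deaths
LIFE_EXPECTANCY = 62             # years, Africa-wide life expectancy
WAGE = 15.35                     # USD/hour, income figure quoted above
HOURS_PER_YEAR = 365.25 * 24     # ~8766 hours

# Each death averted buys (62 - 20) = 42 years of life.
years_saved = LIFE_EXPECTANCY - AVG_AGE_AT_DEATH
hours_saved = years_saved * HOURS_PER_YEAR

# Cost per hour of life saved (~$0.0109).
cost_per_hour_of_life = COST_PER_DEATH_AVERTED / hours_saved

# Hours of life saved per marginal hour worked and donated (~1,400).
leverage = WAGE / cost_per_hour_of_life

print(f"cost per hour of life saved: ${cost_per_hour_of_life:.4f}")
print(f"hours of life per hour worked (100% donated): {leverage:.0f}")
print(f"hours of life per hour worked (10% donated): {0.1 * leverage:.0f}")
```

Depending on rounding conventions (e.g. whether you use 8760 or 8766 hours per year), the leverage figure lands within a few hours of the 1,412 quoted above.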

While not as insanely huge as some of the figures making the argument for longtermism, I find this figure more shocking on a psychological level because it's so simple to calculate and yields such an unexpected result. This type of calculation is what first got me interested in Singer-style EA.