aaronb50

Blog: aaronbergman.substack.com

Dropping my bio from EAG London here until I have the motivation to write up something better:

I graduated from Georgetown University in December 2021 with degrees in economics, mathematics, and philosophy; while there, I founded and helped lead Georgetown Effective Altruism. In recent years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear, a new longtermist EA org.

Shortly after this event, I'll be starting as an independent researcher, trying to answer hard, important EA-relevant questions.

Comments

Open Philanthropy's Cause Exploration Prizes: $120k for written work on global health and wellbeing

To clarify, are you also interested in proposals concerning animal welfare?

How much current animal suffering does longtermism let us ignore?

I'm not intending to, although it's possible I'm using the term "opportunity cost" incorrectly or in a different way than you. The opportunity cost of giving a dollar to animal welfare is indeed whatever that dollar could have bought in the longtermist space (or whatever else you think is the next best option). 

However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal suffering opportunity cost almost entirely. Surely the same error is committed in the opposite direction by hardcore animal advocates, but the asymmetry comes from the fact that this latter group controls a far smaller share of the financial pie.

How much current animal suffering does longtermism let us ignore?

Related to the funding point (note 4): 

It seems important to remember that even if high-status (for lack of a more neutrally valenced term) longtermist interventions like AI safety aren't currently "funding constrained," animal welfare at large most definitely is. As just one clear example, an ACE report from a few months ago estimated that Faunalytics has room for more than $1 million in funding.

That means there remains a very high (in absolute terms) opportunity cost to longtermist spending, because each dollar spent is one not donated to an animal welfare org. This doesn't make liberal longtermist spending wrong, but it does make it costly in terms of expected near-term suffering.

This is the main reason big longtermist spending gives me pause, even though I just about entirely buy the longtermist thesis. EA is, by and large, pretty good at giving due concern to non-salient opportunity costs, but this seems to be an area in which we're falling short. 

How about we don't all get COVID in London?

You're right that I didn't make a full, airtight argument, and that severity of infection is indeed a crucial consideration. My extremely unqualified impression is that:

  • Long covid is real but no longer the main source of expected disvalue for the triple-vaxxed
  • A non-trivial fraction of triple-vaxxed people (20%?) who catch covid lose more than half their productivity and/or quality of life for 4-21 days, and this is where most of the expected disvalue comes from (rough sketch below)

This is what my brain has decided on after being exposed to a bunch of unstructured information, so the error bars are very large, and I should probably update toward your POV.
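
For concreteness, here is a minimal back-of-the-envelope sketch of the expected-value arithmetic implied by the guesses above. The 20% figure is the guess from the second bullet; the 50% loss fraction and the midpoint duration are placeholder assumptions, not claims anyone has verified.

```python
# Rough expected-disvalue sketch using the guesses above.
# All inputs are informal estimates or placeholders, not data.

p_severe = 0.20               # guessed share of triple-vaxxed cases with >50% productivity/QoL loss
loss_fraction = 0.5           # assumed fractional loss during that period (lower bound of ">half")
duration_days = (4 + 21) / 2  # midpoint of the 4-21 day range

# Expected productivity/QoL-days lost per infection among the triple-vaxxed
expected_days_lost = p_severe * loss_fraction * duration_days
print(f"~{expected_days_lost:.1f} expected productivity/QoL-days lost per infection")
# -> roughly 1.3 days, with very wide error bars
```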

When to get off the train to crazy town?

Taking the Boltzmann brain example, isn't the issue that the premises that would lead to such a conclusion are incorrect, rather than the conclusion being "crazy" per se?

aaronb50's Shortform

Effective Altruism Georgetown will be interviewing Rob Wiblin for our inaugural podcast episode this Friday! What should we ask him? 

The unthinkable urgency of suffering

You're welcome and thanks for the comment. I too want to preserve what is good, but I can't help but think that EAs tend to focus too much on preserving the good instead of reducing the bad, in large part because we tend to be relatively wealthy, privileged humans who rarely if ever undergo terrible suffering. 

The unthinkable urgency of suffering

Yes, I believe things would change a lot. Hopefully we can find some way to induce this kind of cognitive empathy without people having to actually suffer to gain firsthand experience.

The unthinkable urgency of suffering

Yes, this was a bit puzzling for me. Good to see it redeemed a bit. I could see the post being disliked for a few reasons:

  • An image of EA as focused on suffering might be bad for the movement
  • It's preaching to the choir (which it definitely is)

Anyway, thanks for the reassuring comment!