Robi Rahman

Data Scientist @ Epoch
1331 karma · Joined · Working (6-15 years) · New York, NY, USA
www.robirahman.com

Bio

Participation: 9

Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.

Comments: 197

Wow, incredible that this has 0 agree votes and 43 disagree votes. EAs have had our brains thoroughly fried by politics. I was not expecting to agree with this but was pleasantly surprised at some good points.


Now that the election is over, I'd love to see a follow-up post on what will probably happen during the next administration, and what will be good and bad from an EA perspective.

I agree with your numbered points, especially that if your discount rate is very high, then a catastrophe that kills almost everyone is similar in badness to a catastrophe that kills everyone.

But one of the key differences between EA/LT and these fields is that we're almost the only ones who think future people are (almost) as important as present people, and that the discount rate shouldn't be very high. Under that assumption, the work done is indeed very different in what it accomplishes.

I don't know what you mean by fields only looking into regional disasters. How are you differentiating those investigations from the fields you mention, which the general public has heard of in large part because a ton of academic and governmental effort has gone into them?

I'm skeptical that the insurance industry isn't bothering to protect against asteroids and nuclear winter just because they think the government is already handling those scenarios. For one, any event that kills all humans is uninsurable, so a profit-motivated mitigation plan will be underincentivized and ineffective. Furthermore, I don't agree that the government has any good plan to deal with x-risks. (Perhaps they have a secret, very effective, classified plan that I'm not aware of, but I doubt it.)

I saw a lot of criticism of the EA approach to x-risks on the grounds that we're just reinventing the wheel, and that this work already exists in government disaster preparedness and the insurance industry. I looked into the fields we're supposedly reinventing, and they weren't the same at all: the scale of catastrophes previously investigated was far smaller, only up to regional events like natural disasters. No one in any position of authority had prepared a serious plan for any situation where human extinction was a possibility, even the risks the general public has heard of (nuclear winter, asteroids, climate change).

I went to a large event, and the organizers counted the number of attendees present and then ordered chicken for everyone's meal. Unfortunately I didn't have a chance to request a vegetarian alternative. What's the most efficient way to offset my portion of the animal welfare harm, and how much will it cost? I'm looking for information such as "XYZ is the current best charity for reducing animal suffering, and saves chickens for $xx each", but I'm open to donating to something that helps other animals - doesn't necessarily have to be chickens, if I can offset the harm more effectively or do more good per dollar elsewhere.

Humans are just lexically worth more than animals? You would torture a million puppies for a century to protect me from stubbing my toe?

Reducing animal agriculture in order to help humans via reduced habitat destruction is a really roundabout and ineffective way to help humans.

If you want to help humans, you should do whatever most helps humans.

If you want to protect someone from climate change, you should do whatever most effectively mitigates the effects of climate change.

If you want to help animals for the sake of helping animals, you should do that.

But you shouldn't decide that helping animals is better than helping humans on the grounds that helping animals also indirectly helps humans.

  1. Animal welfare is extremely neglected compared to human philanthropy. (Though even within human philanthropy, effective interventions receive only a small fraction of the funding.)
  2. I'm highly uncertain about counterfactuals and higher-order effects, such as changes in long-term human population and eating patterns due to accelerated global economic development.

Risk aversion doesn't change the optimal choice from donating everything to a single charity to splitting your donation, once you account for the fact that many other people are already donating to both charities.

Given that both orgs already have many other donors, the best action for you to take is to give all of your donations to just one of the options (unless you are a very large donor).

"a portfolio approach does more good given uncertainty about the moral weight on animals"

No, this is totally wrong. Whatever your distribution of credences over the possible moral weights of animals, either the global health charity or the animal welfare charity will do more good in expectation than the other, and splitting your donation will do less good than giving everything to the single better charity.
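A small sketch of why this holds (the credences and per-dollar figures are hypothetical assumptions of mine, purely for illustration): for a small donor, impact is approximately linear in dollars, so expected good is linear in the split fraction and is maximized at a corner, never at an interior split.

```python
# Illustrative sketch: under any fixed credence distribution over moral
# weights, expected good is linear in the donation split, so donating
# everything to one charity weakly beats any 50/50-style split.

# Hypothetical credences: moral weight of a chicken relative to a human -> probability.
credences = {0.001: 0.5, 0.01: 0.3, 0.1: 0.2}

human_good_per_dollar = 1.0        # assumed human-welfare units per $ (global health)
chickens_helped_per_dollar = 50.0  # assumed chickens helped per $ (animal welfare)

# Expected good per dollar for each option, averaging over the credences.
ev_human = human_good_per_dollar
ev_animal = sum(weight * prob for weight, prob in credences.items()) * chickens_helped_per_dollar

def expected_good(fraction_to_animal: float, budget: float = 1000.0) -> float:
    """Expected good from splitting `budget` between the two charities."""
    return budget * (fraction_to_animal * ev_animal + (1 - fraction_to_animal) * ev_human)

# Linearity means one of the corners (0.0 or 1.0) is always weakly best.
best_corner = max(expected_good(0.0), expected_good(1.0))
assert best_corner >= expected_good(0.5)
```

Risk aversion over outcomes could in principle favor diversification, but as the comment above notes, a single small donor's split is negligible next to the existing donor pools, so the linear approximation (and hence the corner solution) holds.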

I believe this is not a valid analogy. If you disinvite someone from events for making rude comments about other attendees' appearances, that only applies to that one rude person, or to people who behave rudely. If you disinvite someone for holding political views you're uncomfortable with, that has a chilling effect on all uncommon political views, and is harmful to everyone's epistemics.
