Harry_Taussig

Comments

How do you compare human and animal suffering?

Hi Kevin, I definitely agree with your point on longtermism. Thanks for sending that article; I think it gets a lot closer to one of my main concerns here, which is indefinitely extending a bad future.

How do you compare human and animal suffering?

Thank you so much! This is really helpful; I'm taking a look at it now, and that last article looks like it gets to the center of my concern.

CEA update: Q1 2021

Are you able to reveal who this YouTube creator is? I'm surprised by how little EA YouTube content there is aside from recorded talks. I feel like an EA-related Veritasium or Kurzgesagt could be super helpful and popular as per this post.

What posts do you want someone to write?

Has anyone done this yet? If so, I'd be interested in the article; otherwise, I'd be interested in giving it a go.

What key facts do you find are compelling when talking about effective altruism?
  1. We are always in an emergency.
    1. We cannot understand or see the extent of the suffering going on, and we have basically no intuition for dealing with suffering at this scale. Our natural response to it becomes indifference.
  2. We are implicitly making decisions of prioritization whether or not we make these decisions consciously. 
    1. Choosing to donate to one charity is choosing to donate to it instead of every other charity, whether or not you considered those other options.
What Makes Outreach to Progressives Hard

Thanks for writing! This definitely helped clarify some of the push-back I often get when trying to explain these ideas to friends.

> For reasons that elude my comprehension, many progressives do not seem to conceptualize the current assortment of economic and legal policies that cause some countries to be ~100x richer than others to be a relevant form of oppression. If they do, they are unlikely to give it as high a priority as, e.g., within-country racial disparities or within-country economic inequality.

This will definitely stick with me. It seems the only way to get around this contradiction is to just not think about it, but maybe I'm missing something?

Kessler Syndrome in Effective Altruism

I have run into a similar problem when trying to introduce EA to others. It feels intuitive to give an example cause area, like AI safety or global poverty, but then the other person becomes much more likely to identify EA with just that cause area, rather than with the larger question of how we do the most good.

At the same time, it seems hard to get someone new excited about EA without giving some examples of what the community actually does.

Great post, thanks!