
Emma Abele

162 karma · Joined Dec 2019 · Working (0-5 years) · 99 School St, Cambridge, MA 02139, USA

Bio

Currently I'm working on starting a new meta EA org called Global Challenges Project.

Previously I worked on EA Virtual Programs, started the Brown EA student group, did some cultivated meat research, and volunteered at ALLFED.

Comments (22)

In general it doesn't seem logical to me to bucket cause areas as either "longtermist" or "neartermist". 

I think this bucketing can paint an overly simplistic image of EA cause prioritization that is something like:

Are you longtermist?  

  • If so, prioritize AI safety, maybe other x-risks, and maybe global catastrophic risks
  • If not, prioritize global health or factory farming depending on your view on how much non-human animals matter compared to humans

But really the situation is way more complicated than this, and I don't think the simplification is accurate enough to be worth spreading. 

  • There was a time when I thought ending factory farming was highest priority, motivated by a longtermist worldview
  • There was also a time when I thought bio-risk reduction was highest priority, motivated by a neartermist worldview
  • (now I think AI-risk reduction is highest priority regardless of what I think about longtermism)

When thinking through cause prioritization, I think most EAs (including me) over-emphasize the importance of philosophical considerations like longtermism or speciesism, and under-emphasize the importance of empirical considerations like AI timelines, how much effort it would take to make bio-weapons obsolete, or which diseases cause the most intense suffering.

In talking to many Brown University students about EA (most of whom are very progressive), I have noticed that longtermist-first and careers-first EA outreach does better, and this seems to be because of the objections that come up in response to 'GiveWell-style EA'.

That is very helpful. Thank you, EdoArad!

(and I'll be sure to update you on how our program turns out)

Thank you so much!
I agree and am adding this to our list of types of projects to suggest to students :)

Thank you Brian!
We have considered this and have it as part of our "funnel", but we still think there is room for this kind of projects program in addition.

I also like the idea of EA uni groups encouraging interested members to start the other (EA-related) student groups you mention (an Alt Protein group, OFTW, and GRC). At Brown, we already have OFTW and GRC, and I'm in the process of getting some students from Brown EA to start an Alt Protein group as well :)

This is really cool! Thank you for doing this!

Also, I'm curious: to what extent is AI safety discussed in your group?

I noticed the cover of Superintelligence has a quote from Bill Gates saying "I highly recommend this book", and I'm curious whether AI safety is something Microsoft employees discuss often.

I do think there is a good case that interventions aimed at improving the existential risk profile of a post-disaster civilization can be competitive with interventions aimed at improving the existential risk profile of our current civilization.

I'd love to hear more about this and see any other places where this is discussed.

Answer by Emma Abele · Sep 06, 2020

What do you think are the most likely ways that plant-based and cell-based products might both fail to significantly replace factory-farmed products?
