Emma Abele

President @ METR (formerly ARC Evals)
236 karma · Joined Dec 2019 · Working (0–5 years)
metr.org/

Bio

Currently I'm at METR (formerly ARC Evals).

Previously founded the Global Challenges Project and EA Virtual Programs.

Posts
2

Comments
23

I mostly want to +1 Jonas' comment and share my general sentiment here, which overall is that this whole situation makes me feel very sad. I feel sad for the distress and pain this has caused to everyone involved.

I’d also feel sad if people viewed Owen here as having anything like a stereotypical sexual predator personality.

My sense is that Owen cares extraordinarily about not hurting others. 

It seems to me like this problematic behavior came from a very different source – basically problems with poor theory of mind and underestimating power dynamics. Owen can speak for himself on this; I’m just noting as someone who knows him that I hope people can read his reflections genuinely and with an open mind of trying to understand him. 

That doesn’t make Owen’s actions ok – they're definitely not – but it does make me hopeful and optimistic that Owen has learnt from his mistakes and will be able to tread cautiously and not cause problems of this sort again.

Personally, I hope Owen can be involved in the community again soon. 

[Edited to add: I’m not at all confident here and just sharing my perspective based on my (limited) experience. I don’t think people should give my opinion/judgment much weight. I haven’t engaged at all deeply in understanding this, and don’t plan to engage more]

In general it doesn't seem logical to me to bucket cause areas as either "longtermist" or "neartermist". 

I think this bucketing can paint an overly simplistic image of EA cause prioritization that is something like:

Are you longtermist?  

  • If so, prioritize AI safety, maybe other x-risks, and maybe global catastrophic risks
  • If not, prioritize global health or factory farming depending on your view on how much non-human animals matter compared to humans

But really the situation is way more complicated than this, and I don't think the simplification is accurate enough to be worth spreading. 

  • There was a time when I thought ending factory farming was highest priority, motivated by a longtermist worldview
  • There was also a time when I thought bio-risk reduction was highest priority, motivated by a neartermist worldview
  • (now I think AI-risk reduction is highest priority regardless of what I think about longtermism)

When thinking through cause prioritization, I think most EAs (including me) over-emphasize the importance of philosophical considerations like longtermism or speciesism, and under-emphasize the importance of empirical considerations like AI timelines, how much effort it would take to make bio-weapons obsolete, or which diseases cause the most intense suffering.

In talking to many Brown University students about EA (most of whom are very progressive), I have noticed that longtermist-first and careers-first EA outreach does better, and this seems to be because of the objections that come up in response to 'GiveWell-style EA'.

That is very helpful – thank you, EdoArad!

(and I'll be sure to update you on how our program turns out)

Thank you so much!
I agree and am adding this to our list of types of projects to suggest to students :)

Thank you Brian!
We have considered this, and have it as part of our "funnel", but still think there is room for a projects program like this in addition.

I also like the idea of EA Uni groups encouraging interested members to start these other (EA related) student groups you mention (Alt Protein group, OFTW and GRC). At Brown, we already have OFTW and GRC, and I'm in the process of getting some students from Brown EA to start an Alt Protein group as well :)

This is really cool! Thank you for doing this!

Also, I'm curious – to what extent is AI safety discussed in your group?

I noticed the cover of Superintelligence has a quote from Bill Gates saying "I highly recommend this book", and I'm curious whether AI safety is something Microsoft employees discuss often.

I do think there is a good case for interventions aimed at improving the existential risk profile of post-disaster civilization being competitive with interventions aimed at improving the existential risk profile of our current civilization.

I'd love to hear more about this and see any other places where this is discussed.
