All of Emma Abele's Comments + Replies

I mostly want to +1 Jonas’ comment and share my general sentiment here, which overall is that this whole situation makes me feel very sad. I feel sad for the distress and pain this has caused everyone involved.

I’d also feel sad if people viewed Owen here as having anything like a stereotypical sexual predator personality.

My sense is that Owen cares an extraordinary amount about not hurting others.

It seems to me like this problematic behavior came from a very different source – basically problems with poor theory of mind and underestimat... (read more)

In general, it doesn't seem logical to me to bucket cause areas as either "longtermist" or "neartermist".

I think this bucketing can paint an overly simplistic image of EA cause prioritization that is something like:

Are you longtermist?  

  • If so, prioritize AI safety, maybe other x-risks, and maybe global catastrophic risks
  • If not, prioritize global health or factory farming depending on your view on how much non-human animals matter compared to humans

But really the situation is way more complicated than this, and I don't think the simplification is ... (read more)

Agreed! And we should hardly be surprised to see such a founder effect, given that EA was started by philosophers and philosophy fans.

1
kuhanj
3y
Sorry, fixed!

In talking to many Brown University students about EA (most of whom are very progressive), I have noticed that longtermist-first and careers-first EA outreach does better, and this seems to be because of the objections that come up in response to 'GiveWell-style EA'.

That is very helpful - thank you, EdoArad!

(and I'll be sure to update you on how our program turns out)

Thank you so much!
I agree and am adding this to our list of types of projects to suggest to students :)

Thank you Brian!
We have considered this, and have it as part of our "funnel", but we still think there is room for a projects program like this in addition.

I also like the idea of EA Uni groups encouraging interested members to start these other (EA-related) student groups you mention (Alt Protein group, OFTW and GRC). At Brown, we already have OFTW and GRC, and I'm in the process of getting some students from Brown EA to start an Alt Protein group as well :)

This is really cool! Thank you for doing this!

Also, I'm curious - to what extent is AI safety discussed in your group?

I noticed the cover of Superintelligence has a quote from Bill Gates saying "I highly recommend this book", and I'm curious if AI safety is something Microsoft employees discuss often.

I do think there is a good case for interventions aimed at improving the existential risk profile of post-disaster civilization being competitive with interventions aimed at improving the existential risk profile of our current civilization.

I'd love to hear more about this and see any other places where this is discussed.

2
Linch
3y
Likewise.

(I'm only addressing a small part of your question, not the main question)

When looking at potential future branches, should you make the choice that will lead you to the cluster of outcomes with the highest average utility or to the cluster with the highest possible utility?

I'd say the one with the highest average utility if they are all equally likely. Basically, go with the one with the highest expected value.
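A minimal worked illustration, with made-up utilities: suppose cluster $A$ contains two equally likely outcomes with utilities 8 and 10, while cluster $B$ contains two equally likely outcomes with utilities 0 and 12. Then

$$\mathbb{E}[A] = 0.5 \cdot 8 + 0.5 \cdot 10 = 9, \qquad \mathbb{E}[B] = 0.5 \cdot 0 + 0.5 \cdot 12 = 6.$$

Cluster $B$ contains the highest possible utility (12), but cluster $A$ has the higher expected value, so under this rule you would choose $A$.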

What do you think are the most likely ways that plant based and cell based products might both fail to significantly replace factory farmed products?

Sounds very exciting!

And it seems like there is some overlap with EA Uni group fellowships, so I would be happy to talk to you about those if you want, although it may be better to talk to the community builders more involved in syllabus writing. (See this Intro Fellowship I'm running at Brown EA.)

Hi Max,

I'm curious how big you are thinking this "EA curriculum" might be. Are you thinking of something similar to an EA Uni group fellowship (usually ~4 hours/week for ~8 weeks), or are you thinking of something much larger?

2
Max_Daniel
4y
I was mostly thinking of a curriculum that would eventually be much larger (though it could be modular, and certainly would have a smaller MVP as a first step to gauge the viability of the larger curriculum). But my views on this aren't firm, and in general one of the first things I'll do is determine various fundamental properties I don't feel certain about yet. Other than length these are e.g. target audience and intended outcomes (e.g. attracting new people to EA, "onboarding" new EAs, bringing moderately experienced EAs to the same level, or allowing even quite involved EAs to learn something new by increasing the amount of content that is publicly accessible as opposed to in some people's minds or nonpublic docs), scope (e.g. only longtermism?), and focus on content/knowledge vs. skills/methods.

I agree with Marisa.

Rather than a single body of knowledge being a standard education for EAs, I like the fellowship structure that many EA Uni groups use.

For me, one of the main goals in running these fellowships is to expose students to enough EA ideas and discussions to decide for themselves what knowledge and skills they want to build up in order to do good better. For some people, this will involve economics, statistics, and decision analysis knowledge, but for others, it will look totally different.

(For fellowship syllabus examples you can check out thi... (read more)

Thanks so much for your response Ross!

The values for Table 1 on reduction in far future potential were obtained from a survey of existential risk researchers at EA Global 2018 (see methods).

Yeah, that makes sense - I was just curious whether the reasons given in the introduction reflect the reasoning of those who filled out the survey. But thanks for clarifying!

Surviving the new environment might also favour the development of stable yet repressive social structures that would prevent rebuilding of civilization to previous levels. This could be facilitated by dominant groups having technology of the previous civilization.

Very interesting and makes sense - thank you!

I have two questions/clarifications:

(1) Regarding:

Reasons that civilization might not recover include: ...

Are the reasons mentioned in this section what leads to the estimated reduction in far future potential in Table 1? Or are there other reasons that play into those estimates as well?

(2) Regarding:

Another path to far future impact is the trauma associated with the catastrophe making future catastrophes more likely, e.g. global totalitarianism (Bostrom & Cirkovic, 2008)

Intuitively I feel that the trauma associated with the catastrophe would make peopl... (read more)

1
Ross_Tieman
4y
In response to: The reasons civilization might not recover discussed in the introduction are intended to provide evidence that recovery from civilizational collapse is unlikely to occur. It is not an exhaustive list, but it provides some of the major arguments. The values for Table 1 on reduction in far future potential were obtained from a survey of existential risk researchers at EA Global 2018 (see methods). The 'trauma' refers to flow-on societal impacts due to the various challenges faced by humans of a post-collapse civilization. Although intuitively it would make sense to coordinate, the harshness of the new environment would result in small groups simply trying to survive. Humans of this heavily degraded civilization would have to deal with a range of basic challenges (food, water, shelter) and would exist without access to previous technologies, making it unlikely they would be able to coordinate on a global scale to address global catastrophic risks. Surviving the new environment might also favour the development of stable yet repressive social structures that would prevent rebuilding of civilization to previous levels. This could be facilitated by dominant groups having technology of the previous civilization. There are obviously a lot of unknowns concerning post-collapse scenarios.

I am wondering why you say that "Human reconstruction will be beneficial to the next civilization."

I think it would be great if we could leave messages to a future non-human civilization to help them achieve a grand future and reduce their x-risk (by learning from our mistakes, for example). But I don't feel that human reconstruction is particularly important.

If anything, I worry that this future advanced civilization might reconstruct humans to enslave us. And if they are not the type to enslave us, then I feel pretty good about them existing and homo sapiens not existing.

1
turchin
2y
If they are advanced enough to reconstruct us, then most bad ways of enslaving us are likely not interesting to them. For example, we now try to reconstruct mammoths in order to improve the climate in Siberia, but not for hunting or meat.

Such a great book!

I am struggling to get my friends and family to read it though, as they are put off by it being quite a hefty book (even when I tell them they can skip the footnotes).

Are there plans to make a short/abridged paperback version that might spread more widely outside of the EA community? I'd love to see the main ideas and thoughts become somewhat common knowledge. Or is it more important to have fewer people with a deep understanding than many people with a surface-level understanding?

Are there results from this? I would love to see :)

I agree with the previous answers - that is, I think the best argument here has to do with moral circle expansion affecting the long-term future.

In addition, eating meat could increase existential risk through its effects on worsening climate change and the emergence of natural pandemics.

See this chart comparing greenhouse gas emissions per kg of different food products, which shows how much more animal products contribute to climate change. In total, animal agriculture contributes around 14% to 18% of all anthropogenic greenhouse gas emissions.

Animal agricult... (read more)

This is a great idea! From starting a group at Brown University last year, I can say that it definitely would have been helpful to have a remote volunteer helping out.

It is very hard to start a well-run EA group at your university because it requires that you have a lot of time and a good idea of how to start and lead the group. Having volunteers help remotely would make the process a lot easier.

Here are some things that I think could be helpful for a remote volunteer to do for a small EA university group:

  • Give advice on how to structure the group (while un
... (read more)

I only recently got around to reading this, but I'm very glad I did! I definitely recommend reading the full paper, but for those interested, there is also this TED talk where you can get the gist of it.

In any case, the paper made me wonder about the possibility of having a sort of 'worst case scenario bunker system' capable of rebuilding society. I imagine such discussion was not included in this paper because it isn't relevant to protecting against a "devastation of civilization" (death of at least 15% of world population) a... (read more)

2
DavidNash
4y
Switzerland seems to have a bunker and archive system - link.