
What's an ideal name for the larger ecosystem that EA resides in? Including things like the Progress Studies, Longtermist, and Rationality communities?


5 Answers

Why not just call it the EA-adjacent ecosystem? I think there are lots of communities that intersect with EA, and it would probably be difficult to make one acronym that includes all of these communities.

Strong upvote for something like this, unless there's some context I'm missing? I.e., what is the sentence that this would belong in?

Maybe the "PEARL communities" (progress studies, effective altruism, rationality, and longtermism)?

Another adjacent community you might want to mention is the forecasting community.

RyanCarey
Yes, and I guess there are a lot of other components that could possibly be added to that list: science reform (reproducibility, open science), fact-checking, governance reform (approval voting or empowerment of the technocracy), which vary from being possible small ingredients of any new "enlightenment" to being unlikely to come to much...
Stefan_Schubert
My sense is that the forecasting community overlaps more with the PEARL communities than, e.g., the fact-checking community does.
RyanCarey
The FFLARRP ecosystem: forecasting, fact-checking, longtermism, altruism, rationality, reform, and progress! :P
Milan Griffes
Sexy.
[anonymous]

Clarification question: why do you understand longtermism to be outside of EA?

It seems to me that a longtermist (I assume you mean someone who combines belief in strong longtermism (Greaves and MacAskill, 2019) with belief in doing the most good) is just one particular kind of effective altruist (an effective altruist with particular moral and empirical beliefs).

RyanCarey
Just like environmentalism and animal rights intersect with EA, without being a subset of it, the same could be true for longtermism. (I expect longtermism to grow a lot outside of EA, while remaining closer to EA than those other groups.)
Luke Freeman 🔸
Yeah – things like the Long Now Foundation have been around for decades and aren't necessarily approaching longtermism from the same angle as EA.

Not bad, but maybe not catchy enough? I'm also worried about the connotation of "pearl" as in a prized thing.

Worried about the analogue where some atheists and rationalists started calling themselves "Brights" and everyone threw up in their mouth a little. :)

I think EA and Rationality are fine.

How would you define longtermism so that it isn't pretty much by definition EA? Like longtermism that isn't necessarily primarily motivated by consequences for people in the future? I think GPI may have explored some such views, but I think it's close enough to EA that we don't need a new term.

If we're including progress studies, why not international development, global health, AI safety, biosecurity, nuclear security, social movements, animal ethics, vegan studies, conflict and peace studies, transhumanism, futurism, philosophy of mind, etc.? Is progress studies more cause-neutral?

Spontaneously, I find "Broad Rationality" a plausible candidate. (I found it being used as a very specific concept mainly by Elster (1983), but Google gives only 46 hits for '"broad rationality" elster', though there are of course more hits on the word combination more generally.)

I typically refer to this as "EA+", and people seem to understand what I mean.

Comments

Could you say a little more about the context(s) where a name seems useful?

(I think it's often easier to think through what's wanted from a name when you understand the use case, and sometimes when you try to do this you realise it was a slightly different thing that you really wanted to name anyway.)

TBH, it's a question that popped into my mind from background consciousness, but I can think of many possible applications:

  • helping people in various parts of the EA-adjacent ecosystem know about the other parts, which they may be better-suited to helping
  • helping people in various parts of this ecosystem understand what thinking (or doing) has already been done in other parts of the ecosystem
  • building kinship between parts of the ecosystem
  • academically studying the overall ecosystem: why have these similar movements sprung up at a similar time?
  • planning for which parts have a comparative advantage at different types of tasks

Thanks, makes sense. This makes me want to pull out the common characteristics of these different groups and use those as definitional (and perhaps realise we should include other groups we're not even paying attention to!), rather than treat it as a purely sociological clustering. Does that seem good?

Like maybe there's a theme about trying to take the world and our position in it seriously?

Makes sense - I guess they're all taking an enlightenment-style worldview and pursuing intellectual progress on questions that matter over longer timescales...

Maybe the obvious suggestion then is "new enlightenment"? I googled, and the term has some use already (e.g. in a talk by Pinker), but it feels pretty compatible with what you're gesturing at. I guess it would suggest a slightly broader conception (more likely to include people or groups not connected to the communities you named), but maybe that's good?

I find "new enlightenment" very fitting. But I wonder whether it might at times be perceived as a not very humble name (that need not be a problem, but I wonder whether some, me included, might at times end up feeling uncomfortable calling ourselves part of it).

I agree that this is potentially an issue. I think it's (partially) mitigated the more it's used to refer to ideas rather than people, and the more it's seen to be a big (and high prestige) thing.

As I mentioned above, cf. "Brights".
