New & upvoted


Quick takes

We should shut down EA UK, change our mind

EA UK is hiring a new director, and if we don't find someone who can suggest a compelling strategy, shutting down is a likely outcome despite our having ~9 months of funding runway.

Over the last decade EA in the UK has been pretty successful: Loxbridge has the highest number of people involved in EA, there are multiple EA-related organisations, and many people in government, tech, business, academia, media, etc. are positively inclined towards EA. Because of this success (not that we're claiming counterfactual credit), there is less low-hanging fruit for a national/city group to pick. For example:

* Conferences - EAG London and student summits are run by CEA
* Co-working - There are at least 3 different places to co-work (LEAH, LISA, AIM) with room for 100+ people, as well as many other orgs that have space for guests
* Student groups - Covered by a combination of Arcadia Impact and CEA
* Incubation of new organisations - AIM/CE
* Media outreach - Mainly done by the most relevant organisations/CEA

I'm not saying "mission accomplished", but EA-specific community building in the UK will require a good understanding of the existing landscape, plus ideas for what is missing and is unlikely to be done by someone else.
I just saw that Season 3, Episode 9 of Leverage: Redemption ("The Poltergeist Job"), released on May 29, 2025, has an unfortunately very unflattering portrayal of "effective altruism". The main antagonist, Matt, CEO of Futurilogic, uses EA to justify horrific actions, including allowing firefighters to be injured when his company's algorithm throttles cell service during emergencies. He also literally murders people while claiming it's for the greater good. And if that's not enough, he's also laundering money for North Korea through crypto investments!

Why would he do this? He explicitly invokes utilitarian reasoning ("Trolley Theory 101") to dismiss the harm he causes. And when wielding an axe to kill someone, Matt says: "This is altruism, Skylar! Whatever I need to do to save the world."

But what's his cause area? Something about ending "global hunger and homelessness" through free internet access. Matt never articulates any real theory of change beyond "make money (and do crimes) → launch free internet → somehow save world." And of course the show depicts the EA tech executives at Futurilogic as being in a "polycule" with a "hive mind" mentality. Bummer.
Musings on non-consequentialist altruism under deep unawareness

(This is a reply to a comment by Magnus Vinding, which ended up seeming like it was worth a standalone Quick Take.)

From Magnus: The intuition here seems to be, "trying to actively do good in some restricted domain is morally right (e.g., virtuous), even when we're not justified in thinking this will have net-positive consequences[1] according to impartial altruism". Let's call this intuition Local Altruism is Right (LAR). I'm definitely sympathetic to this. I just think we should be cautious about extending LAR beyond fairly mundane "common sense" cases, especially to longtermist work.

For one, the reason most of us bothered with EA interventions was to do good "on net" in some sense. We weren't explicitly weighing up all the consequences, of course, but we didn't think we were literally ignoring some consequences — we took ourselves to be accounting for them with some combination of coarse-grained EV reasoning, heuristics, "symmetry" principles, discounting speculative stuff, etc. So it's suspiciously convenient if, once we realize that that reason was confused, we still come to the same practical conclusions.

Second, for me the LAR intuition goes away upon reflection unless at least the following hold (caveat in footnote):[2]

1. The "restricted domain" isn't too contrived in some sense; rather, it's some natural-seeming category of moral patients or welfare-relevant outcome.
   (How we delineate "contrived" vs. "not contrived" is of course rather subjective, which is exactly why I'm suspicious of LAR as an impartial altruistic principle. I'm just taking the intuition on its own terms.)
2. I'm at least justified in (i) expecting my intervention to do good overall in that domain, and (ii) expecting not to have large off-target effects of indeterminate net sign in domains of similar "speculativeness" (see "implementation robustness").
   ("Speculativeness", too, is subjective. And whil
calebp · 3d
The flip side of "value drift" is that you might arrive at dramatically "better" values in a few years' time and regret locking yourself into a path where you're not able to fully capitalise on your improved values.
There's a serious courage problem in Effective Altruism

I am so profoundly disappointed in this community. It's not for having a different theory of change; it's for the fear I see in people's eyes when they consider going against AI companies or losing "legitimacy" by not being associated with them. The squeamishness I see when people consider talking about AI danger in a way the public can understand, and the fear of losing face within the inner circle. A lot of you value being part of a tech elite more than you do what happens to the world. Full stop. And it does bother me that you have a mutual pact to think of yourselves as good despite your corrupt relationships with the industry that's most likely to get us all killed.