All posts · 2022
Quick takes

Thomas Kwa · 2y
EA Forum content might be declining in quality. Here are some possible mechanisms:

1. Newer EAs have worse takes on average, because the current processes of recruitment and outreach produce a worse distribution than the old ones.
2. Newer EAs are too junior to have good takes yet. It's just that the growth rate has increased, so there's a higher proportion of them.
3. People who have better thoughts get hired at EA orgs and are too busy to post. There is an anticorrelation between the amount of time people have to post on the EA Forum and the quality of the person.
4. Controversial content, rather than good content, gets the most engagement.
5. Although we want more object-level discussion, everyone can weigh in on meta/community stuff, whereas they only know about their own cause areas. Therefore community content, especially shallow criticism, gets upvoted more. There could be a similar effect for posts by well-known EA figures.
6. Contests like the criticism contest decrease average quality, because the type of person who would enter a contest to win money on average has worse takes than the type of person who has genuine deep criticism. There were 232 posts for the criticism contest and 158 for the Cause Exploration Prizes, which combined is more top-level posts than the entire forum in any month except August 2022.
7. The EA Forum is turning into a place primarily optimized for people to feel welcome and talk about EA, rather than for impact.
8. All of this is exacerbated as the most careful and rational thinkers flee somewhere else, expecting that they won't get good-quality engagement on the EA Forum.
The EA Mindset

This is an unfair caricature/lampoon of parts of the 'EA mindset', or maybe in particular of my mindset towards EA.

Importance: Literally everything is at stake: the whole future lightcone, astronomical utility, suffering and happiness. Imagine the most important thing you can think of, then times that by a really large number with billions of zeros on the end. That's a fraction of a fraction of what's at stake.

Special: You are in a special time upon which the whole of everything depends. You are also one of the special chosen few who understand how important everything is. You also understand the importance of rationality and evidence, which everyone else fails to get (you even have the suspicion that some of the people within the chosen few don't actually 'really get it').

Heroic responsibility: "You could call it heroic responsibility, maybe," Harry Potter said. "Not like the usual sort. It means that whatever happens, no matter what, it's always your fault. Even if you tell Professor McGonagall, she's not responsible for what happens, you are. Following the school rules isn't an excuse, someone else being in charge isn't an excuse, even trying your best isn't an excuse. There just aren't any excuses, you've got to get the job done no matter what."

Fortunately, you're in a group of chosen few. Unfortunately, there's actually only one player character, that's you, and everyone else is basically a robot. Relying on a robot is not an excuse for failing to ensure that everything ever goes well (specifically, goes in the best possible way).

Deference: The thing is, though, a lot of this business seems really complicated. Like, maximising the long-term happiness of the whole impact universe... where do you even start? Luckily some of the chosen few have been thinking about this for a while, and it turns out the answer is AI safety. Obviously you wouldn't trust just anyone on this, everything is at stake after all. But the chosen few have concluded this based on reason and evidence, and they also drink Huel like you. And someone knows someone who knows Elon Musk, and we have $10 trillion now, so we can't be wrong. (You'd quite like to have some of that $10 trillion so you can stop eating supernoodles, but probably it's being used on more important stuff...) (Also remember, there isn't really a 'we', everyone else is an NPC, so if it turns out the answer was actually animals and not AI safety after all, that's your fault for not doing enough independent thinking.)

Position to do good: You still feel kinda confused about what's going on and how to effectively maximise everything. But the people at EA orgs seem to know what's going on, and some of them go to conferences like the leaderscone forum. So if you can just get into an EA org, then probably they'll let you know all the secrets and give you access to the private Google Docs and stuff. Also, everyone listens to people at EA orgs, so you'll be in a much better position to do good afterwards. You might even get to influence some of that $10 trillion that everyone talks about. Maybe Elon Musk will let you have a go on one of his rockets.

Career capital: EA orgs are looking for talented, impressive, ambitious, high-potential, promising people. You think you might be one of those, but sometimes you have your doubts, as you sometimes fail at basic things like having enough clean clothes. If you had enough career capital, you could prove to yourself and others that you in fact did have high potential, and would get a job at an EA org. You're considering getting enough career capital by starting a gigaproject or independently solving AI safety. These things seem kind of challenging, but you can just use self-improvement to make yourself the kind of person that could do these things.
Linch · 1y
tl;dr: In the context of interpersonal harm:

1. I think we should be more willing than we currently are to ban or softban people.
2. I think we should not assume that CEA's Community Health team "has everything covered".
3. I think more people should feel empowered to tell CEA CH about their concerns, even (especially?) if other people appear to not pay attention or do not think it's a major concern.
4. I think the community is responsible for helping the CEA CH team have a stronger mandate to deal with interpersonal harm, including some degree of acceptance of mistakes of overzealous moderation.

(All views my own.)

I want to publicly register what I've said privately for a while: for people (usually but not always men) who we have considerable suspicion have been responsible for significant direct harm within the community, we should be significantly more willing than we currently are to take more actions, and accept the associated tradeoffs, to limit their ability to cause more harm in the community. Some of these actions may look pretty informal/unofficial (gossip, explicitly warning newcomers against specific people, keeping an unofficial eye out for some people during parties, etc.). However, in the context of a highly distributed community with many people (including newcomers) that's also embedded within a professional network, we should be willing to take more explicit and formal actions as well.

This means I broadly think we should increase our willingness to a) ban potentially harmful people from events, b) reduce grants we make to people in ways that increase harmful people's power, and c) warn organizational leaders about hiring people into positions of power/contact with potentially vulnerable people.

I expect taking this seriously to involve taking on nontrivial costs. However, I think this is probably worth it. I'm not sure why my opinion here is different from others'[1], but I will try to share some generators of my opinion, in case it's helpful:

A. We should aim to be a community that's empowered to do the most good. This likely entails appropriately navigating the tradeoff of attempting to reduce both a) the harm of contributors feeling or being unwelcome due to sexual harassment or other harms, and b) the harm of contributors feeling or being unwelcome due to false accusations or an overly zealous response.

B. I think some of this is fundamentally a sensitivity vs. specificity tradeoff. If we have a detection system that's too tuned to reduce the risk of false positives (wrong accusations being acted on), we will overlook too many false negatives (people being banned/censured too slowly, or not at all), and vice versa. Consider the first section of "Difficult Tradeoffs":

- Avoid false negatives: take action if there's reason to think someone is causing problems
- Avoid false positives: don't unfairly harm someone's reputation / ability to participate in EA

In the world we live in, I've yet to hear of a single incident where, in full context, I strongly suspect CEA CH (or, for that matter, other prominent EA organizations) was overzealous in recommending bans due to interpersonal harm. If our institutions were designed only to reduce first-order harm (both from direct interpersonal harm and from accusations), I'd expect to see people err in both directions. Given the (apparent) lack of false positives, I broadly expect we accept too high a rate of false negatives.
More precisely, I do not think CEA CH's current work on interpersonal harm will lead to a conclusion like "We've evaluated all the evidence available for the accusations against X. We currently think there's only a ~45% chance that X has actually committed such harms, but given the magnitude of the potential harm, and our inability to get further clarity with more investigation, we've pre-emptively decided to ban X from all EA Globals pending further evidence." Instead, I get the impression that substantially more certainty is deemed necessary before taking action. This differentially advantages conservatism, and increases the probability and allowance of predatory behavior.

C. I expect an environment with more enforcement to be more pleasant than an environment with less enforcement. I expect an environment with a default expectation of enforcement for interpersonal harm to be more pleasant for both men and women: most directly by reducing the first-order harm itself, but secondarily because an environment where people are less "on edge" about potential violence is generally more pleasant. As a man, I will at least find it more pleasant to interact with women in a professional context if I'm not worried that they're worried I'll harm them. I expect this to be true for most men, and the loud worries online about men being worried about false accusations to be heavily exaggerated and selection-skewed.[2]

Additionally, I expect someone who exhibits traits like reduced empathy, willingness to push past boundaries, sociopathy, etc., to also exhibit similar traits in other domains. So someone who is harmful in (e.g.) sexual matters is likely to also be harmful in friendly and professional matters. For example, in the more prominent cases I'm aware of where people accused of sexual assault were eventually banned, they also appeared to have done other harmful things, like a systematic history of deliberate deception, being very nasty to men, cheating on rent, and harassing people online. So I expect more bans to broadly be better for our community.

D. I expect people who have been involved in EA for longer to be systematically biased in both which harms we see and which things are the relevant warning signals. The negative framing here is "normalization of deviance". The more neutral framing is that people (including women) who have been around EA for longer a) may be systematically less likely to be targeted (as they have more institutional power and cachet) and b) are selection-biased to be less likely to have been harmed within our community (since the people who have received the most harm are more likely to have bounced off).

E. I broadly trust the judgement of CEA CH in general, and Julia Wise in particular. (EDIT 2023/02: I tentatively withhold my endorsement until this allegation is cleared up.) I think their judgement is broadly reasonable, and they act well within the constraints that they've been given. If I did not trust them (e.g. if I were worried that they'd pursue political vendettas in the guise of harm reduction), I'd be significantly more worried about giving them more leeway to make mistakes with banning people.[3]

F. Nonetheless, the CEA CH team is just one group of individuals, and does a lot of work that's not just on interpersonal harm. We should expect that a) they only have a limited amount of information to act on, and b) the rest of EA needs to pick up some of the slack where they've left off.
For a), I think an appropriate action is for people to be significantly more willing to report issues to them, as well as to make sure new members know about the existence of the CEA CH team and Julia Wise's work within it. For b), my understanding is that CEA CH sees itself as having what I'd call a "limited purview": e.g. they only have the authority to ban people from official CEA (and maybe CEA-sponsored) events, and not e.g. events hosted by local groups. So I think EA community-builders in a group-organizing capacity should probably make it one of their priorities to be aware of the potential broken stairs in their community, and be willing to take decisive action to reduce interpersonal harm.

Remember: EA is not a legal system. Our objective is to do the most good, not to wait until we are absolutely certain of harm before taking steps to further limit it.

One thing my post does not cover is opportunity cost. I mostly framed things as changing the decision boundary. However, in practice I can see how having more bans would be more costly in time and maybe money than the status quo. I don't have good calculations here, but my intuition is strongly in the direction that having a safer and more cohesive community is worth the relevant opportunity costs.

1. ^ For what it's worth, my guess is that the average person in EA leadership wishes the CEA CH team did more (i.e. that it is currently insufficiently punitive), rather than wishing it did less (i.e. that it is currently overzealous). I expect there's significant variance in this opinion, however.

2. ^ This is a potential crux.

3. ^ I can imagine this being a crux for people who oppose greater action. If so, I'd like to a) see this argument explicitly presented and debated, and b) see people propose alternatives for reducing interpersonal harm that route around CEA CH.
Jonas V · 1y
EA Forum discourse tracks actual stakes very poorly

Examples:

1. There have been many posts about EA spending lots of money, but to my knowledge no posts about the failure to hedge crypto exposure against the crypto crash of the last year, or the failure to hedge Meta/Asana stock, or EA's failure to produce more billion-dollar start-ups. EA spending norms seem responsible for $1m–$30m of 2022 expenses, but failures to preserve/increase EA assets seem responsible for $1b–$30b of 2022 financial losses, a ~1000x difference.
2. People are demanding transparency about the purchase of Wytham Abbey (£15m), but they're not discussing whether it was a good idea to invest $580m in Anthropic (HT to someone else for this example). The financial difference is ~30x, and the potential impact difference seems much greater still.

Basically, I think EA Forum discourse, karma voting, and the inflation-adjusted overview of top posts completely fail to correctly track the importance of the ideas presented there. Karma seems useful for deciding which comments to read, but otherwise its use seems fairly limited. (Here's a related post.)
Comments on Jacy Reese Anthis' Some Early History of EA (archived version).

Summary: The piece could give the reader the impression that Jacy, Felicifia, and THINK played a comparably important role to the Oxford community, Will, and Toby, which is not the case. I'll follow the chronological structure of Jacy's post, focusing first on 2008-2012, then 2012-2021. Finally, I'll discuss "founders" of EA, and sum up.

2008-2012

Jacy says that EA started as the confluence of four proto-communities: 1) SingInst/rationality, 2) GiveWell/Open Phil, 3) Felicifia, and 4) GWWC/80k (or the broader Oxford community). He also gives honorable mentions to randomistas and other Peter Singer fans. Great - so far I agree. What is important to note, however, is the contributions that these various groups made. For the first decade of EA, most key community institutions of EA came from (4) - the Oxford community, including GWWC, 80k, and CEA - and secondly from (2), although GiveWell seems to me to have been more of a grantmaking entity than a community hub. Although the rationality community provided many key ideas and introduced many key individuals to EA, the institutions that it ran, such as CFAR, were mostly oriented toward its own "rationality" community.

Finally, Felicifia is discussed at greatest length in the piece, and Jacy clearly has a special affinity for it, based on his history there, as do I. He goes as far as to describe the 2008-12 period as a history of "Felicifia and other proto-EA communities". Although I would love to take credit for the development of EA in this period, I consider Felicifia to have had the third- or fourth-largest role in "founding EA" of the groups on this list. I understand its role as roughly analogous to the one currently played (in 2022) by the EA Forum, as compared to those of CEA and Open Phil: it provides a loose social scaffolding that extends to parts of the world that lack any other EA organisation. It therefore provides some interesting ideas and leads to the discovery of some interesting people, but it is not where most of the work gets done.

Jacy largely discusses the Felicifia Forum as a key component, rather than the Felicifia group blog. However, once again, this is not quite what I would focus on. I agree that the Forum contributed a useful social-networking function to EA. However, I suspect we will find that more of the important ideas originated on Seth Baum's Felicifia group blog and that more of the big contributors started there. Overall, I think the emphasis on the blog should be at least as great as that on the forum.

2012 onwards

Jacy describes how he co-founded THINK in 2012 as the first student network explicitly focused on this emergent community. What he neglects to mention is that the GWWC and 80,000 Hours student networks already existed, focusing on effective giving and impactful careers. He also mentions that a forum post dated to 2014 discussed the naming of CEA, but fails to note that the events described in that post occurred in 2011, culminating in the name "effective altruism" being selected for that community in December 2011. So steps had already been taken toward having an "EA" moniker and an EA organisation before THINK began.

Co-founders of EA

To wrap things up, let's get to the question of how this history connects to the "co-founding" of EA.

> Some people including me have described themselves as "co-founders" of EA. I hesitate to use this term for anyone because this has been a diverse, diffuse convergence of many communities.
> However, I think insofar as anyone does speak of founders or founding members, it should be acknowledged that dozens of people worked full-time on EA community-building and research since before 2012, and very few ideas in EA have been the responsibility of one or even a small number of thinkers. We should be consistent in the recognition of these contributions.

There may have been more, but only three people come to mind who have described themselves as co-founders of EA: Will, Toby, and Jacy. For Will and Toby, this makes absolute sense: they were the main ringleaders of the main group (the Oxford community) that started EA, and they founded the main institutions there. The basis for considering Jacy among the founders, however, is that he was around in the early days (as were a couple of hundred others), and that he started one of the three main student groups - the latest, and least important, among them. In my view, it's not a reasonable claim to have made.

Having said that, I agree that it is good to emphasise that as the "founders" of EA, Will and Toby did only a minority - perhaps 20% - of the actual work involved in founding it. Moreover, I think there is a related, interesting question: if Will and Toby had not founded EA, would it have happened anyway? The groundswell of interest that Jacy describes suggests to me an affirmative answer: a large group of people were already becoming increasingly interested in areas relating to applied utilitarianism, and increasingly connected with one another, via GiveWell, academic utilitarian research, Felicifia, utilitarian Facebook groups, and other mechanisms. I lean toward thinking that something like an EA movement would have happened one way or another, although its characteristics might have been different.