I know multiple victims/survivors/whatever who were interviewed by TIME, not only one of the named individuals but some of the anonymous interviewees as well.

The first time I cried because of everything that has happened in EA during the last few months was when I learned, for the fifth or sixth time, that some of my closer friends in EA had lost everything because of the FTX collapse. The second time I cried about it all was today.
I just posted on the Facebook wall of another effective altruist:
Hey, I really appreciate everything you do for the effective altruism community! Happy birthday!
We would all greatly benefit from expressing our gratitude like this to each other more often.
After the collapse of FTX, any predictions that the effective altruism movement will die with it are greatly exaggerated. Effective altruism will change in ways that maybe none of us can even predict, but it won't die.

There are countless haters of so many movements on the internet who will themselves into believing that what happens to a movement when it fails is whatever they wish would happen, i.e., that the movement will die. Sensationalist polemicists and internet trolls don't understand history or the world well enough to know what they're talking about when they gleefully celebrate the end of whatever cultural forces they hate. This isn't just true for effective altruism. It's true for every movement of which anyone takes such a shallow interpretation. If movements like socialism, communism, and fascism can make a worldwide comeback in the 2010s and 2020s in spite of their histories, effective altruism isn't going to just up and die, not by a long shot.
Small movements (like species with few members, I think) die more quickly, as do younger movements.
Also EA seems to have a quite specific type of person it appeals to & a stronger dependence on current intellectual strands (it did not develop separately in China & the Anglosphere and continental Europe), which seems narrower than socialism/communism/reactionary thought.
I think it's good to worry about EA disappearing or failing in other ways (becoming a cargo-cult shell of its original form, mixing up instrumental and terminal goals, stagnating & disappearing like general semantics &c).
I've tried to find a paper investigating this question, but haven't been successful—anyone got a link? ↩︎
Events of the last few months have shown that, in recent years, many whistleblowers weren't taken seriously enough. If they had been, a lot of the problems in EA that have come to pass might have been avoided or prevented entirely. They at least could have been resolved much sooner, before the damage became so great.
While more effective altruists have come to recognize this in the last year, one case that I think deserves to be revisited, but hasn't been, is this review of problems in EA and related research communities, originally written by Simon Knutsson in 2019 and based on his own experiences working in the field.
I'd be curious about more concretization on this, if possible. I don't think my current model is that "whistleblowers weren't taken seriously enough" is the reason a bunch of bad stuff happened here, but there's something that rhymes with that that I maybe do agree with.
Why are you posting these as shortforms instead of as top-level posts?
I wrote my other reply yesterday from my smartphone, and it was hard to tell which of my shortform posts you were replying to, so I thought it was a different one; that's why my comment from yesterday may not have seemed so relevant. I'm sorry for any confusion.

Anyway, the reason I'm posting shortforms like this is that they're thoughts on my mind that I want at least some effective altruists to notice, though I'm not prepared right now to contend with the feedback and potential controversy that making these top-level posts would provoke.
It's long enough to be a top-level post, though around the days these thoughts were on my mind, I didn't have time to flesh it out more, with links or more details, or to address what I'm sure would be a lot of good questions I'd receive. I wouldn't want to post it before it could be of better quality.
I've started using my shortform to draft stubs or snippets of top-level posts. I'd appreciate any comments or feedback on them, whether encouraging me to turn them into top-level posts or discouraging me from doing so, if anyone thinks that's worthwhile.
Any formal conflict of interest I ever had in effective altruism, I shed almost five years ago. I've been a local and online group organizer in EA for a decade, so I've got lots of personal friends who work at, or with support from, EA-affiliated organizations. Those might be called more informal conflicts of interest, though I don't know how much they count as conflicts of interest at all.
I haven't had any greater social conflicts of interest, like being in a romantic relationship with anyone else in EA, for that long as well.
I've never signed a non-disclosure agreement for any EA-affiliated organization I might have had a role at or contracted with for any period of time. Most of what I'm referring to here is nothing that should worry anyone who is aware of the specific details of my personal history in effective altruism. My having dated someone for a few months who wasn't a public figure or a staffer at any EA-affiliated organization, or me having been a board member in name only for a few months to help get off the ground a budding EA organization that has now been defunct for years anyway, are of almost no relevance or significance to anything happening in EA in 2023.
In 2018, I was a recipient of an Effective Altruism Grant, one of the kinds of alternative funding programs administered by the Centre for Effective Altruism (CEA), like the current Effective Altruism Funds or the Community Building Grants program, though the EA Grants program was discontinued a few years ago.
I was also contracted for a couple months in 2018 with the organization then known as the Effective Altruism Foundation, as a part-time researcher for one of the EA Foundation's projects, the Foundational Research Institute (FRI), which has for a few years now been succeeded by a newer effort launched by many of the same effective altruists who operated FRI, called the Center for Long-Term Risk (CLTR).
Most of what I intend to focus on posting about on this forum in the coming months won't be at all about CLTR as it exists today or its background, though there will be some. Much of what I intend to write will technically entail referencing some of the CEA's various activities, past and present, though that's almost impossible to avoid when trying to address the dynamics of the effective altruism community as a whole anyway. Most of what I intend to write that will touch upon the CEA will have nothing to do with my past conflict of interest of having been a grant recipient in 2018.
Much of the above is technically me doing due diligence, though that's not my reason for writing this post.
I'm writing this post because everyone else should understand that I indeed have zero conflicts of interest, that I've never signed a non-disclosure agreement, and that for years and still into the present, I've had no active desire to work up to netting a job or career within most facets of EA.
(Note, Jan. 17: Some of that could change but I don't expect any of it to change for at least the next year.)
People complained about how the Centre for Effective Altruism (CEA) said it was trying not to be like the "government of effective altruism" but then kept acting exactly like the government of EA for years and years.

Yet that's wrong. The CEA was more like the police force of effective altruism. The de facto government of effective altruism was, for the longest time, maybe from 2014 to 2020, Good Ventures/Open Philanthropy. All of that changed with the rise of FTX. All of that changed again with the fall of FTX.

I've put everything above in the past tense because that was the state of things before 2022. There's no such thing as a "government of effective altruism" anymore. Neither the CEA, Open Philanthropy, nor Good Ventures could fill that role, regardless of whether anyone would want them to. We can't go back. We can only go forward.

There is no backup plan anyone in effective altruism had waiting in the wings to roll out in case of a movement-wide leadership crisis. It's just us. It's just you. It's just me. It's just left to everyone who is still sticking around in this movement together. We only have each other.
I can't overstate how much the UX and UI for the EA Forum on mobile suck. They suck so much. I know the Online Team at the CEA is endlessly busy, and I don't blame anyone for this, but the mobile UX/UI for the EA Forum is abysmal.
It should be noted that for most of the period the Centre for Effective Altruism itself acknowledges as its longest continuous pattern of mistakes, from 2016 to 2020, according to the Mistakes page on the CEA's website, the only three members of the board of directors were Nick Beckstead, Toby Ord, and William MacAskill.
(Note, January 15th: as I'm initially writing this and as of right now, I want to be clear and correct about this enough that I'll be running it by someone from the CEA. If someone from CEA reads this before I contact any of you, please feel free to either reply here or send me a private message for any mistakes/errors I've made here.)
(Note, Jan. 16th: I previously stated that Holden Karnofsky was a board member, not Toby. I also stated that this was the board of the CEA in the UK, that was my mistake. I've now been corrected by a staffer at the CEA, as I mentioned before that I'd be in contact with. I apologize for my previous errors.)
As of June 2022, Holden Karnofsky said he was "currently on 4 boards in addition to Open Philanthropy's."
If that's still the case, that's too many organizations for a single individual in effective altruism to hold board positions at.
Some Takes on Radical Action and Existential Risk Reduction
Some past social movements that pursued the reduction of extinction risks are analogous to movements to reduce existential risks today, including AI safety/alignment and affiliated movements such as effective altruism, longtermism, etc. In the links provided are some examples of the most radical violent actions taken in North America by the more extreme wings of those other movements.
The Plowshares movement is an example of a non-violent anti-war movement that has organized some of the most radical non-violent actions intended to decrease the chance of nuclear war, from the 1980s to the present.
There are examples of many different kinds of non-violent direct action from environmental movements, but there are so many that even a summary evaluation of them would be worthy of its own post. Environmental movements have been some of the biggest global social movements at their peaks, which come in waves, as with most other long-standing/multi-generational mass movements:
More radical tactics have continuously been practiced by environmental movements somewhere in the world since the 1960s, though there are times when radical action is less prominent and widespread globally across environmental movements at large. Like the waves of many other anti-establishment movements in the Western world at the time, environmental movements became increasingly radical, or even violent, as the 1970s went on, which killed that wave of environmentalism by alienating popular support and provoking an overwhelming backlash from the state. Environmental movements tended to moderate during the 2nd wave, with more radical actions becoming less prominent overall and the movement being more non-violent. The 3rd wave is like the 1st in that it has become increasingly radical over time.
I'll probably make a link post with a proper summary later, but here is a follow-up from Simon Knutsson on recent events related to longtermism and the EA school of thought out of Oxford: https://www.simonknutsson.com/on-the-results-of-oxford-style-effective-altruism-existential-risk-and-longtermism/
The FTX bankruptcy broke something in the heart of effective altruism, but in the process, I'm astonished by how dank it has become. This community was never supposed to be this dank and has never been danker. I never would've expected this. It's absurd.
I thought more this morning about my shortform post from yesterday (https://forum.effectivealtruism.org/posts/KfwFDkfQFQ4kAurwH/evan_gaensbauer-s-shortform?commentId=SjzKMiw5wBe7bGKyT) and I've changed my mind about much of it. I expected my post to be downvoted because most people would perceive it as a stupid and irrelevant take. Here are some reasons I now disagree with it, though I couldn't guess whether anyone downvoted my post because they took my take seriously but still thought it sucked.
I've concluded that Dustin Moskovitz shouldn't go full Dark Brandon after all. It wouldn't just be suboptimal. It'd be too risky and could backfire. I don't know at what point it would kick in specifically, though at some point there'd be diminishing marginal returns to Dustin adopting more of a Dark Brandon-esque personal style. In hindsight, I should've applied the classic tool of so much effective altruism, thinking on the margin, to the question: what is the optimal amount of Dark Brandon for Dustin Moskovitz to embrace?
Dustin leaning in a more Dark-Brandon-esque direction couldn't totally solve any of the problems EA faces, and there are some kinds of problems it couldn't solve at all. It could, however, ameliorate the severity of some problems, in particular some of EA's image problems.
For those who don't know what I'm getting at, I'm thinking about how Dustin Moskovitz might tweak his public image or personal brand to improve upon its decent standing right now. Dustin is not the subject of as many conspiracy theories as many other billionaires and philanthropists, especially for one who had his start in Silicon Valley. He's not the butt of as many jokes as Mark Zuckerberg or Jeff Bezos about being a robot or an alien. If you asked a socialist, or someone who just hates billionaires for whatever reason, to make a list of the ten billionaires they hate the most, Dustin Moskovitz is one name that would almost certainly not make it onto the list.
The downside risk of Dustin becoming a more controversial or bold personality gets at the value he provides to EA by being the opposite. Because he has been a quieter philanthropist, he hasn't been seen nearly as much as the poster boy for EA as a movement. Hypothetically, for the sake of argument, if Asana went bankrupt for some reason, that would not be nearly as bad for EA as the collapse of FTX was. Dustin not feuding with people the way Elon Musk has means he doesn't have nearly as many enemies. That means the EA community overall has far fewer enemies. It's less hated. It's not as polarized or politicized. These are all very good things. Much of that is thanks to Dustin being more normal and less eccentric, less volatile and more predictable, and more of a private person than a blowhard.
I'm going to be explicit in my controversial rejection, or even condemnation, of criticisms of the perceived status quo in EA from multiple angles, and also of defences of it. I want to preregister my intentions before any pending allegations that I'm biased in favour of a particular side. If anything, I will be biased against all parties involved for their common failure to do right by the matters at hand.
For years now, much ink has been spilled about the promise and peril of the portents, for effective altruism, of dank memes. Many have singled me out as the person best suited to speak to this controversy. I've heard, listened, and taken all such sentiments to heart. This is the year I've opted to finally complete a full-spectrum analysis of the role of dank memes as a primary form of outreach and community-building.

This won't be a set of shitposts on other social media websites. This will be a sober evaluation of dank EA memes, composed of at least one post, if not a series of posts, on the EA Forum. They may be so serious that few, if any, memes will be featured at all. It is time for the dank EA memes to come home.