Recently, Ben Pace wrote a well-intentioned blog post mostly based on complaints from 2 (of 21) Nonlinear employees who 1) wanted more money, 2) felt socially isolated, and 3) felt persecuted/oppressed.
Of relevance, one of them has accused the majority of her previous employers - and 28 people in total, that we know of - of abuse.
She has accused multiple people of threatening to kill her and literally accused an ex-employer of murder. Within three weeks of joining us, she had accused five separate people of abuse: not paying her what was promised, controlling her romantic life, hiring stalkers, and other forms of persecution.
We have empathy for her. Initially, we believed her too.
We spent weeks helping her get her “nefarious employer to finally pay her” and commiserated with her over how badly they mistreated her.
Then she started accusing us of strange things.
You’ve seen Ben’s evidence, which is largely the word of two people and a few misleadingly cropped screenshots. Below, we provide extensive evidence (contracts, recordings, screenshots, etc) demonstrating that the post’s claims are false, misleading, or catastrophize normal things. This post is a summary; we also include a ~200 page appendix of additional evidence. We also present a hypothesis for how Ben got so much wrong.
Two ways you can read this: 1) stop whenever you’re convinced because you’ve seen enough falsehoods that you no longer think their remaining claims are likely to be true, or 2) jump to the specific claims that are most important to you, and look at the evidence we provide for them. You can see summary tables of the key claims and evidence here, here, and here.
Our request as you read on: consider this new evidence you haven’t seen yet with a scout mindset, and reflect on how to update on the accuracy of the original claims.
It’s messy, sorry. Given the length, we’re sure we’ve made mistakes - please do let us know. We’re very happy to receive good faith criticism - this is what makes EA amazing.
Finally, we want to note that we have a lot of empathy for Alice and Chloe. We believe them when they say they felt bad, and we present a hypothesis for what caused their negative emotions.
Short summary table
| Claim | What actually happened |
|---|---|
| Alice claimed: they asked me to travel with illegal drugs. | - False. It was legal medicine - from a pharmacy. - Ben knew this and published it anyway. |
| Alice claimed: I was running out of money, so I was scared to quit because I was financially dependent on them (“[I] had €700 in [my] account”* etc.) | - Alice repeatedly misrepresented how much money she had. She actually had a separate bank account/business generating (according to her) ~$3,000 a month in passive income. - Alice told us she was an independent business owner, so she either lied to Ben, Ben misled his readers about this, or she lied to us about the business. |
| Chloe claimed: they tricked me by refusing to write down my compensation agreement | - False. We did write it down. We have a work contract and interview recordings. And when she realized this accusation was false, instead of apologizing, she tried to change the topic - “it’s not about whether I had a contract or salary.”* - We told Ben we had proof, and he refused to look at it and published this anyway. |
| Alice claimed: they paid me next to nothing and were financially controlling | We were the opposite of “financially controlling”*: - We gave her almost complete control over a ~$240,000 budget we had raised. - We even let her choose her own pay. |
| Alice/Chloe claimed Nonlinear failed to pay them. Later, they denied ever claiming this. | - Alice/Chloe accused us many times of not paying them - a serious accusation. We proved this was false. - Ben tried to walk this back last minute, saying “I no longer believe this is true”* - However, he didn’t remove all the references to this accusation - each one is proof that they were going around telling people this falsehood. - Even our friends thought we didn’t pay Alice anything (due to the rumors that Alice spread). - So they lied, got caught, and are now lying again by saying they never told the first lie. - Instead of apologizing and questioning Alice/Chloe’s other claims based on them being caught telling him provably false and damaging information, Ben shifted the topic - “the real issue is about the wealth disparity between her and Emerson”* |
| Alice claimed: They refused to get me food when I was sick, starving me into giving up being vegan | False. People heard this and thought we were monsters. We ran around for days getting her food, despite all 3 of us being sick or injured. We also had vegan food in the house that she liked, which Kat offered to cook for her (but she declined the offer). |
| Alice claimed: we were not able to live apart from them | - Strange, false accusation: Alice spent 2 of the 4 months living/working apart (dozens of EAs can verify she lived/worked in the FTX condos, which we did not live at) |
| Chloe claimed: they told me not to spend time with my romantic partner | - Also a strange, false accusation: we invited her boyfriend to live with us for 2 of the 5 months. We even covered his rent and groceries. - We were just about to invite him to travel with us indefinitely because it would make Chloe happy, but then Chloe quit. |
| Alice/Chloe claimed: we could only talk to people that Kat/Emerson invited to travel with us, making us feel socially dependent | - False. Chloe herself wrote the invite policy explicitly saying they were encouraged to invite friends/family. - They regularly invited people who joined us (e.g. Chloe’s boyfriend joined for 40% of the time) |
| Alice claimed: they told me not to see my family, making me socially dependent and isolated | - Bizarre, false accusation given that Alice spent 1 of the 4 months with her family - Kat encouraged her to set up regular calls with her family, and she did. |
| Alice/Chloe claimed: I was paid $1,000 per month (and kept implying this was all she was paid, saying it was “tiny pay” or “low pay”) | - The $1k/month was a stipend on top of traveling the world all-expenses-paid, which was the majority of the value (~$58k of the ~$70k estimated value of the compensation package) - It’s not the same as a salary, but it’s the comp Chloe signed up for and that we clearly communicated. And when Alice asked for pure cash, we said “sure” and even let her choose how much she paid herself. - It’s also misleading. Imagine somebody goes to the EA Hotel and then loudly shouts, “they only paid me $100 a month”. The biggest thing the EA Hotel provides is room & board. |
Alice/Chloe painted a picture of poverty and isolation, which simply does not match the exotic, socially-rich lifestyle they actually lived.
| Claim | What actually happened |
|---|---|
| Alice: You didn’t pay me! | - We paid Alice consistently on time and she herself often said “Thanks for paying me so fast!” - Once she accused us of not paying but she just hadn’t checked her bank account. - Another time she accused us of not paying her for “many months” when she’d received her stipend just 2 weeks prior. - She said she had to “strongly request” her salary, when really, she just hadn’t filled out the reimbursement system for months - We have text messages & bank receipts and she’s still telling people this. |
| Chloe claimed: I was expected to do chores around the house because I was considered low value | - This was part of her job - she was an assistant. We were very upfront, and have interview recordings showing she knew this before she accepted the job. - Imagine applying to be a dishwasher, hating washing dishes, then writing a “tell all” about how you felt demeaned/devalued because the restaurant “expected” you to wash dishes. |
| Chloe: I felt like they didn’t value me or my time (she implied she spent all her time doing assistant work) | - Chloe spent just ~10% of her time on assistant work (according to her own time tracking), the rest was high level ops & reading - We allocated 25% of her time to professional development (~$17,000 a year) - This is basically unheard of for any job, much less an assistant. - She got to read/develop any skills she wanted 2 hours a day (leadership, M&E, hiring, etc) - a dream to many EAs. - Kat showed so much gratitude that Chloe actually asked her to stop expressing gratitude. She said it made her feel Kat only valued her for her work. So Chloe accuses us of both valuing her work too much and too little. - It’s not that Kat didn’t value Chloe’s assistant work, it’s that Chloe didn’t seem to value assistant work, so constantly felt diminished for doing it (despite having agreed to do it when we hired her) - Base rate: ~50% of people feel undervalued at work. |
| Alice: Kat threatened my career for telling the truth | - False. Alice had spent months slandering Kat by spreading falsehoods that were damaging our reputation (see the numerous pages of evidence below). - Kat reached out multiple times, trying to hear her side, share her own, and make some attempts at conflict resolution. Alice refused. - However, despite being attacked, Kat had not defended herself by sharing the truth about what really occurred (which would have made Alice look very bad) - Kat communicated to Alice: Please stop attacking me. I don’t want to fight. If you don’t stop attacking me, I’ll have to defend myself. I haven’t yet told the truth about what you did, and if I do, it will end your career (paraphrased) - Alice painted herself as the victim and made Kat out to be the attacker, despite Alice being the one who had been attacking Kat for months by telling lies. - Why didn’t Kat defend herself? 1) She felt compassion for Alice. She was clearly struggling and needed professional help, not more discord. 2) She was terrified of Alice. Alice had accused 28+ people of abuse - wouldn’t you be scared knowing that? She was worried Alice would escalate further. Which she did anyway. |
| Saying “if you keep sharing your side, I’ll share mine - and that will end your career” is unethical and retaliatory | - Everybody agrees that if somebody is spreading damaging falsehoods about you that it can be good and ethical to share your side and correct the record. - If the truth would hurt the slanderer’s own career, you should still be able to share the truth - In fact, warning the slanderer first is often preferable to going public with the truth without warning them - it at least gives them a chance to stop. - The question is: did Alice spread falsehoods or “just share her negative experience”? (numerous pages of evidence below) - There’s a double standard here: if you share your experience and you’re lower status, that’s “brave”, but if you do the same thing and you’re higher status, that’s “retaliation”. This epistemic norm will predictably lead to inaccurate beliefs and unethical outcomes. |
This post is long, so if you read just one illustrative story, read this one
Ben wrote: “Before she went on vacation, Kat requested that Alice bring a variety of illegal drugs across the border for her (some recreational, some for productivity). Alice argued that this would be very dangerous for her personally.”
This conjures up vivid images of Kat as a slavemaster forcing poor Alice to be a cocaine smuggler, risking life in prison. Is it true?
Parts of the story Alice didn’t share:
- Kat requested Alice bring legal medicine from a pharmacy - specifically antibiotics and one pack of ADHD medicine - not illegal drugs. These medicines are cheap and legal without a prescription in other parts of Mexico we’d visited, and she was already going to a pharmacy anyway.
- After arriving, Alice learned that they require a prescription there. When she told Kat and Drew this, they both said “oh well, never mind!” - it wasn’t a big deal. But then Alice just went and got a prescription anyway.
Alice never argued this would be “very dangerous for her personally”:
- In direct contradiction of her claim that traveling with legal medicine would be too dangerous for her, she flew to Mexico with psilocybin mushrooms for herself.
- Not only that, while in Mexico, she did an actual drug deal for herself - she went out and illegally purchased, then traveled internationally with, actual recreational drugs (cannabis), again completely contradicting her story.
- In fact, Alice never told you that she traveled with actual illegal drugs - cannabis/LSD/psilocybin - for herself across most borders we know of. And Kat was the one warning her not to do that! For example, Alice bought psilocybin for herself just before flying out and Kat expressed concern about her traveling with that.
- In contrast to her “I’m a sweet, innocent girl who would never take such legal risks as traveling with drugs” framing, Alice was literally an ex-drug dealer and manufacturer. She told us she used to make a lot of money growing and distributing marijuana and psilocybin, but she was smoking so much of her own product that she stopped making money.
So, on both of these international flights, she traveled with actually illegal drugs for herself - and accused us of asking her to travel with… legal medicine.
Alice took a small request - could you swing by a pharmacy and grab some cheap antibiotics/ADHD medicine? - and twisted it into a narrative of us forcing her to risk prison as a drug mule, one that had commenters rushing for their pitchforks.
And it’s worse than that - Ben’s post implied that we largely agreed on the facts of the story, so people condemned us viciously in the comments! But he knew we didn’t agree - when he told us this story we literally laughed out loud because it was so absurd.
We shared much of this information with Ben - he knew it was legal medicine, not illegal drugs - yet he still published this misleading version. We were horrified that Ben published this knowing full well it wasn’t true. We told him we’d share these exact screenshots with him, but he refused to look at them.
It would be bad enough if Alice told this story to one person, but she was going around telling lots of people this! We were hearing from friends that Alice started telling stories like this just minutes after meeting them, completely unprompted - saying that the only reason she wasn’t succeeding was that Kat was persecuting her, that we refused to pay her, forced her to do demeaning things, etc.
Ben looked into this because Alice/Chloe spent 1.5 years attacking us - and we didn’t defend ourselves by sharing our side. People only heard stories like the one above.
No wonder people treated us like lepers, disinvited us from events, etc. Can you imagine what that would feel like? For 1.5 years, I’ve lived with fear and confusion (“Why is she still attacking me?”), sleepless nights, fear of what Alice’s next attack might be (justified, apparently), and a sludgy, dark, toxic desolation in my chest at being rejected by my community based on false rumors.
The only thing that gave me hope during this entire thing was believing that EAs/rationalists are good at updating based on evidence, and the truth is on our side.
What is going on? Why did they say so many misleading things? How did Ben get so much wrong?
Ben’s hypothesis - “2 EAs are Secretly Evil”: 2 (of 21) Nonlinear employees felt bad because, while Kat/Emerson seem like kind, uplifting charity workers, behind closed doors they are ill-intentioned ne’er-do-wells. (Ben said we're "predators" who "chew up and spit out" the bright-eyed youth of the community - witch hunter language.)
If what Alice and Chloe told Ben is true, then this hypothesis has merit. Unfortunately, they told him falsehoods. For instance, Alice falsely claims that she couldn’t live/work apart and yet did so for 2 of the 4 months.
Why would she say something so false that she must know is false?
Maybe they’re deliberately lying? We mostly don’t think so, because they wouldn’t keep lying about things we can easily disprove with evidence. Like, Chloe said we tricked her with a verbal contract when she knows we sent her a work contract and we recorded her interviews. So why would she say that?
Maybe they’re just exaggerating and trying to share an emotional truth? Like, Alice felt starved and uncared for, and she’s trying to share that by bending the truth (even though she knows that Kat offered to cook her food, and ended up going out to get her food even though Kat was sick also)?
The thing is, they bend the truth far beyond what anyone would consider normal. For example, with the “they starved me” thing, Alice told Drew she was “completely out of food” just one hour after Kat (also sick) had offered to cook her any of the vegan food in the house that Alice usually loved and ate every day.
This is quite extreme. And there are dozens of similar examples.
So what is going on? Below, we present relevant information to support an alternative hypothesis:
“2 EAs are Mentally Unwell”: They felt bad because, sadly, they had long-term mental health issues, which continued for the 4-5 months they worked for us.
| Relevant mental health history | - Alice has accused the majority of her previous employers, and 28 people - that we know of - of abuse. She accused people of: not paying her, being culty, persecuting/oppressing her, controlling her romantic life, hiring stalkers, threatening to kill her, and even, literally, murder. - They both told us they struggled with severe mental health issues causing extreme negative emotions for much of their lives. Alice said she’d had it for ~90% of her life. She told us that she’d been having symptoms just 4 months before joining us. But she told us then, as she tells people now, she’s totally better and happy all the time. - If she’s been suffering extreme negative emotions for most of her life, it could be that we caused the emotions this time. But it’s more likely a continuation of a longstanding issue. - She was forced to spend a month in a mental hospital. Shortly after, while still getting her bachelor’s, Alice started advertising herself as a life coach to make money. She has offered herself to EAs as a “spiritual guru” claiming she has achieved “unshakeable joy”. - During the period she started accusing us of strange things, she was microdosing LSD every day, only sleeping a few hours a night for weeks, speaking incoherently, writing on mirrors, etc. - She, sadly, claimed to have six separate painful health issues. (When she’s in pain she seems to see ill intent everywhere.) |
| Relevant instances of acting erratically | 1) Alice attempted to steal a Nonlinear project, one that she and 6 other people at Nonlinear had worked on for months. She locked us out of the project and was going around EA claiming it was solely her invention. We told her she could use it if she at least gave Nonlinear some credit for it - it would be insulting to all her colleagues who worked hard on it not to. She kept refusing to share any credit - not even a tiny mention. 2) Alice created a secret bank account and a separate organization (without telling us), and attempted to transfer $240,000 from our control despite being repeatedly told it was not her money and telling people she wasn’t sure if it was her money. However, we do not think she had malicious intent. Our best guess as to why she did this is that she was having an episode and lost touch with reality. 3) While at Nonlinear, Alice worked on a project. Then, weeks after she quit, she continued working on it without telling us, and then demanded we pay her for those weeks she worked after she quit. 4) Alice repeatedly lied about getting job offers to try to extort more money out of us. Either that, or she made them up as part of her pattern of delusions. She has claimed 4 fabricated job/funding offers that we know of. |
| Key pattern: Alice/Chloe confuse emotions for reality | Example: Alice was saying we literally made her homeless - a very serious accusation. We reminded her of the proof that this was false, and she said “It doesn’t matter, because I felt homeless.” But it really does matter. This is a key pattern of Alice/Chloe’s - they think that feeling persecuted/oppressed means they were persecuted/oppressed, even if they weren’t. |
Why share this? If we refute their claims point by point without explaining the patterns, it’s hard not to think “but they felt bad. Surely you did something bad.” There needs to be a plausible alternative hypothesis for why they felt oppressed.
This info is relevant because mental health issues, particularly having delusions of persecution, explain what happened better:
- Hypothesis 1: actual persecution
- Hypothesis 2: delusions of persecution
To support Hypothesis 2, we simply must share relevant mental health history.
Of course, just because somebody has frequent delusions of persecution doesn’t mean that they’re all false. We agree. That’s why this doc contains numerous pages of evidence to counter their unsupported claims.
And just because somebody has mental health issues doesn’t mean they’re less worthy of compassion. If they are mentally unwell, knowing that allows us to actually help them. If somebody is experiencing delusions, going after whatever “demon” they claim to see won’t actually help them.
If you learn that someone has made many false accusations, which follow a similar pattern to their previous delusions, and many are quite implausible (e.g. hiring stalkers is a weird accusation), then those patterns are relevant. And if somebody was mentally unwell most of their life, then that’s a relevant explanatory factor for why they felt bad.
Ben admitted in his post that he was warned in private by multiple of his own sources that Alice was untrustworthy and told outright lies. One credible person told Ben "Alice makes things up."
We are horrified that we have to share all this publicly, but Ben, who refused to look at our evidence, left us no choice. We do not want Alice’s accusations to destroy yet more people’s lives, and more drama is the last thing EA needs right now, so we do not intend to expand the scope of accusations in this post. But we think it’s important to share a flavor of Alice’s past, with the specifics redacted.
However, we want to make it clear: this is just the tip of the iceberg of the lives Alice has ruined.
Here is an illustration of how many people we know Alice has accused:
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [Person] of [abusing/persecuting/oppressing her]
- Alice accused [a previous employer] of [refusing to pay her, stalking her, toxic culture, making her do unethical/illegal things, assault and murder. Yes, she literally accused her former boss of murder.]
- Alice accused [a previous employer] of [abuse, toxic culture, sexism]
- Alice accused [a previous employer] of [abuse, toxic culture, doing illegal/unethical things, refusing to pay her]
- Alice accused [a previous employer] of [being a cult, toxic culture, doing illegal/unethical things]
- Alice accused [a previous employer] of [abuse]
- Alice accused [a previous employer] of [child abuse, assault, threatening to kill her]
- Alice lied about [serious thing] on her resume
- Alice lied about [serious thing] on her resume
- Alice lied about [serious thing] on her resume
- Alice lied about [serious thing] on her resume
- Alice lied about [serious thing] on her resume
- Alice lied about [serious thing] on her resume
- Alice [____] involving [police]
- Alice [____] involving [police]
- Alice [____] involving [police]
Continuing the pattern, the only public writing I can find of hers outside of social media and the forum is her publicly accusing a person of persecution.
Within weeks of joining us, she accused five separate, unrelated people of abuse. This should have been a major warning sign, but we just thought she’d been unlucky. We hadn’t known her long enough yet to spot the pattern and we were trusting.
These are just the ones we know of from a very shallow investigation. How many would we find if we spent 6 months investigating her? What if we then contacted each of the people she accused of abuse and only shared their side? What would they think of Alice?
What would they think if they heard that she was once again accusing a former employer of oppressing her?
We actually completely understand why Ben and most people believed her when she accused us of things - because we believed her too. Within just weeks of first arriving, she told us how:
- Her current employer was refusing to pay her and she’d been waiting for months for payment.
- They had “unclear boundaries” and were disorganized and unprofessional.
- They promised her control of projects then reneged later.
- Her previous employer was culty and unethical.
- Her boyfriend was trying to control her by pressuring her to stop practicing the type of poly she preferred (“no rules” relationship anarchy)
And we just believed her, because 1) we didn’t hear the other side and 2) who lies about things like that?
Also, Alice is one of the most charming people we’ve ever met. She stares deeply into your eyes and makes you feel like the most special person, like you’ve been friends forever. It’s so easy to believe her when she says these people have been being mean to her for no reason. She believes it herself and makes you feel protective of her.
We ourselves were trying to help her get paid by her “evil employer who was refusing to pay her” and congratulating her for “escaping from her culty ex-employer”.
And then she started accusing us of the same kinds of things.
Of course, she could be just very unlucky. But it’s very rare to be that unlucky. If one person is a jerk to you, then that person’s probably a jerk. If everybody’s “mysteriously mean” to you for “no reason” - she kept saying this - maybe it’s not the other people.
And anybody who knows her will notice that she appears to have endless stories of people “bullying/oppressing/mistreating” her, often for what seem to be strange reasons or no reason at all (e.g. she was “bullied” in university for “being too happy”. She almost got a kid expelled from school for this.)
Alice would randomly get texts saying “You ruined my life. I wish I had never met you.” Apparently Alice had destroyed that person’s marriage. She claimed to have done nothing wrong, as is her pattern.
We also wish we had never met Alice. She seems to hop from community to community leaving a trail of wreckage in her wake.
Shortly after being forced to spend a month in a mental hospital, while still in university, Alice started advertising herself as a life coach to make money. She said she stopped because she’d ruined multiple peoples’ lives. At least, this is what she told us.
It looks like she’s started up again. At a recent EAG she told people that she had figured out “unshakeable joy” years ago and offered to teach EAs. Just before she started accusing us of things that made no sense, she was again offering to be a “spiritual guru” to an EA in the Bahamas. She did not follow through because she spent the next months, according to her, “mentally all over the place”.
In other words, during the very period she claims she was miserable and subjected to the worst experience of her life, she was offering to teach EAs her secret to “unshakeable joy”.
Many people reached out to us privately after Ben released his article who were afraid to come to our defense publicly because it’s dangerous to defend a witch burning on a pyre lest ye be accused of being a witch yourself. Many EA leaders are quietly keeping their heads down since FTX, because visibility in EA has become dangerous.
We had to redact quotes here because, as one person said, “I’m worried Alice will attack me like she’s attacking you.”
Alice has similarities to Kathy Forth, who, according to Scott Alexander, was “a very disturbed person” who, multiple people told him, “had a habit of accusing men she met of sexual harassment. They all agreed she wasn't malicious, just delusional.” As a community, we do not have good mechanisms in place to protect people from false accusations.
Scott wrote a post saying that some of Kathy's accusations were false “because those accusations were genuinely false, [and] could have seriously damaged the lives of innocent people.”
Of note, we tried to handle this like Scott, who minimized what was shared in public “in order to not further harm anyone else's reputation (including Kathy's)”. This is why we avoided publicly saying anything for the last 1.5 years. Also, once we learned about her history of accusations, we were terrified of Alice, because… well, wouldn’t you be?
Multiple people have actually recommended I get a restraining order against her. Unfortunately, given her previous behavior, it’s unlikely that would help.
Scott said: “I think the Kathy situation is typical of how effective altruists respond to these issues and what their failure modes are. … the typical response in this community is the one which, in fact, actually happened - immediate belief by anyone who didn't know the situation and a culture of fear preventing those who did know the situation from speaking out. I think it's useful to acknowledge and push back against that culture of fear.”
“Suppose the shoe was on the other foot, and some man (Bob), made some kind of false and horrible rumor about a woman…Maybe he says that she only got a good position in her organization by sleeping her way to the top. If this was false, the story isn't "we need to engage with the ways Bob felt harmed and make him feel valid." It's not "the Bob lied lens is harsh and unproductive". It's 'we condemn these false and damaging rumors.'"
We need to carefully separate two questions: 1) is Alice deserving of sympathy? and 2) did Alice spread damaging falsehoods?
For 1) Yes, we feel sympathy for Alice. Seeing secret ill-intent everywhere must be horrible. We hope she gets professional help.
But if she’s going around saying that we forced her to travel with illegal drugs, we starved her, we isolated her on purpose, we refused to pay her, and other horrible false things, then the story isn’t that she felt isolated or she felt scared, the story is that she told false and damaging rumors.
And we need to not mix up our laudable compassion for all with our need to set up systems to prevent false accusations from causing massive harm. In addition to a staggering misallocation of the community’s time, Alice, Ben, and Chloe hurt me (Kat) so much I couldn’t sleep, I couldn’t eat, and I cried more than any other time in my life. My hands were shaking so badly I couldn’t type responses to comments. I wouldn’t wish this experience on anyone.
Why didn’t Ben do basic fact-checking to see if their claims were true? After all, multiple people warned him.
In sum, Ben appears to have believed Alice/Chloe, unaware of their history, prematurely committed to the “2 EAs are Secretly Evil Hypothesis”, then looked exclusively for confirming evidence.
Crucially, by claiming that they were afraid of retaliation, despite the fact that they’d been attacking us for 1.5 years without us retaliating, Alice/Chloe convinced him that he shouldn’t give us time to provide evidence, that he should just take them at their word. As a result, he shot us in the stomach before hearing our side.
His “fact-checking” seems to have been mostly talking to Alice and Chloe, Alice/Chloe’s friends, and a few outsiders who didn’t know much about the situation.
Imagine applying Ben’s process after a messy breakup: “I heard you had a bad breakup with your ex. To find the truth, I’m going to talk to your ex and her friends and uncritically publicly share whatever they tell me, without giving you the chance first to provide counterevidence, because they told me I shouldn’t let you. Also, I paid them a total of $10,000 before looking at your evidence, so it may be difficult to convince me I wasted all that time and money.”
One example of Ben’s bias: one source told Ben lots of positive things about us. How much of that did Ben choose to include? ~Zero.
A few more examples:
| Claim | What actually happened |
|---|---|
| Ben implied: Kat/Emerson didn’t write things down because they’re dangerously negligent | Actually, when we heard this, we said “What? Yes we did. Just give us time to show you.” (He did not.) |
| Ben: After my call with Kat/Emerson I sent over my notes. Emerson said “Good summary!” (implying Kat/Emerson largely agreed with the facts of the article) | - We were horrified to see that Ben cut off the second part of Emerson’s statement - “Some points still require clarification” and “You don't want to post false things that if you'd waited a bit, you'd know not to include. This draft is filled with literally dozens of 100% libelous and false claims - and, critically, claims that we can prove are 100% false.” - This was especially damaging because many people thought the story was complete, instead of just being one side. People were so angry at us for things “we admitted to” (we didn’t!) |
| Ben: these are consistent patterns of behavior, so you should avoid Nonlinear because of these patterns | - Ben was so committed to his hypothesis, he didn’t speak to any of the people who worked for us in the 1.5 years since Alice/Chloe left to see if any of these patterns were actual patterns. - 100% of them left overall positive reviews. |
| Ben: Alice was the only person to go through their incubator program | - False. Because Ben’s “fact-checking” appears to have mostly consisted of asking Alice/Chloe’s friends, he thought Alice was the only person we incubated. Actually, there were 6 others, 100% of whom reported a positive experience. He talked to 0 of them. - Alice & Chloe knew this was false and did not correct it. |
| Ben: Emerson’s previous company had a bad culture | - Actually, people liked working for Emerson. His anonymous Glassdoor ratings were similar to the 57th best place to work. - However, not only did Ben not apologize, but despite the facts changing massively, he kept the vibe/conclusion the same. And still, after all this, he included false information! - Side note: the EA Forum, months later, banned someone for sockpuppeting the original unsubstantiated gossip EA Forum thread (based on Alice/Chloe’s falsehoods) - the sockpuppets created even more false consensus. |
Acknowledging the elephant in the room: a number of reviewers advised us to at least point to the common hypothesis that Ben white-knighted for Alice too hard, given both their personalities and Alice’s background. We’ll leave the pointer, but don’t think it’s hugely appropriate to discuss further.
Longer summary table
Below you’ll find another longer summary. It’s not comprehensive - the full appendix correcting all the falsehoods (200+ pages) is here. We cover many things in the full appendix that aren’t linked to here.
It’s messy, sorry. We were originally going to literally go sentence by sentence to point out all the inaccuracies, then that got too complicated. There were just too many because Ben didn’t wait to see our evidence. Many claims are partially rebutted in different places and it’s hard to see the big picture.
Ben Gish-galloped us by just uncritically sharing every negative thing he heard without fact-checking. Gish galloping means “overwhelming your opponent by providing an excessive number of arguments with no regard for the accuracy or strength of those arguments. Each point raised by the Gish galloper takes considerably more time to refute or fact-check than it did to state in the first place, which is known as Brandolini's law.”
Read on to consider which hypothesis seems more plausible:
2 EAs are Secretly Evil Hypothesis: 2 (of 21) Nonlinear employees felt bad because, while Kat/Emerson publicly seem like kind, uplifting charity workers, behind closed doors they are ill-intentioned ne’er-do-wells. (Ben said we're "predators" who "chew up and spit out" the bright-eyed youth of the community - witch hunter language.)
2 EAs are Mentally Unwell Hypothesis: They felt bad because, sadly, they had long-term mental health issues, which continued for the 4-5 months they worked for us.
| Claim | What actually happened |
|---|---|
| “Chloe was only paid $1k/month, and otherwise had many basic things compensated i.e. rent, groceries, travel” Ben describes this as - “next to nothing” and “tiny pay” (they kept implying they were only paid $1,000, so many people walked away with that impression) | - We offered a compensation package: all-expenses-paid (jetsetting around the Caribbean) plus a $1,000 a month stipend on top, working for a charity, as a recent college grad. - We estimated this would be around $70,000, but there was never a plan to make it “add up”. It was simple: “We pay for everything - you live the same lifestyle as us.” - This is “next to nothing”? What happened to EA? - She was living what for many is a dream life. She was so financially comfortable she didn’t even have to think about money - She somehow turns this into blaming Emerson for her forgetting about her own savings. We don’t think she had to spend a penny of her stipend and 100% of it went into her savings. |
| Alice: I was paid next to nothing! | - Alice was in the top 1-0.1% of income globally - working for a charity! - yet she was paid “next to nothing”. - She was allowed to choose how much she got paid and she chose $72,000, annualized. She also had a separate business making, according to her, around $36,000 a year. That adds up to $108,000 annualized income. - Even before she got the pay raise just 3 months into her job, her comp was $12k stipend, room, board, travel, and medical adding up to around $73k total per year, plus $36k per year from her business. That’s $109k total, living virtually the same lifestyle as us. - This was a huge increase in pay for her - her previous jobs were ~minimum wage. |
| Alice: They asked me to help around the house even when I was sick. This is abuse! | She neglected to mention that… |
| Chloe’s first story: I was packing and Kat/Emerson just sat there on their laptops, working on AI safety instead of helping | This was her job. She was explicitly hired to do “life ops” so that Kat and Emerson could spend more time on AI safety. She knew this before she took the job and we have interview transcripts proving it. |
| Chloe’s second story: Emerson snapped at me | Emerson shouldn’t have done that. But also, Chloe snapped at Emerson sometimes too. It was a really stressful travel day for everybody. This was not an ongoing pattern and the only time we recall this happening. Kat checked in the next day and Chloe said she actually loved the chaos of traveling and it was just that she’d had a bad sleep the night before. |
| Chloe’s third story: Kat threw out all of my hard work right in front of me, showing that my work hours are worth so little | - Chloe got the wrong product and Kat just hadn’t told her till then because she was trying to protect her feelings since she’d worked so hard on it. Chloe knew this and still published this story. - Chloe got so much appreciation from Kat that Chloe actually asked her to do it less. |
| Chloe: I had unclear work boundaries and was pressured into working on a weekend (implies this was a regular occurrence) | “My boss offered me an all-expenses-paid trip to the Caribbean island St. Barths, which required one hour of work to arrange the boat and ATV rentals (for me to enjoy too). But it was one hour on a weekend, so I complained, and it never happened again.” |
| Chloe: I was put into complex situations and told I could do it | - This is not actually bad - We said in the job ad that you would be a good fit if "It’s hard to phase you. You like the challenge of tackling complex problems instead of feeling stressed out about them" - This is some of the best public evidence of her being mentally unwell. These are not overwhelming tasks for most people. |
| Alice: they told me not to talk to locals! | Strange accusation. She asked “How can I increase my impact?” and we said, “you might try spending less time with random bartenders and more time with all the high-level EAs Kat introduced you to”. She continued to talk to locals all the time she was with us, which was totally fine by us. |
| Alice: the Productivity Fund ($240,000) was mine | - We have in writing in multiple places that Alice was the project manager of the Productivity Fund, a project under Nonlinear. - We never did anything to make her think it was hers. She was still attending Nonlinear weekly meetings. We were still reimbursing her for expenses. We never sent her the money. We never sent her a grant agreement. We told her to not make a separate bank account for the money (she did anyway in secret). We threw a party and toasted her promotion (not grant or new charity) in front of many people. We told her if she wanted to do something outside of the scope of the project, she’d have to get our permission. Chloe, our operations manager, was handling all of her ops. - The only thing she has to show it was “hers” is her word, where she remembers a conversation very differently than Emerson or Kat. - This is one of at least 4 separate times we know of where she’s said she was offered money/employment when she wasn’t. |
| Alice/Chloe complain about “unclear boundaries” as if we kept them unclear as part of a nefarious plot. | If they wanted clear boundaries, they should have applied to Bureaucracy Inc, not a tiny nomadic startup with a tiny budget. Our job ad said to expect “flexibility, informality” and “startup culture”. |
| Chloe: A tiny startup with a tiny budget did very little accounting! | - Chloe was literally hired to do accounting - We did all of the accounting that we are legally and practically required to do, to the best of our knowledge |
| Chloe: I gained no professional advancement from my 5 months there! | A strange accusation given that: … |
| Alice: I couldn’t work for months afterward, I was so upset. | - We have multiple text messages in which she told us that she’d been working that entire time. She told us she hadn’t even taken weekends off. - Perhaps relevant: she was trying to get more money from us by saying she’d continued working. But when talking to Ben, she’d get money by saying that she hadn’t worked. - Either way, she lied to Ben or she lied to us. |
| Alice/Chloe: Emerson told us stories of him being a shark | - Emerson shared stories about how he almost died in shark attacks to help Alice/Chloe defend themselves against shark attacks. They then painted Emerson as a shark. - A different Nonlinear team member heard the same stories, but spent weeks taking notes and was grateful! |
| Alice: I got constant compliments from the founders that ended up seeming fake. | Strange accusation. Alice was in a dark place and interpreted compliments as evidence that Kat/Emerson were secretly evil. |
| Alice: Emerson said, "how much value are you able to extract from others in a short amount of time?" - he openly advocates exploiting people! | He said “to have productive conversations, ask good questions to maximize learning/value per second” |
| Chloe: I was pressured into learning to drive | - Chloe was an enthusiastic, consenting adult who wanted the independence it gave her (“I was excited to learn how to drive”) - She regularly drove on her own for fun - She was told many times that she didn’t have to drive if she didn’t want to. We’d just pay for Ubers for her. She always insisted she did. - We spent 1 hour a day for 2 months patiently teaching her in parking lots. She had tons of supervised practice. - Ben said she faced “substantial risk of jail time in a foreign country” (sounds terrifying). False: it was just a $50 fine, the same amount you’d be fined for jaywalking (we told him this. The article is filled with falsehoods he refused to correct). - She once decided to stop driving. She didn’t even tell Kat/Em because it was so not a big deal. She just told Drew, and he was like “cool”. She started driving around a week later because she missed driving. Drew didn’t talk to her about it and Em/Kat didn’t even know, so there was no pressure to start again. |
| Ben: Alice/Chloe are “finally” speaking out. They couldn’t speak out for fear of retaliation and didn’t want anyone to know until now. | - False. Alice/Chloe spent the last 1.5 years telling many people in EA, which seriously damaged Nonlinear's reputation. - Chloe and Alice have been attacking us that whole time - without us retaliating against them! They report being worried about us hiring stalkers, filing spurious lawsuits, or taking other legally dubious actions. None of those things happened. |
| Ben: 12 years ago in a dispute Emerson used “intimidation tactics” | - Someone tried to use a legal loophole to steal Emerson’s company, which would have thrown his 25 employees onto the street. Emerson said he would countersue and actually share his side (he hadn’t yet). Ben frames Emerson as the evil attacker, not the defender. Everything Emerson does is “intimidation tactics”; it doesn’t matter if he’s the one getting knifed in the chest. - This is another instance of the double standard: somebody is allowed to sue Emerson & share their side, but if Emerson does the same, Ben frames it as unethical and "retaliatory". |
| Ben: “I think standard update rules suggest not that you ignore the information, but you think about how bad you expect the information would be if I selected for the worst, credible info I could share” | - The most common criticisms ex-employees have of their orgs is low pay, feeling not valued enough by management, and a “toxic” work culture. - Most of Ben's article is totally run-of-the-mill criticisms (but presented as very serious) - Base rate: ~50% of people feel undervalued at work. - Base rate: 71% of EAs claim to have a mental illness. - The probability that 2 (of 21) people who work for any EA org felt this way is extremely high |
| “But you threatened to sue Lightcone if they didn’t give you a week to gather your evidence” | - We did that because we had tried everything else, yet Ben kept, unbelievably, refusing to even look at our evidence. What were we supposed to do? He was about to publish reputation-destroying things he would know were false if he just waited to see the evidence. - Despite the fact that he published numerous things he knew were false (e.g. verbal agreement, accounting, vegan food, legal medicine, & many more), we decided not to sue because we think that would increase p(doom). |
| What are we doing differently in the future? | - We’ve spent ages analyzing this and trying to figure out what happened and what we can do differently. - We asked Alice and Chloe multiple times to share their side and do some conflict resolution and they refused - The accusations are almost entirely false, misleading, or catastrophizing normal things, so we cannot improve on that front. Nevertheless, some things we are doing differently are: - Not living with employees & all employees being remote. - Not using that compensation structure again. - Hiring assistants who’ve already been assistants, so they know they like it. |
| Alice/Chloe: Nonlinear, a charity startup, had an entrepreneurial and creative problem-solving culture. However, this is actually a bad thing, because sometimes that leads to people feeling pressured and overwhelmed | - Accurate. We did have a culture of “being entrepreneurial and creative in problem-solving”. The fact that they applied to work at a startup and considered this to be bad is strange. Others have said this is the best part about being around us, our “contagious mindset around problem-solving”. - The things they felt “pressured” into are disproven elsewhere. Evidence/read more, evidence #2, evidence #3, evidence #4, evidence #5 |
| “But Alice seems so open and nice” | Why does Alice get away with telling falsehoods so much? - It takes months to catch her in enough falsehoods to see the pattern. In the meantime, she seems so joyful. - She bounces from jobs/communities quickly. Her longest job is 13 months, so by the time you start catching on, she’s already gone. - She (well, part of her) believes what she says and she’s genuinely kind, so she’s convincing. - She builds trust by quickly telling you things that seem very personal - “wow, she must really like and trust me to be telling me all this!” - about how other people have oppressed her, which triggers protective instincts. |
To many EAs, this would have been a dream job
Alice/Chloe/Ben painted a picture of Alice/Chloe having terrible jobs and barely surviving those few months they were with us. Now, I do not deny that Alice and Chloe suffered, and I deeply wish they hadn’t. But a lot of people would have loved these jobs. Look at the job ad - “you get paid to see the world and live in endless summer, since we only stay in places where it’s warm and sunny.”
Clearly, aspects of the job didn’t work for Alice (who wanted 100% control and nothing less) or Chloe (who found being an assistant beneath her). However, I’d like to describe the job to the people who would have liked it.
Chloe beat out 75 other EAs who applied for her job (she described herself as “overqualified”) - getting an EA job is hard.
Imagine a job where you’re always in beautiful, sunny, exotic places. Part of the year is spent in various EA Hubs: London, Oxford, Berkeley, San Francisco. Part of the year you explore the world: Venice, the Caribbean, Rome, Paris, the French Riviera, Bali, Costa Rica.
You’re surrounded by a mix of uplifting, ambitious entrepreneurs and a steady influx of top people in the AI safety space. In the morning, you go for a swim with one of your heroes in the field. In the evening, a campfire on a tropical beach. Jungle hiking. Adventure. Trying new foods. Surfing. Sing-a-longs. Roadtrips. Mountain biking. Yachting. Ziplining. Hot tub karaoke parties. All with top people in your field.
Your group has a really optimistic and warm vibe. There’s this sense in the group that anything is possible if you are just creative, brave, and never give up. It feels really empowering and inspiring.
Chloe’s job was a lot of operations people’s dream job. She got to set up everything from scratch, instead of having to work with existing sub-optimal systems. She was working on big, challenging operations puzzles that were far above the usual entry-level admin stuff that you’d get as a person who just graduated from uni.
About 10% of the time was doing laundry, groceries, packing, and cooking - and she has to do many of those things for herself anyways! At least this is on paid time, feels high impact, and means she’s not sitting in front of the computer all day. Also, everybody starts somewhere, and being in charge of setting up all of the operations for an org is a pretty great place to start, even if it does also include doing some pretty simple tasks. As a job straight out of university, this is a pretty plush job. And getting a job in EA is hard.
And she gets two hours a day of professional development. Paid! She spends the time learning things like management, lean methodology, measuring impact, etc. She gets to choose basically whatever it is she wants to learn. Getting paid to read whatever you want for 2 hours a day would be a dream for many EAs.
Even more people would have loved Alice’s job, especially entrepreneurial types. When Alice arrived, just as a friend, she was encouraged to read a book a day on entrepreneurship, to quickly skill up. She started working with us building a product that seemed likely to be very high impact. Especially since it was a project that was meant to help do decentralized, automated prioritization research, so she’d be able to use the product herself to find the idea she wanted to start.
She had tons of freedom on strategy and she was very quickly given more responsibility. Within a few weeks of starting, she was managing an intern. She received hours of mentorship from experienced entrepreneurs every single day. She was quickly introduced to a huge percentage of all the major players in the field, to help her design the product better.
Then, within just a few months of starting, she was given nearly complete control of $240,000 - so much control that she could also choose how much she got paid! Imagine being quickly given so much financial and strategic freedom. As long as it falls within the scope of the department, you have control over nearly a quarter million dollars. Whatever you want to pay yourself out of that budget, you can. If you do a good job, that $240,000 could rapidly expand to $2-3 million a year.
Especially given that there’s a chance in half a year or so that you could spin out and be an entirely separate organization. Or hand it off to somebody else after gaining invaluable experience launching a really big project, all the while with the guidance and guardrails of an experienced entrepreneur.
Sure, it’s an unorthodox payment arrangement. But, man, you are certainly living a glamorous lifestyle. Always in sunny, exotic places. Living in beautiful homes. Going on adventures in bioluminescent bays, yachting, kayaking, and snorkeling in tropical reefs. And you’re living that glam life while working for a charity. Not bad.
And, I mean, you had been considering living at the EA Hotel, where you’d be living in much less nice conditions, wouldn’t see the sun for half the year, and wouldn’t get nearly the exposure to experienced entrepreneurs and top people in the field. Maybe you’d get a stipend of max $150 a month.
Anyways, for you, it’s not about the money. You’re an aspiring charity entrepreneur, for goodness sake! That’s not a career you go into for the money. It’s about the impact and the life you’re living. And you want a job where you’re seeing the world and doing your best to save it.
Sure, maybe when you’re older, you’ll get a job that pays more and stays in one place so you can put down more roots, but right now you’re young. You want to explore. You’re living the dream and seeing the world.
You could maybe get a job with higher pay, though your previous jobs were ~minimum wage, and Nonlinear is paying you a lot more than that, so maybe not. But none would involve the travel. None would involve the adventure.
You want to go snorkeling in tropical reefs with EA leaders but also work in Oxford and have deep conversations with your favorite EA researchers at lunch. You want to pet the cats in the Grand Bazaar in Istanbul while you’re also building something really high impact. You want to be investing so much into your personal growth that you get to spend a quarter of your time just learning. You want adventure and impact.
Again - this doesn’t mean everybody would like the job. However, to paint this job as “inhumane” or as if Alice was “a fully dependent and subservient house pet” - is a dark, paranoid view of the warm, positive, uplifting environment we created.
Alice was constantly given more and more responsibility. She was given more freedom than almost any EA job and then told everybody she was kept in metaphorical shackles. She made Ben (and everybody else in the community she spent the last year telling) think that she was essentially a slave, kept under the oppressive hold of a controlling and isolating group of abusers.
[Emerson’s note: Kat paid herself $12,000 a year - half of minimum wage - for most of her charity career because she took the drowning child argument seriously. Not $1,000 a month on top of all-expenses-paid travel, adventures, villas, and restaurants - $1k/month total. In Canada’s most expensive city. Sharing a single always-damp towel with her partner. Kat doesn’t usually bring this up because she doesn’t want to make people feel bad who won’t or can’t do the same, but I think it’s important information about her character. Say what you will about her, but she deeply cares about altruism.]
But through some combination of mental illness, daily LSD use, and a society that uncritically rewards anyone claiming to be a victim, she turned financial freedom into financial servitude. She turned gratitude into manipulation.
Yes, Alice suffered. Chloe did too. Nobody is doubting that. The question is what caused the suffering. Because for most people, having to work for an hour on a weekend, then clearing it up with your boss so that it never happens again, isn’t a cause for months of depression.
For most people, having a separate business bringing in $3,000 a month and being able to choose your own pay is financial freedom, not servitude.
For most people who applied to these jobs, they would be considered great jobs. And if they found out they didn’t like it, they’d just quit and do something else. They wouldn’t demand a public lynching.
Sometimes people are depressed and see everything as bad and hostile. Sometimes people are sleep deprived, taking LSD every day, in chronic pain, and start seeing plots everywhere. Sometimes people have been struggling with mental health issues for their entire life.
This was not an objectively bad job that caused them psychological harm. It was a woman who kept forgetting she was an assistant and feeling outraged when asked to do her job. She felt she was overqualified and turned that resentment on her employers. It was a woman who’s struggled with severe mental illness for over 90% of her life and continued to do so while she was with us.
Sharing Information on Ben Pace
Since the article was published, an alarming number of people in the community have come forward to report worrying experiences with Ben Pace, saying they feel frightened to speak out because of what Ben might do to them.
As just one example, one woman had a deeply traumatic experience with Ben but is afraid to say anything, because he runs LessWrong and is surrounded by so many powerful people in the community who would defend him. She’s worried that if she comes forward, he’ll use his power to hurt her career, either directly, by attacking her again, or indirectly, by making sure none of her posts get onto the front page. (We’ve heard multiple reports of people having a conflict with one of the Lightcone team and then suddenly their posts just never seem to be on the front page anymore. We don’t know if this is true.)
She asked me to not share it with Ben because she’s frightened of him, but she said it was finally time to be strong and speak up now, as long as she was fully anonymized. She couldn’t live with herself if she allowed another person to be hurt by Ben the way Ben hurt her. I ask you to please respect her privacy and if you know her, not bring this up unless she does.
She’s been struggling with mental health issues since he attacked her, unable to sleep or eat. She still, after all this time, just randomly breaks down crying on sidewalks. She even considered leaving effective altruism. She no longer feels safe at Lightcone events and no longer goes to them, despite missing the many good people in the rationalist community. It’s shaken her trust in the community and talking about it still makes her visibly upset.
She told me to not talk to Ben about it, because he takes absolutely no responsibility for the harm he’s done, and has explicitly told her so. And he shows a friendly face to people, which is how he gets away with it, all the while professing simply an interest in truth. But he’ll be smiling at you and friendly, all the while having the intention to stab you in the back. One source reported that “Ben is a wolf in sheep’s clothing.”
People who knew what happened to this woman confirmed that what Ben had done to her was “horrifying” and “they couldn’t believe he would do that to a person”. They were shocked at his lack of concern for her suffering and confirmed that he would probably really hurt her career if she came forward with her information.
She knows of at least one other person who’s had really worrying experiences with him, where deep and preventable harm was happening and he just didn’t seem to care. He actually blamed the person who was being hurt! She hasn’t brought it up with the person much because she doesn’t want to stir up old hurts. She can tell it still hurts them, but they’ve managed to move on and remember the things they really care about.
She had heard about what had happened to this person before, but she thought it was probably just a one-off thing and it wouldn’t happen to her. She wishes she had paid more attention so she could have avoided her own traumatic experience. She’s still suffering. She’s still lying awake each night, replaying, over and over, the nightmare of what Ben did to her.
Another person reports “I wish I had never met Ben. He hurt me more than I even thought was possible. I highly recommend not being friends with him and if you see him at a party, I would just subtly avoid him. I hope he gets better and stops doing to others what he did to me, but as far as I’ve heard, he’s still completely in denial about the harm he’s caused and has no intention of changing.”
---
The information above is true to the best of my knowledge. What other worrying things might I find if I spent months investigating like Ben did?
However, this is completely unfair to Ben. It’s written in the style of a hit piece. And I believe you should not update much on Ben’s character from this.
- Like Ben did to us, I did basically no fact-checking.
- Like Ben did to us, I assumed ill-intent.
- Like Ben did to us, I unfairly framed everything using emotional language to make Ben seem maximally nefarious.
- Like Ben did to us, I uncritically shared anonymous accusations. Since they’re anonymous, Ben can’t even properly defend himself, which is why courts don’t accept anonymous hearsay.
- Ask legal history scholars what happens when courts allow anonymous hearsay: kangaroo courts and mob justice.
- Like Ben did to us, I didn’t give him a proper chance to respond to these accusations before publishing them.
- I mentioned none of his many very good qualities.
- I interviewed none of the people who like Ben, and exclusively focused on the testimonies of a small number of people who don’t like him.
- I even left out the good things these people said about Ben, like he did to us. It reads very differently when it’s not just negative.
- I used culture-war optimized language (victim/oppressor) to turn people’s brains off.
- I used wording that was technically accurate but implied “a lot of people are saying”, like Ben did to us.
I’m not yet worried about these “patterns” about Ben because I don’t know if they are patterns. I haven’t heard his side. And I refuse to pass judgment on someone without hearing their side.
Further, through using emotional and one-sided language, I made it sound like it was incredibly obvious that what Ben did was awful and you’d be a monster to disagree. However, given what I know about these allegations, I think 35-75% of EAs would think that they’re not nearly as bad as the witnesses made them out to be. The remaining 25-65% would think it was clearly and deeply unethical. It would depend on each allegation and how it was presented.
It would be a matter of debate, not a matter of public lynching.
At least, it would be if we presented it in an even-handed manner, investigating both sides, looking for disconfirming evidence, and not presuming guilt until proven innocent.
Also, in case you’re worried about these people, they all say they’re OK. All of the situations are either being taken care of or have ended; they’re no longer suffering, and they do not want to pursue further action to prevent Ben from doing it to other people.
I could do this for anybody. Just to give one example: almost everybody has had “bad breakups” and if you only speak to “disgruntled exes” you will get a warped, distorted view of reality.
I don’t think Ben should even have to respond to these. It would also be a very expensive use of time, since in his follow-up post, he said he’s now available for hire as an investigative journalist for $800,000 a year.
At that hourly rate, he spent perhaps ~$130,000 of Lightcone donors’ money on this. But it’s more than that. When you factor in our time, plus hundreds/thousands of comments across all the posts, it’s plausible Ben’s negligence cost EA millions of dollars of lost productivity. If his accusations were true, that could have potentially been a worthwhile use of time - it's just that they aren't, and so that productivity is actually destroyed. And crucially, it was very easy for him to have not wasted everybody’s time - he just had to be willing to look at our evidence.
Even if it was just $1 million, that wipes out the yearly contribution of 200 hardworking earn-to-givers who sacrificed, scrimped and saved to donate $5,000 this year.
I am reminded of this comment from the EA Forum: “digging through the threads of previous online engagements of someone to find some dirt to hopefully hurt them and their associated organizations and acquaintances is personally disgusting to me, and I really hope that we don't engage in similar sort of tactics…though I don't think it's a really worry because the general level of decency from EAs at least seems to be higher than the ever lowering bar journalists set."
As a community, if we normalize this, we will tear ourselves apart and drown in a tidal wave of fear and suspicion.
This is a universal weapon that can be used on anybody. What if somebody exclusively talked to the people who didn’t like you? What if they framed it in the maximally emotional and culture-war way? Have you ever accidentally made people uncomfortable? Have you ever made a social gaffe? Does the idea of somebody exclusively looking for and publishing negative things about you make you feel uneasy? Terrified?
I actually played this game with some of my friends to see how easy it was. I tried to say only true things but in a way that made them look like villains. It was terrifyingly easy. Even for one of my oldest friends, who is one of the more universally-liked EAs, I could make him sound like a terrifying creep.
I could do this for any EA org. I know of so many conflicts in EA that if somebody pulled a Ben Pace on, it would explode in a similar fashion.
But that’s not because EA orgs are filled with abuse. It’s because looking exclusively for negative information is clearly bad epistemics and bad ethics (and so is not something I would do). It will consistently be biased and less likely to come to the truth than when you look for good and bad information and try to look for disconfirming evidence.
And it will consistently lead to immense suffering. Knowing that somebody in the community is deliberately looking for only negative things about you, then publishing it to your entire community? It’s a suffering I wouldn’t wish on anybody.
EA’s high trust culture, part of what makes it great, is crumbling, and “sharing only negative information about X person/charity” posts will destroy it.
----
In the preceding pages and our extensive appendix we presented evidence comparing two competing hypotheses:
2 EAs are Secretly Evil Hypothesis: 2 (of 21) Nonlinear employees felt bad because, while Kat/Emerson seem like kind, uplifting charity workers, behind closed doors they are ill-intentioned ne’er-do-wells.
2 EAs are Mentally Unwell Hypothesis: They felt bad because, sadly, they had long-term mental health issues, which continued for the 4-5 months they worked for us.
Below we share concluding thoughts.
So how do we learn from this to make our community better? How can we make EA antifragile?
Imagine that you are a sophomore in college. It’s midwinter, and you’ve been feeling blue and anxious. You sit down with your new therapist and tell him how you’ve been feeling lately.
He responds, “Oh, wow. People feel very anxious when they’re in great danger. Do you feel very anxious sometimes?”
This realization that experiencing anxiety means you are in great danger is making you very anxious right now. You say yes. The therapist answers, “Oh, no! Then you must be in very great danger.”
You sit in silence for a moment, confused. In your past experience, therapists have helped you question your fears, not amplify them.
The therapist adds, “Have you experienced anything really nasty or difficult in your life? Because I should also warn you that experiencing trauma makes you kind of broken, and you may be that way for the rest of your life.”
He briefly looks up from his notepad. “Now, since we know you are in grave danger, let’s discuss how you can hide.”
Jonathan Haidt, The Coddling of the American Mind
EA is becoming this therapist.
EA since FTX has trauma. We’re infected by a cancer of distrust, suspicion, and paranoia. Frequent witch burnings. Seeing ill-intent everywhere. Forbidden questions (in EA!) Forbidden thoughts (in EA!)
We’re attacking each other instead of attacking the world’s problems.
Anonymous accounts everywhere because it’s not safe anymore, too easy to get cancelled.
People afraid to come to the defense of the accused witch lest they be accused (as Scott Alexander said).
High impact people and donors quietly leaving, turned off by the insularity and drama.
Well, did a bunch of predators join overnight or is it more that we have trauma?
If you were new to EA and you looked at the top posts of all time and saw it was anonymous gossip from 2 (of 21) people who worked for a tiny charity for a few months, what would you think this community values? What is its revealed preference?
Would that community seem healthy to you? If you weren’t already part of this community, would that make you want to join?
People spent hours debating whether a person in a villa in a tropical paradise got a vegan burger delivered fast enough - would you think this community cared about scope sensitivity and saving the world (like we normally do)?
“First they came for one EA leader, and I did not speak out --
because I just wanted to focus on making AI go well.
Then they came for another, and I did not speak out --
because surely these are just the aftershocks of FTX, it will blow over.
Then they came for another, and I still did not speak out --
because I was afraid for my reputation if they came after me.
Then they came for me - and I have no reputation to protect anymore.”
So, what do we do? We have a choice to make:
Are we fragile - continuing to descend into a spiral of PTSD madness with regular lynchings?
Are we resilient - continuing to do good despite the trauma?
Or are we antifragile - can we experience post-traumatic growth and become stronger?
Can this be the last EA leader lynching, and the beginning of the EA community becoming stronger from what we’ve learned post-FTX? If we want to do the most good, we must be antifragile.
Alice, Chloe, and Ben mean well and are trying to do good, so we will not demand apologies from them. We are all on the same team. We wish them the best, we hope they’re happy, and we hope they learn from this.
As Tim Urban of Wait But Why said: “In a liberal democracy, the hard cudgel of physical violence isn't allowed. You can't burn villains at the stake. But you can burn their reputation and livelihood at the stake. This is the soft cudgel of social consequences. It only works if everyone decides to let it work. If enough people stand up for the target and push back against the smear campaign, the soft cudgel loses its impact.”
Conclusion: a story with no villains
I wish I could think that Alice, Ben, and Chloe were villains.
They hurt me so much, I couldn’t sleep. I cried more than any other time in my life.
My hands were shaking so badly I couldn’t type responses to comments, and people attacked me for this, saying my not responding immediately was evidence I was a witch.
Alice, Ben, and Chloe show absolutely no remorse and I don’t predict they’re going to stop. They’re in too deep now. They can’t change their minds.
Although I certainly hope they do. If they updated I think the community would applaud them, because that takes epistemic courage similar to Geoffrey Hinton updating on AI.
And yet, despite all the harm they’ve done to me and the community, I can see their good intentions clear as day. So why are they hurting us if they have such good intentions?
Most harm done by good people is either accidental or because they think they’re fighting the bad guys. And they’ve full-on demonized us.
Demonizing somebody is the best way for good people to hurt other good people. Hence them calling us “predators” going after the “bright-eyed” youth of the community, “chewing them up and spitting them out”. This is the language of a witch hunter, not a truthseeking rationalist.
Chloe explicitly says she can’t empathize with us at all. Reflect on this.
I don’t think they’re villains. But they think we are. And you’re allowed to do all sorts of things to people if they’re bad.
And that’s just what happened. Alice/Chloe had been telling everyone, Ben heard about it, and… monsters don’t deserve fair trials! They’ll just use their time to manipulate the system. And the two young women were afraid of retaliation!
Sure, they’d been telling lots of people in the community their false narratives for over a year and none of their strange fears of us “hiring stalkers” or “calling their families” had happened. But that doesn’t matter. You don’t stop while saving a community to check and see if there’s actually a witch. He’s the hero saving the collective from the nefarious internal traitors who must be purged.
Chloe isn’t a villain. She’s a woman who didn’t like her entry-level job and wanted more money. She was a fresh graduate who felt entitled to something better. She struggled with mental health issues and blamed her feelings of worthlessness and overwhelm on Emerson and me. She took totally normal things and catastrophized them. Her story probably wouldn’t have been a scandal if it weren’t for our community’s PTSD around FTX.
Alice isn’t a villain. She’s an incredible human being who has struggled with mental health issues her entire life, and one of the symptoms is delusions of persecution - people trying to control her. This is why we’re #27 and #28 on her list of 28 people she’s accused of abuse (that we know of).
Imagine being able to choose how much you got paid, having a whole separate income stream (unrelated to your job), and yet feeling financially controlled. Imagine seeing ill intentions everywhere.
That sounds horrible. I genuinely hope she gets the help she needs.
And finally, we’re not villains either. We paid our team what we said we’d pay them. We set it up so that they socialized with more people than the average person. We valued their time so much that we paid for Chloe to spend two hours a day on professional development. I expressed gratitude to Chloe so often that she asked me to stop sharing it as much. When Alice asked for a raise 3 months into her job, we let her choose her pay. We continue to have good experiences with the vast majority of people we work with.
We were not faultless. Emerson should not have snapped on that travel day and he should have apologized immediately. I should have scheduled a weekly meeting right after the conference instead of not properly talking to Alice about work stuff for three weeks, letting the misunderstanding last for so long.
But overall, it wasn’t that the job was bad or they were mistreated. They felt oppressed for some other reason. Maybe it was that Chloe hated being an assistant and found normal assistant work demeaning. Maybe it was because Alice was microdosing LSD nearly every day, sleeping just a few hours a night, and has a lifelong pattern of seeing persecution everywhere. Maybe it’s just because they’ve both struggled to be happy most of their lives and continued to do so for the 4-5 months they were with us. We’ll leave it to them and their loved ones to figure it out.
This combined poorly with our community being traumatized by FTX, being hyper-vigilant for another potential SBF. It also combined with poor epistemics because of the (unfounded) concern about retaliation. And it certainly didn’t help that Ben had already committed to paying them $10,000 before seeing our evidence.
This was a tragedy of errors. It was a bunch of well-intentioned and fallible humans trying to do good in the world. A recipe for trouble, really.
And there will be other conflicts in EA. I know of countless EA conflicts that if somebody pulled a Ben Pace, only looking for negative information attacking one side, would explode in a similarly spectacular fashion.
This doesn’t mean EA is rife with abuse, it just means that EA is rife with humans: humans with strong moral emotions and, on average, poor social skills. We should expect a lot of conflict.
We need to find a better way to deal with this. Our community has been turning on itself with increasing ferocity, and we need to find a better way to recover from FTX.
Let’s do what EA does best: optimize dispassionately, embody scout mindset, and interpret people charitably.
Remember:
Almost nobody is evil.
Almost everything is broken.
Almost everything is fixable.
Let’s fix this.
If you are disturbed by what happened here, here are some ways you can help
How do we prevent the methodology of exclusively seeking and publishing negative information, without fact-checking, from becoming an acceptable norm? This methodology will predictably lead to incorrect conclusions, because wild accusations will always be more viral than boring bank receipts and work contracts, and because looking for disconfirming evidence is vital for truth-seeking and a scout mindset.
Currently, the original post is one of the most upvoted posts of all time on EA Forum and LessWrong, so now one of the first things people see about EA is a gossip column whose claims have been debunked. This reflects poorly on our ethics, epistemics, and priorities.
- Consider if your vote on this post, the original post (EA Forum, LessWrong), and Ben’s victory lap post (EA Forum, LessWrong), reflect your beliefs.
- Contribute to the discourse in the comments. It can be scary to say what you believe in these sorts of threads, but that’s also why it’s especially impactful to do so.
- Given what they have done, a number of people expressed to us that they think Alice/Chloe are a danger to the health of the community and should not be anonymized. We will leave that to the community to decide.
- Publicly or privately say that you would respect Ben massively if he updated on this new information. Right now, he paid $10,000 and received massive karma, so the psychological pressure for him to dig in and never change his mind is immense. However, if Ben pulled a Geoffrey Hinton and was able to update based on new information despite massive psychological pressure against that, that would be an act of impressive epistemic virtue. As a community, we want to make it so that people are rewarded for doing the right but hard thing, and this is one of those times.
Acknowledgments
A big thank you to Spencer Greenberg, Neel Nanda, Nuño Sempere, Geoffrey Miller, Vlad Firoiu, Manuel Allgaier, Luca De Leo, Matt Berkowitz, River Bellamy, and others for providing insightful feedback (though they do not necessarily agree with/endorse anything in this post).

Hey everyone, on an admin note I want to announce that I'm stepping in as "Transition Coordinator." Basically, Max wanted to step down immediately, and choosing an ED even on an interim basis might take a bit, so I will be doing the minimal set of ED-like tasks to keep CEA running and start an ED search.
If things go well you shouldn’t even notice that I’m here, but you can reach me at ben.west@centreforeffectivealtruism.org if you would like to contact me personally.
Hey folks, a reminder to please be thoughtful as you comment.
The previous Nonlinear thread received almost 500 comments; many of these were productive, but there were also some more heated exchanges. Following Forum norms—in a nutshell: be kind, stay on topic, be honest—is probably even more important than usual in charged situations like these.
Discussion here could end up warped towards aggression and confusion for a few reasons, even if commenters are generally well intentioned:
Regarding this paragraph from the post: ...
A short note as a moderator:[1] People (understandably) have strong feelings about discussions that focus on race, and many of us found the content that the post is referencing difficult to read. This means that it's both harder to keep to Forum norms when responding to this, and (I think) especially important.
Please keep this in mind if you decide to engage in a discussion about this, and try to remember that most people on the Forum are here for collaborative discussions about doing good.
If you have any specific concerns, you can also always reach out to the moderation team at forum-moderation@effectivealtruism.org.
Mostly copying this comment from one I made on another post.
I thought we could do a thread for Giving What We Can pledgers and lessons learnt or insights since pledging!
I'll go first: I was actually really worried about how donating 10% would feel, as well as its impact on my finances - but actually it's made me much less stressed about money, knowing I can still have a great standard of living with 10% less. It's actually changed the way I see money and finances and has helped me think about how I can increase my giving in future years.
If folks don't mind, a brief word from our sponsors...
I saw Cremer's post and seriously considered this proposal. Unfortunately I came to the conclusion that the parenthetical point about who comprises the "EA community" is, as far as I can tell, a complete non-starter.
My co-founder from Asana, Justin Rosenstein, left a few years ago to start oneproject.org, and that group came to believe sortition (lottery-based democracy) was the best form of governance. So I came to him with the question of how you might define the electorate in the case of a group like EA. He suggests it's effectively not possible to do well other than in the case of geographic fencing (i.e. where people have invested in living) or by alternatively using the entire world population.
I have not myself come up with a non-geographic strategy that doesn't seem highly vulnerable to corrupt intent or vote brigading. Given that the stakes are the ability to control large sums of money, having people stake some of their own (i.e. become "dues-paying" members of some kind) does not seem like a strong enough mitigation. For example, a hostile takeover almost happened to the Sierra Club in SF in 2015 (albeit fo...
It is 2AM in my timezone, and come morning I may regret writing this. By way of introduction, let me say that I dispositionally skew towards the negative, and yet I do think that OP is amongst the best if not the best foundation in its weight class. So this comment generally doesn't compare OP against the rest but against the ideal.
One way which you could allow for somewhat democratic participation is through futarchy, i.e., using prediction markets for decision-making. This isn't vulnerable to brigading because it requires putting proportionally more money in the more influence you want to have, but at the same time this makes it less democratic.
More realistically, some proposals in that broad direction which I think could actually be implementable could be:
- allowing people to bet against particular OpenPhilanthropy grants producing successful outcomes.
- allowing people to bet against OP's strategic decisions (e.g., against worldview diversification)
- I'd love to see bets between OP and other organizations about whose funding is more effective, e.g., I'd love to see a bet between you and Jaan Tallinn on whose approach is better, where the winner gets some large amount (e.g., $20...
Hi Dustin :)
FWIW I also don't particularly understand the normative appeal of democratizing funding within the EA community. It seems to me like the common normative basis for democracy would tend to argue for democratizing control of resources in a much broader way, rather than within the self-selected EA community. I think epistemic/efficiency arguments for empowering more decision-makers within EA are generally more persuasive, but wouldn't necessarily look like "democracy" per se and might look more like more regranting, forecasting tournaments, etc.
A couple replies imply that my research on the topic was far too shallow and, sure, I agree.
But I do think that shallow research hits different from my POV, where the one person I have worked most closely with across nearly two decades happens to be personally well researched on the topic. What a fortuitous coincidence! So the fact that he said "yea, that's a real problem" rather than "it's probably something you can figure out with some work" was a meaningful update for me, given how many other times we've faced problems together.
I can absolutely believe that a different person, or further investigation generally, would yield a better answer, but I consider this a fairly strong prior rather than an arbitrary one. I also can't point at any clear reference examples of non-geographic democracies that appear to function well and have strong positive impact. A priori, it seems like a great idea, so why is that?
The variations I've seen so far in the comments (like weighing forum karma) increase trust and integrity in exchange for decreasing the democratic nature of the governance, and if you walk all the way along that path you get to institutions.
On behalf of Chloe and in her own words, here’s a response that might illuminate some pieces that are not obvious from Ben’s post - as his post is relying on more factual and object-level evidence, rather than the whole narrative.
“Before Ben published, I found thinking about or discussing my experiences very painful, as well as scary - I was never sure with whom it was safe sharing any of this with. Now that it’s public, it feels like it’s in the past and I’m able to talk about it. Here are some of my experiences I think are relevant to understanding what went on. They’re harder to back up with chatlog or other written evidence - take them as you want, knowing these are stories more than clearly backed up by evidence. I think people should be able to make up their own opinion on this, and I believe they should have the appropriate information to do so.
I want to emphasize *just how much* the entire experience of working for Nonlinear was them creating all kinds of obstacles, and me being told that if I’m clever enough I can figure out how to do these tasks anyway. It’s not actually about whether I had a contract and a salary (even then, the issue wasn’t the amount or even the legali...
I confirm that this is Chloe, who contacted me through our standard communication channels to say she was posting a comment today.
Thank you very much for sharing, Chloe.
Ben, Kat, Emerson, and readers of the original post have all noticed that the nature of Ben's process leads to selection against positive observations about Nonlinear. I encourage readers to notice that the reverse might also be true. Examples of selection against negative information include:
- Ben has reason to exclude stories that are less objective or have a less strong evidence base. The above comment is a concrete example of this.
- There's also something related here about the supposed unreliability of Alice as a source: Ben needs to include this to give a complete picture/because other people (in particular the Nonlinear co-founders) have said this. I strongly concur with Ben when he writes that he "found Alice very willing and ready to share primary sources [...] so I don’t believe her to be acting in bad faith." Personally, my impression is that people are making an incorrect inference about Alice from her characteristics (that are perhaps correlated with source-reliability in a large population, but aren't logically related, and aren't relevant in this case).
- To the extent that you expect other people to have been silenced (e.g. via antici
😬 There's a ton of awful stuff here, but these two parts really jumped out at me. Trying to push past someone's boundaries by imposing a narrative about the type of person they are ('but you're the type of person who loves doing X!' 'you're only saying no because you're the type of person who worries too much') is really unsettling behavior.
I'll flag that this is an old remembered anecdote, and those can be unreliable, and I haven't heard Emerson or Kat's version of events. But it updates me, because Chloe seems like a pretty good source and this puzzle piece seems congruent with the other puzzle pieces.
E.g., the vibe here matches something that creeped me out a lot about Kat's text message to Alice in the OP, which is the apparent attempt to corner/railroad Alice into agreement via a bunch of threats and strongly imposed frames, followed immediately by Kat repeatedly stat...
This sounds like a terribly traumatic experience. I'm so sorry you went through this, and I hope you are in a better place and feel safer now.
Your self-worth is so, so much more than how well you can navigate what sounds like a manipulative, controlling, and abusive work environment.
It sounds like despite all of this, you've tried to be charitable to people who have treated you unfairly and poorly - while this speaks to your compassion, I know this line of thought can often lead to things that feel like you are gaslighting yourself, and I hope this isn't something that has caused you too much distress.
I also hope that Effective Altruism as a community becomes a safer space for people who join it aspiring to do good, and I'm grateful for your courage in sharing your experiences, despite it (very reasonably!...
I’m responding on behalf of the community health team at the Centre for Effective Altruism. We work to prevent and address problems in the community, including sexual misconduct.
I find the piece doesn’t accurately convey how my team, or the EA community more broadly, reacts to this sort of behavior.
We work to address harmful behavior, including sexual misconduct, because we think it’s so important that this community has a good culture where people can do their best work without harassment or other mistreatment. Ignoring problems or sweeping them under the rug would be terrible for people in the community, EA’s culture, and our ability to do good in the world.
My team didn’t have a chance to explain the actions we’ve already taken on the incidents described in this piece. The incidents described here include:
We’ll be going through the piece to see if there are any situations we might be able to address further, but in most of them there’s not enough information to do so. If you...
There's a lot of discussion here about why things don't get reported to the community health team, and what they're responsible for, so I wanted to add my own bit of anecdata.
I'm a woman who has been closely involved with a particularly gender-imbalanced portion of EA for 7 years, who has personally experienced and secondhand heard about many issues around gender dynamics, and who has never reported anything to the community health team (despite several suggestions from friends to). Now I'm considering why.
Upon reflection, here are a few reasons:
- Early on, some of it was naiveté. I experienced occasional inappropriate comments or situations from senior male researchers when I was a teenager, but assumed that they could never be interested in me because of the age and experience gap. At the time I thought that I must be misinterpreting the situation, and only see it the way I do now with the benefit of experience and hindsight. (I never felt unsafe, and if I had, would have reported it or left.)
- Often, the behavior felt plausibly deniable. "Is this person asking me to meet at a coffeeshop to discuss research or to hit on me? How about meeting at a bar? Going for a walk on the be
To give a little more detail about what I think gave wrong impressions -
Last year as part of a longer piece about how the community health team approaches problems, I wrote a list of factors that need to be balanced against each other. One that’s caused confusion is “Give people a second or third chance; adjust when people have changed and improved.” I meant situations like “someone has made some inappropriate comments and gotten feedback about it,” not something like assault. I’m adding a note to the original piece clarifying.
What proportion of the incidents described was the team unaware of?
I appreciate you taking the effort to write this. However, like other commentators I feel that if these proposals were implemented, EA would just become the same as many other left wing social movements, and, as far as I can tell, would basically become the same as standard forms of left wing environmentalism which are already a live option for people with this type of outlook, and get far more resources than EA ever has. I also think many of the proposals here have been rejected for good reason, and that some of the key arguments are weak.
- You begin by citing the Cowen quote that "EAs couldn't see the existential risk to FTX even though they focus on existential risk". I think this is one of the more daft points made by a serious person on the FTX crash. Although the words 'existential risk' are the same here, they have completely different meanings, one being about the extinction of all humanity or things roughly as bad as that, and the other being about risks to a particular organisation. The problem with FTX is that there wasn't enough attention to existential risks to FTX and the implications this would have for EA. In contrast, EAs have put umpteen pers
I don't think I am a great representative of EA leadership, given my somewhat bumpy relationship and feelings towards a lot of EA stuff, but nevertheless I think I have a bunch of the answers that you are looking for:
The Coordination Forum is a very loosely structured retreat that's been happening around once a year. At least the last two that I attended were structured completely as an unconference with no official agenda, and the attendees just figured out themselves who to talk to, and organically wrote memos and put sessions on a shared schedule.
At least as far as I can tell basically no decisions get made at Coordination Forum, and its primary purpose is building trust and digging into gnarly disagreements between different people who are active in EA community building, and who seem to get along well with the others attending (with some bal...
There are 8.1 billion people on the planet and afaict 8,099,999,999 of them donate less to my favorite causes & orgs than @Dustin Moskovitz. That was true before this update and it will remain true after it. Like everyone else I have elaborate views on how GV/OP should spend money/be structured etc but let the record also show that I appreciate the hell out of Dustin & Cari, we got so lucky 🥲
Hey, I wanted to clarify that Open Phil gave most of the funding for the purchase of Wytham Abbey (a small part of the costs were also committed by Owen and his wife, as a signal of “skin in the game”). I run the Longtermist EA Community Growth program at Open Phil (we recently launched a parallel program for EA community growth for global health and wellbeing, which I don’t run) and I was the grant investigator for this grant, so I probably have the most context on it from the side of the donor. I’m also on the board of the Effective Ventures Foundation (EVF).
Why did we make the grant? There are two things I’d like to discuss about this, the process we used/context we were in, and our take on the case for the grant. I’ll start with the former.
Process and context: At the time we committed the funding (November 2021, though the purchase wasn’t completed until April 2022), there was a lot more apparent funding available than there is today, both from Open Phil and from the Future Fund. Existential risk reduction and related efforts seemed to us to have a funding overhang, and we were actively looking for more ways to spend money to support more good work, e...
Hi - Thanks so much for writing this. I'm on holiday at the moment so have only been able to quickly skim your post and paper. But, having got the gist, I just wanted to say:
(i) It really pains me to hear that you lost time and energy as a result of people discouraging you from publishing the paper, or that you had to worry over funding on the basis of this. I'm sorry you had to go through that.
(ii) Personally, I'm excited to fund or otherwise encourage engaged and in-depth "red team" critical work on either (a) the ideas of EA, longtermism or strong longtermism, or (b) what practical implications have been taken to follow from EA, longtermism, or strong longtermism. If anyone reading this comment would like funding (or other ways of making their life easier) to do (a) or (b)-type work, or if you know of people in that position, please let me know at will@effectivealtruism.org. I'll try to consider any suggestions, or put the suggestions in front of others to consider, by the end of January.
I'm one of the people (maybe the first person?) who made a post saying that (some of) Kathy's accusations were false. I did this because those accusations were genuinely false, could have seriously damaged the lives of innocent people, and I had strong evidence of this from multiple very credible sources.
I'm extremely prepared to defend my actions here, but prefer not to do it in public in order to not further harm anyone else's reputation (including Kathy's). If you want more details, feel free to email me at scott@slatestarcodex.com and I will figure out how much information I can give you without violating anyone's trust.
I'm glad you made your post about how Kathy's accusations were false. I believe that was the right thing to do -- certainly given the information you had available.
But I wish you had left this sentence out, or written it more carefully:
It was obvious to me reading this post that the author made a really serious effort to stay constructive. (Thanks for that, Maya!) It seems to me that we should recognize that, and you're erasing an important distinction when you categorize the OP with imprudent tumblr call-out posts.
If nothing else, no one is being called out by name here, and the author doesn't link any of the tumblr posts and Reddit threads she refers to.
I don't think causing reputational harm to any individual was the author's intent in writing this. Fear of unfair individual reputational harm from what's written here seems a bit unjustified.
EDIT: After some time to cool down, I've removed that sentence from the comment, and somewhat edited this comment which was originally defending it.
I do think the sentence was true. By that I mean that (this is just a guess, not something I know from specifically asking them) the main reason other people were unwilling to post the information they had, was because they were worried that someone would write a public essay saying "X doesn't believe sexual assault victims" or "EA has a culture of doubting sexual assault victims". And they all hoped someone else would go first to mention all the evidence that these particular rumors were untrue, so that that person could be the one to get flak over this for the rest of their life (which I have, so good prediction!), instead of them. I think there's a culture of fear around these kinds of issues that it's useful to bring to the foreground if we want to model them correctly.
But I think you're gesturing at a point where if I appear to be implicitly criticizing Maya for bringing that up, fewer people will bring things like that up in the future, and even if this particular episode was false, many similar ones will be true, so her bri...
I want to strong agree with this post, but a forum glitch is preventing me from doing so, so mentally add +x agreement karma to the tally. [Edit: fixed and upvoted now]
I have also heard from at least one very credible source that at least one of Kathy's accusations had been professionally investigated and found without any merit.
Maybe also worth adding that the way she wrote the post would, in a healthy person, have been intentionally misleading, and was at the very least incredibly careless given the strength of the accusation. E.g. there was some line to the effect of 'CFAR are involved in child abuse', where the claim was link-highlighted in a way that strongly suggested corroborating evidence but, as in that paraphrase, the link in fact just went directly to whatever the equivalent website was then for CFAR's summer camp.
It's uncomfortable berating the dead, but much more important to preserve the living from incredibly irresponsible aspersions like this.
(This comment is more of a general response to this post and others about Manifest than a response to what Austin has specifically said here)
I am a black person who attended Manifest, and I will say that I almost didn't attend because of Hanania, but decided to anyway because my interest in it outweighed my disagreements with his work.
I walked past a conversation he was having where he was asked why he thinks "minorities [black people] perform so poorly in so many domains," which did not feel great, but I also chatted to someone who runs a similar Twitter account to his and briefly told him my issues with it, which he was receptive to. I overall prefer cultures that give me space to have those sorts of conversations, but I do flinch a bit at the fact that my demographic is on the receiving end of so much of this. Many of the "edgy" people were super nice to me, I had fun conversations about other things with some of them, and their presence didn't take away from my overall experience. I felt fine after those interactions, but many people wouldn't. Perhaps they don’t “belong” at Manifest, but that explanation isn’t very satisfying to me.
I think I'm much more tolerant of this sort of dynamic... (read more)
Hi Carla and Luke, I was sad to hear that you and others were concerned that funders would be angry with you or your institutions for publishing this paper. For what it's worth, raising these criticisms wouldn't count as a black mark against you or your institutions in any funding decisions that I make. I'm saying this here publicly in case it makes others feel less concerned that funders would retaliate against people raising similar critiques. I disagree with the idea that publishing critiques like this is dangerous / should be discouraged.
+1 to everything Nick said, especially the last sentence. I'm glad this paper was published; I think it makes some valid points (which doesn't mean I agree with everything), and I don't see the case that it presents any risks or harms that should have made the authors consider withholding it. Furthermore, I think it's good for EA to be publicly examined and critiqued, so I think there are substantial potential harms from discouraging this general sort of work.
Whoever told you that funders would be upset by your publishing this piece, they didn't speak for Open Philanthropy. If there's an easy way to ensure they see this comment (and Nick's), it might be helpful to do so.
+1, EA Funds (which I run) is interested in funding critiques of popular EA-relevant ideas.
This was incredibly upsetting for me to read. This is the first time I've ever felt ashamed to be associated with EA. I apologize for the tone of the rest of the comment, can delete it if it is unproductive, but I feel a need to vent.
One thing I would like to understand better is to what extent this is a bay area issue versus EA in general. My impression is that a disproportionate fraction of abuse happens in the bay. If this suspicion is true, I don't know how to put this politely, but I'd really appreciate it if the bay area could get its shit together.
In my spare time I do community building in Denmark. I will be doing a workshop for the Danish academy of talented highschool students in April. How do you imagine the academy organizers will feel seeing this in TIME magazine?
What should I tell them? "I promise this is not an issue in our local community"?
I've been extremely excited to prepare this event. I would get to teach Denmark's brightest high schoolers about hierarchies of evidence, help them conduct their own cost-effectiveness analyses, and hopefully inspire a new generation to take action to make the world a better place.
Now I have to worry about whether it would be more appropriate to send the organizers a heads up informing them about the article and give them a chance to reconsider working with us.
I frankly feel unequipped to deal with something like this.
A response to why a lot of the abuse happens in the Bay Area:
"I am one of the people in the Time Mag article about sexual violence in EA. In the video below I clarify some points about why the Bay Area is the epicenter of so many coercive dynamics, including the hacker house culture, which are like frat houses backed by billions in capital, but without oversight of HR departments or parent institutions. This frat house/psychedelic/male culture, where a lot of professional networking happens, creates invisible glass ceilings for women."
tweet: https://twitter.com/soniajoseph_/status/1622002995020849152
Zooming out from this particular case, I’m concerned that our community is both (1) extremely encouraging and tolerant of experimentation and poor, undefined boundaries and (2) very quick to point the finger when any experiment goes wrong. If we don’t want to have strict professional norms I think it’s unfair to put all the blame on failed experiments without updating the algorithm that allows people embark on these experiments with community approval.
To be perfectly clear, I think this community has poor professional boundaries and a poor understanding of why normie boundaries exist. I would like better boundaries all around. I don’t think we get better boundaries by acting like a failure like this is due to character or lack of integrity instead of bad engineering. If you wouldn’t have looked at it before it imploded and thought the engineering was bad, I think that’s the biggest thing that needs to change. I’m concerned that people still think that if you have good enough character (or are smart enough, etc), you don’t need good boundaries and systems.
Yep, I think this is a big problem.
More generally, I think a lot of EAs give lip service to the value of people trying weird new ambitious things, "adopt a hits-based approach", "if you're never failing then you're playing it too safe", etc.; but then we harshly punish visible failures, especially ones that are the least bit weird. In cases like those, I think the main solution is to be more forgiving of failures, rather than to give up on ambitious projects.
From my perspective, none of this is particularly relevant to what bothers me about Ben's post and Nonlinear's response. My biggest concern about Nonlinear is their attempt to pressure people into silence (via lawsuits, bizarre veiled threats, etc.), and "I really wish EAs would experiment more with coercing and threatening each other" is not an example of the kind of experimentalism I'm talking about when I say that EAs should be willing to try and fail at more things (!).
"Keep EA weird" does not entail "have low ethical standards... (read more)
It's fair enough to feel betrayed in this situation, and to speak that out.
But given your position in the EA community, I think it's much more important to put effort towards giving context on your role in this saga.
Some jumping-off points:
- Did you consider yourself to be in a mentor / mentee relationship with SBF prior to the founding of FTX? What was the depth and cadence of that relationship?
- e.g. from this Sequoia profile (archived as they recently pulled it from their site):
- What diligence did you / your team do on FTX before agreeing to join the Future Fund as an advisor?
... (read more)"The math, MacAskill argued, means that if one’s goal is to optimize one’s life for doing good, often most good can be done by choosing to make the most money possible—in order to give it all away. “Earn to give,” urged MacAskill.
... And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth. SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.”"
[Edit after months: While I still believe these are valid questions, I now think I was too hostile, overconfident, and not genuinely curious enough.] One additional thing I’d be curious about:
You played the role of a messenger between SBF and Elon Musk in a bid for SBF to invest up to 15 billion of (presumably mostly his) wealth in an acquisition of Twitter. The stated reason for that bid was to make Twitter better for the world. This has worried me a lot over the last weeks. It could have easily been the most consequential thing EAs have ever done and there has - to my knowledge- never been a thorough EA debate that signalled that this would be a good idea.
What was the reasoning behind the decision to support SBF by connecting him to Musk? How many people from FTXFF or EA at large were consulted to figure out if that was a good idea? Do you think that it still made sense at the point you helped with the potential acquisition to regard most of the wealth of SBF as EA resources? If not, why did you not inform the EA community?
Source for claim about playing a messenger: https://twitter.com/tier10k/status/1575603591431102464?s=20&t=lYY65-TpZuifcbQ2j2EQ5w
I don't think EAs should necessarily require a community-wide debate before making major decisions, including investment decisions; sometimes decisions should be made fast, and often decisions don't benefit a ton from "the whole community weighs in" over "twenty smart advisors weighed in".
But regardless, seems interesting and useful for EAs to debate this topic so we can form more models of this part of the strategy space -- maybe we should be doing more to positively affect the world's public fora. And I'd personally love to know more about Will's reasoning re Twitter.
I think it's important to note that many experts, traders, and investors did not see this coming, or they could have saved/made billions.
It seems very unfair to ask fund recipients to significantly outperform the market and most experts, while having access to way less information.
See this Twitter thread from Yudkowsky
Edit: I meant to refer to fund advisors, not (just) fund recipients
Also from the Sequoia profile: "After SBF quit Jane Street, he moved back home to the Bay Area, where Will MacAskill had offered him a job as director of business development at the Centre for Effective Altruism." It was precisely at this time that SBF launched Alameda Research, with Tara Mac Aulay (then the president of CEA) as a co-founder ( https://www.bloomberg.com/news/articles/2022-07-14/celsius-bankruptcy-filing-shows-long-reach-of-sam-bankman-fried).
To what extent was Will or any other CEA figure involved with launching Alameda and/or advising it?
One specific question I would want to raise is whether EA leaders involved with FTX were aware of or raised concerns about non-disclosed conflicts of interest between Alameda Research and FTX.
For example, I strongly suspect that EAs tied to FTX knew that SBF and Caroline (CEO of Alameda Research) were romantically involved (I strongly suspect this because I have personally heard Caroline talk about her romantic involvement with SBF in private conversations with several FTX fellows). Given the pre-existing concerns about the conflicts of interest between Alameda Research and FTX (see examples such as these), if this relationship were known to be hidden from investors and other stakeholders, should this not have raised red flags?
Would Jimmy personally (or the business) ever consider taking a public pledge to give to effective charities, like the 🔸 10% Pledge - a pledge to donate at least 10% of income until you retire to the organisations that can most improve the lives of others?
Prominent pledgers like podcaster Sam Harris, youtuber Ali Abdaal, author and historian Rutger Bregman amongst others have raised awareness of the pledges we offer, as well as the principles of effective charities - and influenced more than 1500 people to take a pledge to give, which we estimate will generate over $100m USD of effective donations over time.
Hi Scott — I work for CEA as the lead on EA Global and wanted to jump in here.
Really appreciate the post — having a larger, more open EA event is something we’ve thought about for a while and are still considering.
I think there are real trade-offs here. An event that’s more appealing to some people is more off-putting to others, and we’re trying to get the best balance we can. We’ve tried different things over the years, which can lead to some confusion (since people remember messaging from years ago) but also gives us some data about what worked well and badly when we’ve tried more open or more exclusive events.
- We’ve asked people’s opinion on this. When we’ve polled our advisors including leaders from various EA organizations, they’ve favored more selective events. In our most recent feedback surveys, we’ve asked attendees whether they think we should have more attendees. For SF 2022, 34% said we should increase the number, 53% said it should stay the same, and 14% said it should be lower. Obviously there’s selection bias here since these are the people who got in, though.
- To your “...because people will refuse to apply out of scrupulosity” point — I want to clarify tha...
FWIW I generally agree with Eli's reply here. I think maybe EAG should 2x or 3x in size, but I'd lobby for it to not be fully open.
The timeline (in PT time zone) seems to be:
Jan 13, 12:46am: Expo article published.
Jan 13, 4:20am: First mention of this on the EA Forum.
Jan 13, 6:46am: Shakeel Hashim (speaking for himself and not for CEA; +110 karma, +109 net agreement as of the 15th) writes, "If this is true it's absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don't understand why they haven't. If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable. I don't think people who would do something like that ought to have any place in this community."
Jan 13, 9:18pm: Shakeel follows up, repeating ~~that he sees no reason why FLI wouldn't have already made a public statement~~ that it's really weird that FLI hasn't already made a public statement, and raises the possibility that FLI has maybe done sinister questionably-legal things and that's why they haven't spoken up.
Jan 14, 3:43am: You (titotal) comment, "If the letter is genuine (and they have never denied that it is), then someone at FLI is either grossly incompetent or malicious. They need to address this ASAP."
Jan 14, 8:16am: Jason comments (+15 karma, +13 net agreement a...
Thanks for calling me out on this — I agree that I was too hasty to call for a response.
I’m glad that FLI has shared more information, and that they are rethinking their procedures as a result of this. This FAQ hasn’t completely alleviated my concerns about what happened here — I think it’s worrying that something like this can get to the stage it did without it being flagged (though again, I'm glad FLI seems to agree with this). And I also think that it would have been better if FLI had shared some more of the FAQ info with Expo too.
I do regret calling for FLI to speak up sooner, and I should have had more empathy for the situation they were in. I posted my comments not because I wanted to throw FLI under the bus for PR reasons, but because I was feeling upset; coming on the heels of the Bostrom situation I was worried that some people in the EA community were racist or at least not very sensitive about how discussions of race-related things can make people feel. At the time, I wanted to do my bit to make it clear — in particular to other non-white people who felt similarly to me — that EA isn’t racist. But I could and should have done that in a much better way. I’m sorry.
FTX collapsed on November 8th; all the key facts were known by the 10th; CEA put out their statement on November 12th. This is a totally reasonable timeframe in which to respond. I would have hoped that this experience would make CEA sympathetic to a fellow EA org (with far fewer resources than CEA) experiencing a media crisis, rather than being so quick to condemn.
I'm also not convinced that a Head of Communications, working for an organization with a very restrictive media policy for employees, commenting on a matter of importance for that organization, can really be said to be operating in a personal capacity. Despite claims to the contrary, I think it's pretty reasonable to interpret these as official CEA communications. Skill at a PR role is as much about what you do not say as what you do.
Maya, I’m so sorry that things have made you feel this way. I know you’re not alone in this. As Catherine said earlier, either of us (and the rest of the community health team) are here to talk and try to support.
I agree it’s very important that no one should get away with mistreating others because of their status, money, etc. One of the concerns you raise related to this is an accusation that Kathy Forth made. When Kathy raised concerns related to EA, I investigated all the cases where she gave me enough information to do so. In one case, her information allowed me to confirm that a person had acted badly, and to keep them out of EA Global.
At one point we arranged for an independent third party attorney who specialized in workplace sexual harassment claims to investigate a different accusation that Kathy made. After interviewing Kathy, the accused person, and some other people who had been nearby at the time, the investigator concluded that the evidence did not support Kathy’s claim about what had happened. I don’t think Kathy intended to misrepresent anything, but I think her interpretation of what happened was different than what most people’s would have been.
I do want pe...
As AI heats up, I'm excited and frankly somewhat relieved to have Holden making this change. While I agree with 𝕮𝖎𝖓𝖊𝖗𝖆's comment below that Holden had a lot of leverage on AI safety in his recent role, I also believe he has a vast amount of domain knowledge that can be applied more directly to problem solving. We're in shockingly short supply of that kind of person, and the need is urgent.
Alexander has my full confidence in his new role as the sole CEO. I consider us incredibly fortunate to have someone like him already involved and prepared to succeed as the leader of Open Philanthropy.
I know that lukeprog's comment is mostly replying to the insecurity about lack of credentials in the OP. Still, the most upvoted answer seems a bit ironic in the broader context of the question:
If you read the comment without knowing Luke, you might be like "Oh yeah, that sounds encouraging." Then you find out that he wrote this excellent 100+ page report on the neuroscience of consciousness, which is possibly the best resource on this on the internet, and you're like "Uff, I'm f***ed."
Luke is (tied with Brian Tomasik) the most genuinely modest person I know, so it makes sense that it seems to him like there's a big gap between him and even smarter people in the community. And there might be, maybe. But that only makes the whole situation even more intimidating.
It's a tough spot to be in and I only have advice that maybe helps make the situation tolerable, at least.
Related to the advice about Stoicism, I recommend viewing EA as a game with varying levels of difficulty.
I’m part of Anima International’s leadership as Director of Global Development (so please note that Animal Charity Evaluators’ negative view of the leadership quality is, among others, about me).
As the author noted, this topic is politically charged and additionally, as Anima International, we consider ourselves ‘a side’, so our judgment here may be heavily biased. This is why, even though we read this thread, we are quite hesitant to comment.
Nevertheless, I can offer a few factual points here that may clear up some of the author’s confusion or correct some things people got wrong in the comments.
We asked ACE for their thoughts on these points to make sure we are not misconstruing what happened due to a biased perspective. After a short conversation with Anima International, ACE preferred not to comment. They declined to correct what they feel is factually incorrect and instead let us know that they will post a reply to my post to avoid confusion, which we welcome.
1.
The author wrote: “it's possible that some Anima staff made private comments that are much worse than what is public”
While I don’t want to comment or judge whether comments are better or worse, we specifically asked ACE to publish all...
While I understand that people generally like Owen, I believe we need to ensure that we are not overlooking the substance of his message and giving him an overly favorable response.
Owen's impropriety may be extensive. Just because one event was over 5 years ago does not mean that the other >=3 events were (and if they were, one expects he would tell us). Relatedly, if it indeed was the most severe mistake of this nature, there may have been more severe mistakes of somewhat different kinds. There may yet be further events that haven't yet been reported to, or disclosed by, Owen, and indeed, on the outside view, most such events would not be reported.
What makes things worse is the kind of career Owen has pursued over the last 5+ years. Owen's work centered on: i) advising orgs and funders, ii) hiring junior researchers, and iii) hosting workshops, often residential, and with junior researchers. If as Owen says, you know as of 2021-22 that you have deficiencies in dealing with power dynamics, and there have been a series of multiple events like this, then why are you still playing the roles described in (i-iii)? His medium term career trajectory, even relative to other EAs, is in...
I want to make a small comment on your phrase "it could have a chilling effect on those who have their own cases of sexual assault to report." Owen has not committed sexual assault, but sexual harassment. If this imperfect wording was an isolated incident, I wouldn't have said anything, but in every sexual misconduct comment thread I've followed on the forum, people have said sexual assault when they mean sexual harassment, and/or rape when they mean sexual assault. I was a victim of sexual abuse both growing up and as an adult, so I'm aware that there are big differences between the three, and feel it would be helpful to be mindful of our wording.
For context, I'm black (Nigerian in the UK).
I'm just going to express my honest opinions here:
The events of the last 48 hours (slightly) raised my opinion of Nick Bostrom. I was very relieved that Bostrom did not compromise his epistemic integrity by expressing more socially palatable views that are contrary to those he actually holds.
I think it would be quite tragic to compromise on honestly/accurately reporting our beliefs, when the situation calls for it, in order to fit in better. I'm very glad Bostrom did not do that.
As for the contents of the email itself, while very distasteful, they were sent in a particular context to be deliberately offensive, and Bostrom did regret it and apologise for it at the time. I don't think it's useful/valuable to judge him on the basis of an email he sent a few decades ago as a student. The Bostrom that sent the email did not reflectively endorse its contents, and current Bostrom does not either.
I'm not interested in a discussion on race & IQ, so I deliberately avoided addressing that.
With apologies, I would like to share some rather lengthy comments on the present controversy. My sense is that they likely express a fairly conventional reaction. However, I have not yet seen any commentary that entirely captures this perspective. Before I begin, I perhaps also ought to apologise for my decision to write anonymously. While none of my comments here are terribly exciting, I would like to think, I hope others can still empathise with my aversion to becoming a minor character in a controversy of this variety.
Q: Was the message in question needlessly offensive and deserving of an apology?
Yes, it certainly was. By describing the message as "needlessly offensive," what I mean to say is that, even if Prof. Bostrom was committed to making the same central point that is made in the message, there was simply no need for the point to be made in such an insensitive manner. To put forward an analogy, it would be needlessly offensive to make a point about free speech by placing a swastika on one’s shirt and wearing it around town. This would be a highly insensitive decision, even if the person wearing the swastika did not hold or intend to express any of the views associated wit...
I. It might be worth reflecting upon how large a part of this seems tied to something like "climbing the EA social ladder".
E.g. just from the first part, emphasis mine
Replace "EA" by some other environment with prestige gradients, and you have something like a highly generic social climbing guide. Seek cool kids, hang around them, go to exclusive parties, get good at signalling.
II. This isn't to say this is bad. Climbing the ladder to some extent could be instrumentally useful, or even necessary, for an ability to do some interesting things, sometimes.
III. But note the hidden costs. Climbing the social ladder can trade off against building things. Learning all the Berkeley vibes can trade off against, e.g., learning the math actually useful for understanding agen...
Thanks Habryka. My reason for commenting is that a one-sided story is being told here about the administrative/faculty relationship stuff, both by FHI and in the discussion here, and I feel it to be misleading in its incompleteness. It appears Carrick and I disagree and I respect his views, but I think many people who worked at FHI felt it to be severely administratively mismanaged for a long time. I felt presenting that perspective was important for trying to draw the right lessons.
I agree with the general point that maintaining independence under this kind of pressure is extremely hard, that there are difficult tradeoffs to make. I believe Nick made many of the right decisions in maintaining integrity and independence, and sometimes incurred costly penalties to do so that likely contributed to the administrative/bureaucratic tensions with the faculty. However, I think part of what is happening here is that some quite different things from the working-inside-FHI perspective are being conflated under a broad 'heading' (intellectual integrity/independence) which sometimes overlapped, but often relatively minimally, and can be usefully disaggregated - intellectual vision and integrity; fol...
And I guess I should just say directly. I do wish it were possible to raise (specific) critical points on matters like faculty relations where I have some direct insight and discuss these, without immediate escalation to counterclaims that my career’s work has been bad for the world, that I am not to be trusted, and that my influence is somehow responsible for attacks on people’s intellectual integrity. It’s very stressful and upsetting.
I suffer from (mild) social anxiety. That is not uncommon. This kind of very forceful interaction is valuable for some people but is difficult and costly for others to engage with. I am going to engage less with EA forum/LW as a result of this and a few similar interactions, and I am especially going to be more hesitant to be critical of EA/LW sacred cows. I imagine, given what you have said about my takes, that this will be positive from your perspective. So be it. But you might also consider the effect it will have on others who might be psychologically similar, and whose takes you might consider more valuable.
Epistemic status: Probably speaking too strongly in various ways, and probably not with enough empathy, but also feeling kind of lonely and with enough pent-up frustration about how things have been operating that I want to spend some social capital on this, and want to give a bit of a "this is my last stand" vibe.
It's been a few more days, and I do want to express frustration with the risk-aversion and guardedness I have experienced from CEA and other EA organizations in this time. I think this is a crucial time to be open, and to stop playing dumb PR games that are, in my current tentative assessment of the situation, one of the primary reasons why we got into this mess in the first place.
I understand there is some legal risk, and I am trying to track it myself quite closely. I am also worried that you are trying to run a strategy of "try to figure out everything internally and tell nice narratives about where we are all at afterwards", and I think the mess that strategy has already gotten us into is so great that I don't think now is the time to double down on that strategy.
Please, people at CEA and other EA organizations, come and talk to the community. Explore with us what ...
Turning to the object level: I feel pretty torn here.
On the one hand, I agree the business with CARE was quite bad and share all the standard concerns about SJ discourse norms and cancel culture.
On the other hand, we've had quite a bit of anti-cancel-culture stuff on the Forum lately. There's been much more of that than of pro-SJ/pro-DEI content, and it's generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly.
I'm sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to alienate them and promote a split in the movement, while also exposing EA to substantial PR risk. I think a lot of more SJ-sympathetic EAs already feel that the Forum is not a space for them – simply affirming that doesn't seem to me to be terribly useful. Not giving ACE prior warning before publishing the post further cements an adversarial us-and-them dynamic I'm not very happy about.
I don't really know how that cashes out as far as this post and posts like it are concerned. Biting one's tongue about what does seem like problematic behaviour would hardly be ideal. But as I've said several times in the past, I do wish we could be having this discussion in a more productive and conciliatory way, which has less of a chance of ending in an acrimonious split.
I agree with the content of your comment, Will, but feel a bit unhappy with it anyway. Apologies for the unpleasantly political metaphor, but as an intuition pump imagine the following comment.
"On the one hand, I agree that it seems bad that this org apparently has a sexual harassment problem. On the other hand, there have been a bunch of posts about sexual misconduct at various orgs recently, and these have drawn controversy, and I'm worried about the second-order effects of talking about this misconduct."
I guess my concern is that it seems like our top priority should be saying true and important things, and we should err on the side of not criticising people for doing so.
More generally I am opposed to "Criticising people for doing bad-seeming thing X would put off people who are enthusiastic about thing X."
Another take here is that if a group of people are sad that their views aren't sufficiently represented on the EA forum, they should consider making better arguments for them. I don't think we should try to ensure that the EA forum has proportionate amounts of pro-X and anti-X content for all X. (I think we should strive to evaluate content fairly; this involves not being more or less enthusiastic about content about views based on its popularity (except for instrumental reasons like "it's more interesting to hear arguments you haven't heard before").)
EDIT: Also, I think your comment is much better described as meta level than object level, despite its first sentence.
"On the other hand, we've had quite a bit of anti-cancel-culture stuff on the Forum lately. There's been much more of that than of pro-SJ/pro-DEI content, and it's generally got much higher karma. I think the message that the subset of EA that is highly active on the Forum generally disapproves of cancel culture has been made pretty clearly"
Perhaps. However, this post makes specific claims about ACE. And even though these claims have been discussed somewhat informally on Facebook, this post provides a far more solid writeup. So it does seem to be making a significantly new contribution to the discussion and not just rewarming leftovers.
It would have been better if Hypatia had emailed the organisation ahead of time. However, I believe ACE staff members might have already commented on some of these issues (correct me if I'm wrong). And it's more of a good practice than a strict requirement - I totally understand the urge to just get something out there.
"I'm sceptical that further content in this vein will have the desired effect on EA and EA-adjacent groups and individuals who are less active on the Forum, other than to al... (read more)
I feel there's a bit of a "missing mood" in some of the comments here, so I want to say:
I felt shocked, hurt, and betrayed at reading this. I never expected the Oxford incident to involve someone so central and well-regarded in the community, and certainly not Owen. Other EAs I know who knew Owen and the Oxford scene better are even more deeply hurt and surprised by this. (As other commenters here have already attested, tears have not been uncommon.)
Despite the length and thoughtfulness of the apology, it's difficult for me to see how someone who was already in a position of power and status in EA -- a community many of us see as key to the future of humanity -- behaved in a way that seems so inappropriate and destructive. I'm angry not only at the harm that was done to women trying to do good in the world, but also to the health, reputation, and credibility of our community. We deserve better from our leaders.
I really sympathize with all the EAs -- especially women -- who feel betrayed and undermined by this news. To all of you who've had bad experiences like this in EA -- I'm really sorry. I hope we can do better. I think we can do better -- I think we already have the seed...
I appreciate you writing this. To me, this clarifies something. (I'm sorry there's a rant incoming, and if this community needs its hand held through these particular revelations, I'm not the one):
It seems like many EAs still (despite SBF) didn't put significant probability on the person from that particular Time incident being a very well-known and trusted man in EA, such as Owen. This despite the SBF scandal, and despite (to me) this incident being the most troubling incident in the Time piece by far, which definitely sounded to be attached to a "real" EA more than any of the others (I say as someone who still has significant problems with the Time piece). Some of us had already put decent odds on the probability that this was an important figure doing something that was at least thoughtless and ended up damaging the EA movement... I mean, the woman who reported him literally tried to convey that he was very well-connected and important.
It seems like the community still has a lot to learn from the surprise of SBF about problematic incidents and leaders in general: No one expects their friends or leaders are gonna be the ones who do problematic things. That includes us. Update no...
[Epistemic status: I've done a lot of thinking about these issues previously; I am a female mathematician who has spent several years running mentorship/support groups for women in my academic departments and has also spent a few years in various EA circles.]
I wholeheartedly agree that EA needs to improve with respect to professional/personal life mixing, and that these fuzzy boundaries are especially bad for women. I would love to see more consciousness and effort by EA organizations toward fixing these and related issues. In particular I agree with the following:
> Not having stricter boundaries for work/sex/social in mission focused organizations brings about inefficiency and nepotism [...]. It puts EA at risk of alienating women / others due to reasons that have nothing to do with ideological differences.
However, I can't endorse the post as written, because there are a lot of claims made which I think are wrong or misleading. Like: Sure, there are poly women who'd be happier being monogamous, but there are also poly men who'd be happier being monogamous, and my own subjective impression is that these are about equally common. Also, "EA/rationalism and redpill fit like yin and y...
We (the Community Health team at CEA) would like to share some more information about the cases in the TIME article, and our previous knowledge of these cases. We’ve put these comments in the approximate order that they appear in the TIME article.
Re: Gopalakrishnan’s experiences
We read her post with concern. We saw quite a few supportive messages from community members, and we also tried to offer support. Our team also reached out to Gopalakrishnan in a direct message to ask if she was interested in sharing more information with us about the specific incidents.
Re: The man who
We don’t know this person’s identity for sure, but one of these accounts resembles a previous public accusation made against a person who used to be involved in the rationality community. He has been banned from CEA events for almost 5 years, and we understand he has been banned from some other EA spaces. He has been a critic of the EA movemen...
Brief update: I am still in the process of reading this. At this point I have given the post itself a once-over, and begun to read it more slowly (and looking through the appendices as they're linked).
I think any and all primary sources that Kat provides are good (such as the page of records of transactions). I am also grateful that they have not deanonymized Alice and Chloe.
I plan to compare the things that this post says directly against specific claims in mine, and acknowledge anything where I was factually inaccurate. I also plan to do a pass where I figure out which claims of mine this post responds to and which it doesn’t, and I want to reflect on the new info that’s been entered into evidence and how it relates to the overall picture.
It probably goes without saying that I (and everyone reading) want to believe true things and not false things about this situation. If I made inaccurate statements I would like to know that and correct them.
As I wrote in my follow-up post, I am not intending to continue spear-heading an investigation into Nonlinear. However this post makes some accusations of wrongdoing on my part, which I intend to respond to, and of course for...
I had missed that; thank you for pointing it out!
While using quotation marks for paraphrase or when recounting something as best as you recall is occasionally done in English writing, primarily in casual contexts, I think it's a very poor choice for this post. Lots of people are reading this trying to decide who to trust, and direct quotes and paraphrase have very different weight. Conflating them, especially in a way where many readers will think the paraphrases are direct quotes, makes it much harder for people to come away from this document with a more accurate understanding of what happened.
Perhaps using different markers (ex: "«" and "»") for paraphrase would make sense here?
I am one of the people mentioned in the article. I'm genuinely happy with the level of compassion and concern voiced in most of the comments on this article. Yes, while a lot of the comments are clearly concerned that this is a hard and difficult issue to tackle, I’m appreciative of the genuine desire of many people to do the right thing here. It seems that at least some of the EA community has a drive towards addressing the issue and improving from it rather than burying the issue as I had feared.
A couple of points, my spontaneous takeaways upon reading the article and the comments:
- This article covers bad actors in the EA space, and how hard it is to protect the community from them. This doesn't mean that all of EA is toxic, but rather the article is bringing to light the fact that bad actors have been tolerated and even defended in the community to the detriment of their victims. I'm sensing from the comments that non-Bay Area EA may have experienced less of this phenomenon. If you read this article and are absolutely shocked and disgusted, then I think you experienced a different selection of EA than I have. I know many of my peers will read this article and feel unc...
I feel like this post mostly doesn't talk about what feels to me like the most substantial downside of trying to scale up spending in EA, and increased availability of funding.
I think the biggest risk of the increased availability of funding, and general increase in scale, is that it will create a culture where people will be incentivized to act more deceptively towards others and that it will attract many people who will be much more open to deceptive action in order to take resources we currently have.
Here are some paragraphs from an internal memo I wrote a while ago that tried to capture this:
========
Reading this, I guess I'll just post the second half of this memo that I wrote here as well, since it has some additional points that seem valuable to the discussion:
Over the course of my working in EA for the last 8 years, I feel like I've seen about a dozen instances where Will made quite substantial tradeoffs where he traded off both the health of the EA community, and something like epistemic integrity, in favor of being more popular and getting more prestige.
Some examples here include:
- When he was CEO while I was at CEA he basically didn't really do his job at CEA but handed off the job to Tara (who was a terrible choice for many reasons, one of which is that she then co-founded Alameda and after that went on to start another fraudulent-seeming crypto trading firm as far as I can tell). He then spent like half a year technically being CEO but spending all of his time being on book tours and talking to lots of high net-worth and high-status people.
- I think Doing Good Better was already substantially misleading about the methodology that the EA community has actually historically used to find top interventions. Indeed it was very "randomista" flavored in a way that I think really set up a lot of people to be disappointed when they encountered EA and then realized that actual cause prioritization is nowhere close to this formalized an...
Fwiw I have little private information but think that:
Also thanks Habryka for writing this. I think surfacing info like this is really valuable and I guess it has personal costs to you.
James courteously shared a draft of this piece with me before posting; I really appreciate that, as well as his substantive, constructive feedback.
1. I blundered
The first thing worth acknowledging is that he pointed out a mistake that substantially changes our results. And for that, I’m grateful. It goes to show the value of having skeptical external reviewers.
He pointed out that Kemp et al. (2009) finds a negative effect, while we recorded its effect as positive — meaning we coded the study as having the wrong sign.
What happened is that MH outcomes are often "higher = bad", and subjective wellbeing is "higher = better", so we note this in our code so that all effects that imply benefits are positive. What went wrong was that we coded Kemp et al. (2009), which used the GHQ-12, as "higher = bad" (which is usually the case) when the opposite was true. Higher equalled good in this case because we had to do an extra calculation to extract the effect [footnote: since there was baseline imbalance in the PHQ-9, we took the difference in pre-post changes], which flipped the sign.
This correction would reduce the spillover effect from 53% to 38% and reduce the cost-effectiveness comparison from 9.5...
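To make the sign-convention issue concrete, here is a minimal sketch of the kind of harmonisation step being described. This is not HLI's actual pipeline; the column names, numbers, and the `higher_is_bad` flag are hypothetical, and the point is only that mislabelling one study's flag flips the sign of its contribution.

```python
# Minimal illustrative sketch (not HLI's actual code); data and column names are hypothetical.
import pandas as pd

effects = pd.DataFrame({
    "study": ["Study A", "Study B", "Kemp et al. (2009)"],
    "raw_g": [0.40, 0.25, -0.30],  # effect sizes as extracted from each paper
    # True if the instrument scores distress (higher = worse), e.g. GHQ-12 or PHQ-9.
    # Mislabelling this flag for one study is the kind of error described above.
    "higher_is_bad": [True, True, True],
})

# Harmonise signs so that a positive value always means "benefit".
effects["g_benefit"] = effects["raw_g"].where(~effects["higher_is_bad"], -effects["raw_g"])

print(effects)
```

In a pipeline like this, a single wrong boolean silently flips one study's sign, which is why correcting the coding moves the pooled spillover and cost-effectiveness figures.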
Strong upvote for both James and Joel for modeling a productive way to do this kind of post -- show the organization a draft of the post first, and give them time to offer comments on the draft + prepare a comment for your post that can go up shortly after the post does.
Thank you Max for your years of dedicated service at CEA. Under your leadership as Executive Director, CEA grew significantly, increased its professionalism, and reached more people than it had before. I really appreciate your straightforward but kind communication style, humility, and eagerness to learn and improve. I'm sorry to see you go, and wish you the best of luck in whatever comes next.
Predictably, I disagree with this in the strongest possible terms.
If someone says false and horrible things to destroy other people's reputation, the story is "someone said false and horrible things to destroy other people's reputation". Not "in some other situation this could have been true". It might be true! But discussion around the false rumors isn't the time to talk about that.
Suppose the shoe was on the other foot, and some man (Bob), made some kind of false and horrible rumor about a woman (Alice). Maybe he says that she only got a good position in her organization by sleeping her way to the top. If this was false, the story isn't "we need to engage with the ways Bob felt harmed and make him feel valid." It's not "the Bob lied lens is harsh and unproductive". It's "we condemn these false and damaging rumors". If the headline story is anything else, I don't trust the community involved one bit, and I would be terrified to be associated with it.
I understand that sexual assault is especially scary, and that it may seem jarring to compare it to less serious accusations like Bob's. But the original post says we need to express emotions more, and I wanted to try to convey an emot...
I think a very relevant question to ask is how none of the showy self-criticism contests and red-teaming exercises came up with this. A good amount of time, money, and energy was put into such things, and if the exercises are not in fact uncovering the big problems lurking in the movement, then that suggests some issues.
Am I right in thinking that, if it weren't for the Time article, there's no reason to think that Owen would ever have been investigated and/or removed from the board?
While this is all being sorted and we figure out what is next, I would like to emphasize wishes of wellness and care for the many impacted by this.
Note: The original post was edited to clarify the need for compassion and to remove anything resembling “tribalism,” including a comment of thanks, which may be referenced in comments.
[Edit: this was in response to the original version of the parent comment, not the new edited version]
Strong -1, the last line in particular seems deeply inappropriate given the live possibility that these events were caused by large-scale fraud on the part of FTX, and I'm disappointed that so many people endorsed it. (Maybe because the reasons to suspect fraud weren't flagged in original post?) At a point where the integrity of leading figures in the movement has been called into question, it is particularly important that we hold ourselves to high standards rather than reflexively falling back on tribalist instincts.
I am worried and sad for all involved, but I am especially concerned for the wellbeing and prospects of the ~millions of people—often vulnerable retail investors—who may have taken on too much exposure to crypto in general.
Many people like this must be extremely stressed right now. As with many financial meltdowns, some individuals and families will endure severe hardship, such as the breakdown of relationships, the loss of life savings, even the death of loved ones.
I don't really follow crypto so I know roughly nothing about the role SBF, FTX and Alameda have played in this ecosystem. My impression is that they've been ok/good on at least some dimensions of protecting vulnerable investors. But—let's see how things look, overall, when the dust settles.
[As is always the default, but perhaps worth repeating in sensitive situations, my views are my own and by default I'm not speaking on behalf of the Open Phil. I don't do professional grantmaking in this area, haven't been following it closely recently, and others at Open Phil might have different opinions.]
I'm disappointed by ACE's comment (I thought Jakub's comment seemed very polite and even-handed, and not hostile, given the context, nor do I agree with characterizing what seems to me to be sincere concern in the OP just as hostile) and by some of the other instances of ACE behavior documented in the OP. I used to be a board member at ACE, but one of the reasons I didn't seek a second term was because I was concerned about ACE drifting away from focusing on just helping animals as effectively as possible, and towards integrating/compromising between that and human-centered social justice concerns, in a way that I wasn't convinced was based on open-minded analysis or strong and rigorous cause-agnostic reasoning. I worry about this dynamic leading to an unpleasant atmosphere for those with different perspectives, and decreasing the extent to whi...
Thanks for writing this post. It looks like it took a lot of effort that could have been spent on much more enjoyable activities, including your mainline work.
This isn’t a comment on the accuracy of the post (though it was a moderate update for me). I could imagine nonlinear providing compelling counter evidence over the next few days and I’d of course try to correct my beliefs in light of new evidence.
Posts like this one are a public good. I don’t think anyone is particularly incentivised to write them, and they seem pretty uncomfortable and effortful, but I believe they serve an important function in the community by helping to root out harmful actors and disincentivising harmful acts in the first place.
I read this post and about half of the appendix.
(1) I updated significantly in the direction of "Nonlinear leadership has a better case for themselves than I initially thought" and "it seems likely to me that the initial post indeed was somewhat careless with fact-checking."
(I'm still confused about some of the fact-checking claims, especially the specific degree to which Emerson flagged early on that there were dozens of extreme falsehoods, or whether this only happened when Ben said that he was about to publish the post. Is it maybe possible that Emerson's initial reply had little else besides "Some points still require clarification," and Emerson only later conveyed how strongly he disagreed with the overall summary once he realized that Ben was basically set on publishing on a 2h notice? If so, that's very different from Ben being told in the very first email reply that Nonlinear's stance on this is basically "good summary, but also dozens of claims are completely false and we can document that." That's such a stark difference, so it feels to me like there was miscommunication going on.)
At the same time:
(2) I still find Chloe's broad perspective credible and concerning (in a "...
I'm a professional nanny and I've also held household management positions. I just want to respond to one specific thing here that I have knowledge about.
It is upsetting to see a "lesson learned" as only hiring people with experience as an assistant, because a professional assistant would absolutely not work with that compensation structure.
It is absolutely the standard in professional assistant-type jobs that, when traveling with the family, your travel expenses are NOT part of your compensation.
When traveling for work (including for families that travel for extensive periods of time) the standard for professionals is:
- Your work hours start when you arrive at the airport. (Yes, you charge for travel time.)
- You charge your full, standard hourly rate for all hours worked.
- You ALSO charge a per diem because you are leaving the comfort of being in your own home / being away from friends and pets and your life.
- You are ONLY expected to work for the hours tha...
This got a lot of upvotes so I want to clarify that this kind of arrangement isn't UNUSUALLY EVIL. Nanny forums are filled with younger nannies or more desperate nannies who get into these jobs only to immediately regret it.
When people ask my opinion about hiring nannies I constantly have to show how things they think are perks (live in, free tickets to go places with the kids) don't actually hold much value as perks. Because it is common for people to hold that misconception.
It is really common for parents and families to offer jobs that DON'T FOLLOW professional standards. In fact the majority of childcare jobs don't. The educated professionals don't take those jobs. The families are often confused why they can't find good help that stays.
So I look at this situation and it immediately pattern matches to what EDUCATED PROFESSIONALS recognize as a bad situation.
I don't think that means that NL folks are inherently evil. What they wanted was a common thing for people to want. The failure modes are the predictable failure modes.
I think they hold culpability. I think they "should have" known better. I don't think (based on this) that they are evil. I think some of their responses aren't the most ideal, but also shoot it's a LOT of pressure to have the whole community turning on you and they are responding way better than I would be able to.
From the way they talk, I don't think they learned the lessons I would hope they had, and that's sad. But it's hard to really grow when you're in a defensive position.
Would someone from CEA be able to comment on this incident?
'A third described an unsettling experience with an influential figure in EA whose role included picking out promising students and funneling them towards highly coveted jobs. After that leader arranged for her to be flown to the U.K. for a job interview, she recalls being surprised to discover that she was expected to stay in his home, not a hotel. When she arrived, she says, “he told me he needed to masturbate before seeing me.”'
Was this 'influential figure in EA' reported to Community Health, and if so, what were the consequences?
[Caveat: Assuming this is an influential EA, not a figure who has influence in EA but wouldn't see themselves as part of the community.]
One thing that bugged me when I first got involved with EA was the extent to which the community seemed hesitant to spend lots of money on stuff like retreats, student groups, dinners, compensation, etc. despite the cost-benefit analysis seeming to favor doing so pretty strongly. I know that, from my perspective, I felt like this was some evidence that many EAs didn't take their stated ideals as seriously as I had hoped—e.g. that many people might just be trying to act in the way that they think an altruistic person should rather than really carefully thinking through what an altruistic person should actually do.
This is in direct contrast to the point you make that spending money like this might make people think we take our ideals less seriously—at least in my experience, had I witnessed an EA community that was more willing to spend money on projects like this, I would have been more rather than less convinced that EA was the real deal. I don't currently have any strong beliefs about which of these reactions is more likely/concerning, but I think it's at least worth pointing out that there is definitely an effect in the opposite direction to the one that you point out as well.
I closely read the whole post and considered it carefully. I'm struggling to sum up my reaction to this 15,000-word piece in a way that's concise and clear.
At a high level:
Even if most of what Kat says is factually true, this post still gives me really bad vibes and makes me think poorly of Nonlinear.
Let me quickly try to list some of the reasons why (if anyone wants me to elaborate or substantiate any of these, please reply and ask):
- Confusion, conflation, and prevarication between intent and impact.
- Related to the above, the self-licensing, i.e. we are generally good people and generally do good things, so we don't need to critically self-reflect on particular questionable actions we took.
- The varyingly insensitive, inflammatory, and sensationalist use of the Holocaust poem (truly offensive) and the terms "lynching" (also offensive) and "witch-burning".
- Conflation between being depressed and being delusional.
- Glib dismissal of other people's feelings and experiences.
- The ridiculous use of "photographic evidence", which feels manipulative and/or delusional to me.
- Seeming to have generally benighted views on trauma, abuse, power dynamics, boundaries, mental health, "victimhood", resilience, ...
In my experience, observing someone getting dogpiled and getting dogpiled yourself feel very different. Most internet users have seen others get dogpiled hundreds of times, but may never have been dogpiled themselves.
Even if you have been dogpiled yourself, there's a separate skill in remembering what it felt like when you were dogpiled, while observing someone else getting dogpiled. For example, every time I got dogpiled myself, I think I would've greatly appreciated if someone reached out to me via PM and said "yo, are you doing OK?" But it has never occurred to me to do this when observing someone else getting dogpiled -- I just think to myself "hm, seems like a pretty clear case of unfair dogpiling" and close the tab.
In any case, I've found getting dogpiled myself to be surprisingly stressful, relative to the experience of observing it -- and I usually think of myself as fairly willing to be unpopular. (For example, I once attended a large protest as the only counter-protester, on my own initiative.)
It's very easy to say in the abstract: "If I was getting dogpiled, I would just focus on the facts. I would be very self-aware and sensitive, I wouldn't dismiss anyone, I wouldn't...
I agree with this. I think overall I get a sense that Kat responded in just the sort of manner that Alice and Chloe feared*, and that the flavor of treatment that Alice and Chloe (as told by Ben) said they experienced from Kat/Emerson seems to be on display here. (* Edit: I mean, Kat could've done worse, but it wouldn't help her/Nonlinear.)
I also feel like Kat is misrepresenting Ben's article? For example, Kat says
I just read that article and don't remember any statement to that effect, and searching for individual words in this sentence didn't lead me to a similar sentence in Ben's article or in Chloe's follow-up. I think the closest thing is this part:
My read on this is that a lot of the things in Ben's post are very between-the-lines rather than outright stated. For example, the financial issues all basically only matter if we take for granted that the employees were tricked or manipulated into accepting lower compensation than they wanted, or were put in financial hardship.
Which is very different from the situation Kat's post seems to show. Like... I don't really think any of the financial points made in the first one hold up, and without those, what's left? A She-Said-She-Said about what they were asked to do and whether they were starved and so on, which NL has receipts for.
[Edit after response below: By "hold up" I meant in the emotional takeaway of "NL was abusive," to be clear, not on the factual "these bank account numbers changed in these ways." To me hiring someone who turns out to be financially dependent into a position like this is unwise, not abusive. If someone ends up in the financial red in a situation where they are having their living costs covered and being paid a $1k monthly stipend... I am not rushing to pass judgement on them, I am just noting that this seems like a bad fit for this sort of position, which...
I think it is worth appreciating the number and depth of insights that FHI can claim significant credit for. In no particular order:
Note especially how much of the literal terminology was coined on (one imagines) a whiteboard in FHI. “Existential risk” isn't a neologism, but I understand it was Nick who first suggested it be used in a principled way to point to the “loss of potential” thing. “Existential hope”, “vulnerable world”, “unilateralist's curse”, “information hazard”, all (as fa...
It is very generous to characterise Torres' post as insightful and thought provoking. He characterises various long-termists as white supremacists on the flimsiest grounds imaginable. This is a very serious accusation and one that he very obviously throws around due to his own personal vendettas against certain people, e.g. despite many of his former colleagues at CSER also being long-termists he doesn't call them Nazis because he doesn't believe they have slighted him. Because I made the mistake of once criticising him, he spent much of the last two years calling me a white supremacist, even though the piece of mine he cited did not even avow belief in long-termism.
A quick point of clarification that Phil Torres was never staff at CSER; he was a visitor for a couple of months a few years ago. He has unfortunately misrepresented himself as working at CSER on various media (unclear if deliberate or not). (And FWIW he has made similar allusions, albeit thinly veiled, about me).
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's incubator programme this summer!
The summer 2023 incubator round is focused on biosecurity and scalable global health charities and I'm really excited to see what's the best fit for me and hopefully launch a new charity. The ideas that the research team have written up look really exciting and I'm trepidatious about the challenge of being a founder but psyched for getting started. Watch this space! <3
I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+ advising calls I did and feel very privileged I got to talk to so many people and try and help them along their careers!
I've learned so much during my time at 80k. And the team at 80k has been wonderful to work with - so thoughtful, committed to working out what is the right thing to do, kind, and fun - I'll for sure be sad to leave them.
There are a few main reasons why I'm leaving now:
- New career challenge - I want to try out something that stretches my skills beyond what I've done before. I think I could be a good fit for being a founder and running something big and complicated and valuable that wouldn't exist without me - I'd like t...
Overall this post seems like a grab-bag of not very closely connected suggestions. Many of them directly contradict each other. For example, you suggest that EA organizations should prefer to hire domain experts over EA-aligned individuals. And you also suggest that EA orgs should be run democratically. But if you hire a load of non-EAs and then you let them control the org... you don't have an EA org any more. Similarly, you bemoan that people feel the need to use pseudonyms to express their opinions and a lack of diversity of political beliefs ... and then criticize named individuals for being 'worryingly close to racist, misogynistic, and even fascist ideas' in essentially a classic example of the cancel culture that causes people to choose pseudonyms and causes the movement to be monolithically left wing.
I think this is in fact a common feature of many of the proposals: they generally seek to reduce what is differentiated about EA. If we adopted all these proposals, I am not sure there would be anything very distinctive remaining. We would simply be a tiny and interchangeable part of the amorphous blob of left wing organizations.
It is true this does not apply to all of th...
Hey Maya, I'm Catherine - one of the contact people on CEA's community health team (along with Julia Wise). I'm so so sorry to hear about your experiences, and the experiences of your friends. I share your sadness and much of your anger too. I’ll PM you, as I think it could be helpful for me to chat with you about the specific problems (if you are able to share more detail) and possible steps.
If anyone else reading this comment has encountered similar problems in the EA community, I would be very grateful to hear from you too. Here is more info on what we do.
Ways to get in touch with Julia and me:
HLI kindly provided me with an earlier draft of this work to review a couple of weeks ago. Although things have gotten better, I noted what I saw as major problems with the draft as-is, and recommended HLI take its time to fix them - even though this would take a while, and likely miss the window of Giving Tuesday.
Unfortunately, HLI went ahead anyway with the problems I identified basically unaddressed. Also unfortunately (notwithstanding laudable improvements elsewhere) these problems are sufficiently major I think potential donors are ill-advised to follow the recommendations and analysis in this report.
In essence:
- Issues of study quality loom large over this literature, with a high risk of materially undercutting the results (they did last time). The report's interim attempts to manage these problems are inadequate.
- Pub bias corrections are relatively mild, but only when all effects g > 2 are excluded from the analysis - they are much starker (albeit weird) if all data is included. Due to this, the choice to exclude 'outliers' roughly trebles the bottom line efficacy of PT. This analysis choice is dubious on its own merits, was not pre-specified in the protocol, yet is onl...
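Purely to illustrate the mechanics of that exclusion choice (made-up numbers, not HLI's data or this critique's analysis), here is how a naive pooled estimate moves depending on whether large effects are dropped:

```python
# Schematic illustration with made-up numbers; real meta-analyses use inverse-variance
# weighting, and the interaction with publication-bias corrections is the substantive issue here.
import statistics

effect_sizes = [0.3, 0.5, 0.8, 1.1, 2.6, 3.4]  # hypothetical Hedges' g values

pooled_all = statistics.mean(effect_sizes)
pooled_trimmed = statistics.mean([g for g in effect_sizes if g <= 2])  # 'outliers' excluded

print(f"Pooled, all studies:       {pooled_all:.2f}")
print(f"Pooled, effects g > 2 out: {pooled_trimmed:.2f}")
```

The sketch only shows that the exclusion rule itself shifts the headline number; whether excluding those effects is justified is the methodological question being raised above.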
I want to take this opportunity to thank the people who kept FHI alive for so many years against such hurricane-force headwinds. But I also want to express some concerns, warnings, and--honestly--mixed feelings about what that entailed.
Today, a huge amount of FHI's work is being carried forward by dozens of excellent organizations and literally thousands of brilliant individuals. FHI's mission has replicated and spread and diversified. It is safe now. However, there was a time when FHI was mostly alone and the ember might have died from the shockingly harsh winds of Oxford before it could light these thousands of other fires.
I have mixed feelings about encouraging the veneration of FHI ops people because they made sacrifices that later had terrible consequences for their physical and mental health, family lives, and sometimes careers--and I want to discourage others from making these trade-offs in the future. At the same time, their willingness to sacrifice so much, quietly and in the background, because of their sincere belief in FHI's mission--and this sacrifice paying off with keeping FHI alive long enough for its work to spread--is something for which I am incr...
For me, it’s been stuff like:
- People (generally those who prioritize AI) describing global poverty as “rounding error”.
- From late 2017 to early 2021, effectivealtruism.org (the de facto landing page for EA) had at least 3 articles on longtermist/AI causes (all listed above the single animal welfare article), but none on global poverty.
- The EA Grants program granted ~16x more money to longtermist projects than to global poverty and animal welfare projects combined. [Edit: this statistic only refers to the first round of EA Grants, the only round for which grant data has been published.]
- The EA Handbook 2.0 heavily emphasized AI relative to global poverty and animal welfare. As one EA commented: “By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity. I'd find it hard for anyone to read this and not take away that...
Some things from EA Global London 2022 that stood out for me (I think someone else might have mentioned one of them):
These things might feel small, but considering this is one of the main EA conferences, having the actual conference organisers associate so strongly with the promotion of a longtermist (albeit yes, also one of the main founders of EA) book made me think "Wow, CEA is really trying to push longtermism to attendees". This seems quite reasonable given the potential significance of the book; I just wonder if CEA have done this for any other worldview-focused books recently (last 1-3 years) or would do so in the future, e.g. a new book on animal farming.
Curious to get someone else's take on this or if it just felt important in my head.
Other small things:
As the ma... (read more)
Thanks for the update! Are there any plans to release the list of sub areas? I couldn't see it in this post or the blog post, and it seems quite valuable for other funders, small donors (like me!) and future grantees/org founders to know which areas might now be less well funded.
I think most people reading this thread should totally ignore this story for at least 2 weeks. Meantime: get back to work.
For >90% of readers, I suspect:
I think this is true even of most people who have a bunch of crypto and/or are FTX customers, but that's more debatable and depends on exposure.
These are the standard problems with following almost any BREAKING NEWS story (e.g. an election night, a stock market event, an ongoing tragedy).
Agree, but still find it hard to stop watching? You are glued to your screen and this is unhelpful. This is an opportunity to practice the skill of ignoring stuff that isn't action-relevant, and allocating your attention effectively.
Not actively trading crypto or related assets? Just ignore this story for a while, and get back to work.
Added 2022-11-09 2200 GMT:
If I had a good friend who has a lot of crypto and who may be concerned about losing more than they can afford to lose, I would call them.
Given what I'm seeing online, the situation looks grim for people with big exposure to crypto in general, and those with deposits at FTX in particular.
(To repeat what I said in other comments on this post: I don't follow crypto closely. My takes are not investment advice.)
Peter -- I have mixed feelings about your advice, which is well-expressed and reasonable.
I agree that, typically, it's prudent not to get caught up in news stories that involve high uncertainty, many rumors, and unclear long-term impact.
However, a crucial issue for the EA movement is whether there will be a big public relations blowback against EA from the FTX difficulties. If there's significant risk of this blowback, EA leadership had better develop a proactive plan for dealing with the PR crisis -- and quickly.
The FTX crisis is a Very Big Deal in crypto -- one of the worst crises ever. Worldwide, about 300 million people own crypto. Most of them have seen dramatic losses in the value of their tokens recently. On paper, at least, they have lost a couple of hundred billion dollars in the last couple of days. Most investors are down at least 20% this week because of this crisis. Even if prices recover, we will never forget how massive this drop has been.
Sam Bankman-Fried (SBF) himself has allegedly lost about 94% of his net worth this week, down from $15 billion to under $1 billion. (I don't give much credence to these estimates, but it's pretty clear the losses have been ve... (read more)
Hmm, I don't really buy this. I think at Lightcone I am likely to delay any major expenses for a few weeks and make decisions assuming a decent chance we will have a substantial funding crunch. We have a number of very large expenses coming up, and ignoring this would I think cause us to make substantially worse choices.
In a post this long, most people are probably going to find at least one thing they don't like about it. I'm trying to approach this post as constructively as I can, i.e. "what I do find helpful here" rather than "how I can most effectively poke holes in this?" I think there's enough merit in this post that the constructive approach will likely yield something positive for most people as well.
High-Impact Athletes ➔ EA Sports for obvious reasons
I want to explain my role in this situation, and to apologize for not handling it better. The role I played was in the context of my work as a community liaison at CEA.
(All parts that mention specific people were run past those people.)
In 2021, the woman who described traveling to a job interview in the TIME piece told me about her interactions with Owen Cotton-Barratt several years before. She said she found many aspects of his interactions with her to be inappropriate.
We talked about what steps she wanted taken. Based on her requests, I had conversations with Owen and some of his colleagues. I tried to make sure that Owen understood the inappropriateness of his behavior and that steps were taken to reduce the risk of such things happening again. Owen apologized to the woman. The woman wrote to me to say that she felt relieved and appreciated my help. Later, I wrote about power dynamics based partly on this situation.
However, I think I didn’t do enough to address the risk of his behavior continuing in other settings. I didn’t pay enough attention to what other pieces might need addressing, like the fact that, by the time I learned about the situation, he was on the boar... (read more)
Julia, I really appreciate you explaining your role here. I feel uneasy about the framing of what I've read. It sounds like the narrative is "Owen messed up, Julia knew, and Julia messed up by not saying more". But I feel strongly that we shouldn't have one individual as a point of failure on issues this important, especially not as recently as 2021. I think the narrative should be something closer to "Owen messed up, and CEA didn't (and still doesn't) have the right systems in place to respond to these kinds of concerns".
I appreciate you sharing this additional info and reflections, Julia.
I notice you mention being friends with Owen, but, as far as I can tell, the post, your comment, and other comments don't highlight that Owen was on the board of (what's now called) EV UK when you learned about this incident and tried to figure out how to deal with it, and EV UK was the umbrella organization hosting the org (CEA) that was employing you (including specifically for this work).[1] This seems to me like a key potential conflict of interest, and like it may have warranted someone outside CEA being looped in to decide what to do about this incident. At first glance, I feel confused about this not having been mentioned in these comments. I'd be curious to hear whether you explicitly thought about that when you were thinking about this incident in 2021?
That is, if I understand correctly, in some sense Owen had a key position of authority in an organization that in turn technically had authority over the organization you worked at. That said, my rough impression from the outside is that, prior to November 2022, the umbrella organization in practice exerted little influence over what the organiza... (read more)
Reflections on a decade of trying to have an impact
Next month (September 2024) is my 10th anniversary of formally engaging with EA. This date marks 10 years since I first reached out to the Foundational Research Institute about volunteering, at least as far as I can tell from my emails.
Prior to that, I probably had read a fair amount of Peter Singer, Brian Tomasik, and David Pearce, who might all have been considered connected to EA, but I hadn’t actually actively tried engaging with the community. I’d been engaged with the effective animal advocacy community for several years prior, and I think I’d volunteered for The Humane League some, and had seen some of The Humane League Labs’ content online. I’m not sure if The Humane League counted as being “EA” at the time (this was a year before OpenPhil made its first animal welfare grants).
This post is me roughly trying to guess at my impact since then, and reflections on how I’ve changed as a person, both on my own and in response to EA. It’s got a lot of broad reflections about how my feelings about EA have changed. It isn’t particularly rigorously or transparently reasoned — it’s more of a reflection exercise for myself than anything... (read more)
(Hi, I'm Emily, I lead GHW grantmaking at Open Phil.)
Thank you for writing this critique, and giving us the chance to read your draft and respond ahead of time. This type of feedback is very valuable for us, and I’m really glad you wrote it.
We agree that we haven’t shared much information about our thinking on this question. I’ll try to give some more context below, though I also want to be upfront that we have a lot more work to do in this area.
For the rest of this comment, I’ll use “FAW” to refer to farm animal welfare and “GHW” to refer to all the other (human-centered) work in our Global Health and Wellbeing portfolio.
To date, we haven’t focused on making direct comparisons between GHW and FAW. Instead, we’ve focused on trying to equalize marginal returns within each area and do something more like worldview diversification to determine allocations across GHW, FAW, and Open Philanthropy’s other grantmaking. In other words, each of GHW and FAW has its own rough “bar” that an opportunity must clear to be funded. While our frameworks allow for direct comparisons, we have not stress-tested consistency for that use case. We’re also unsure conceptually whether we should be... (read more)
Hi Emily,
Thanks so much for your engagement and consideration. I appreciate your openness about the need for more work in tackling these difficult questions.
Holden has stated that "It seems unlikely that the ratio would be in the precise, narrow range needed for these two uses of funds to have similar cost-effectiveness." As OP continues researching moral weights, OP's marginal cost-effectiveness estimates for FAW and GHW may eventually differ by several orders of magnitude. If this happens, would OP substantially update their allocations between FAW and GHW?
Along with OP's neartermist cause prioritization, your comment seems to imply that OP's moral weights are 1-2 orders of magnitude lower than Rethink's. If that's true, that is a massive difference which (depending upon the details) could have big implications for how EA should allocate resources between FAW charities (e.g. chickens vs shrimp) as well as between F... (read more)
I'm not taking a position on the question of whether Nick should stay on as Director, and as noted in the post I'm on record as having been unhappy with his apology (which remains my position)*, but for balance and completeness I'd like to provide a perspective on the importance of Nick's leadership, at least in the past.
I worked closely with Nick at FHI from 2011 to 2015. While I've not been at FHI much in recent years (due to busyness elsewhere) I remember the FHI of that time being a truly unique-in-academia place; devoted to letting and helping brilliant people think about important challenges in unusual ways. That was in very large part down to Nick - he is visionary, and remarkably stubborn and difficult - with the benefits and drawbacks this comes with. It is difficult to overstate the degree of pressure in academia to pull you away from doing something unique and visionary and to instead do more generic things, put time into impressing committees, keeping everyone happy etc**. - It's that stubbornness (combined with the vision) in my view that allowed FHI to come into being and thrive (at least for a time). It is (in my view) the same stubbornness and difficultness t... (read more)
I hope others will join me in saying: thank you for your years serving as the friendly voice of the Forum, and best of luck at Open Philanthropy!
As someone deeply involved in politics in Oregon (I am a house district leader in one of the districts Flynn would have been representing, co-chair of the county Democratic campaign committee, and chair of a local Democratic group that focuses on policy and local electeds and that sponsored a forum in which Flynn participated), I feel that much of the discussion on this site about Carrick Flynn lacks basic awareness of what the campaign looked like on the ground. I also have some suggestions about how the objectives you work for might be better achieved.
First, Flynn remained an enigma to the voters. In spite of more advertising than had ever been seen before in a race (there were often three ads in a single hour of television programming), his history and platform were unclear. While many of the ads came from Protect our Future PAC, Flynn had multiple opportunities to clarify these and failed. Statements such as “He directed a billion dollars to health programs to save children’s lives and removed a legal barrier that may have cost several thousand more lives,” featured on his website, led people to come to me and ask “What did he do to accomplish this? Who was he... (read more)
I'm grateful that Cari and I met Holden when we did (and grateful to Daniela for luring him to San Francisco for that first meeting). The last fourteen years of our giving would have looked very different without his work, and I don't think we'd have had nearly the same level of impact — particularly in areas like farm animal welfare and AI that other advisors likely wouldn't have mentioned.
Update:
FLI have released a full statement on their website here, and there is an FAQ post on that statement where discussion has mostly moved to on the Forum. I will respond to these updates there, and otherwise leave this post as-is (for now).
However, it looks like an 'ignorance-based' defence is the correct interpretation of what happened here. I don't regret this post - I still think it was important, and got valuable information out there. I also think that emotional responses should not be seen as 'wrong'. Nevertheless, I do have some updating to do, and I thank all commenters in the thread below.
I have also made some retractions, with explanations in the footnotes
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Epistemic Status: Unclear, but without much reason to dispute the factual case presented by Expo. As I wrote this comment, an ignorance-based defence seemed less and less convincing, and consequently my anger rose. I apologise if this means the post is of a lower tone than the forum is used to. I will also happily correct or retract this post partially or fully if better evidence is provided.
[Clarity Edit:... (read more)
My only substantive disagreement with this comment (which I upvoted) is that I don't think FLI is a major actor in EA; they've always kind of done their own thing, and haven't been a core player within the EA community. I view them more as an independent actor with somewhat aligned goals.
Thanks so much for writing this Will! I can't emphasise enough how much I appreciate it.
Two norms that I'd really like to see (that I haven't seen enough of) are:
1. Funders being much more explicit to applicants about why things aren't funded (or why they get less funding than asked for). Even a simple tagging system like "out of our funding scope", "seemed too expensive", "not targeted enough", or "promising (review and resubmit)" (with a short line about why) is explicit yet simple.
2. More funder diversity while maintaining close communications (e.g. multiple funders with different focus areas/approaches/epistemics, but single application form to apply to multiple funders and those funders sharing private information such as fraud allegation etc).
I know feedback is extremely difficult to do well (and there are risks in giving feedback), but I think that lack of feedback creates a lot of problems, e.g.:
- resentment and uneasiness towards funders within the community;
- the unilateralist's curse is exacerbated (in cases where something is not funded because it's seen
... (read more)1) One way to see the problem is that in the past we used frugality as a hard-to-fake signal of altruism, but that signal no longer works.
I'm not sure that's an entirely bad thing, because frugality seems mixed as a virtue e.g. it can lead to:
However, we need new hard-to-fake signals of seriousn... (read more)
From the article:
This rang a bell for me, and I was able to find an old Twitter thread (link removed on David's request) naming the man in question. At least, all the details seem to match.
I'm pretty sure that the man in question (name removed on David's request) has been banned from official EA events for many years. I remember an anecdote about him showing up without a ticket at EAG in the past and being asked to leave. As far as I know, the ban is because he has a long history of harassment with at least some assault mixed in.
I don't know who introduced him to Sonia Joseph, but if she'd mentioned him to the people I know in EA, I think the average reaction would have been "oh god, don't". I guess there are still bubbles I'm not a part of where he's seen as a "prominent man in the field", though I haven't heard anything about actual work from him in many years.
Anyway, while it sounds like many people mentioned in this article behaved very badly, it also seems possible that the incidents CEA k... (read more)
Following CatGoddess, I'm going to share more detail on parts of the article that seemed misleading, or left out important context.
Caveat: I'm not an active member of the in-person EA community or the Bay scene. If there's hot gossip circulating, it probably didn't circulate to me. But I read a lot.
This is a long comment, and my last comment was a long comment, because I've been driving myself crazy trying to figure this stuff out. If the community I (digitally) hang out in is full of bad people and their enablers, I want to find a different community!
But the level of evidence presented in Bloomberg and TIME makes it hard to understand what's actually going on. I'm bothered enough by the weirdness of the epistemic environment that it drove me to stop lurking :-/
I name Michael Vassar here, even though his name wasn't mentioned in the article. Someone asked me to remove that name the last time I did this, and I complied. But now that I'm seeing the same things repeated in multiple places and used to make misleading points, I no longer think it makes sense to hide info about serial abusers who have been kicked out of the movement, especially when that info is easy to... (read more)
Thank you for writing this. It's barely been a week, take your time.
There's been a ton of posts on the forum about various failures, preventative measures, and more. As much as we all want to get to the bottom of this and ensure nothing like this ever happens again, I don't think our community benefits from hasty overcorrections. While many of the points made are undoubtedly good, I don't think it will hurt the EA community much to wait a month or two before demanding any drastic measures.
EAs should probably still be ambitious. Adopting rigorous governance and oversight mechanisms sometimes does more harm than good. Let's not throw out the baby with the bathwater.
I'm still reflecting and am far from having fully formed beliefs yet; I am confused by just how many strong views have already been expressed on the forum. Even correctly recalling my thoughts and feelings around FTX before the event is difficult. I'm noticing a lot of finger pointing and not a lot of introspection.
I don't know about everyone else, but I'm pretty horrified at just how similar my thinking seems to have been to SBF's. If a person who seemingly agreed with me on so many moral priorities was capable of doing something so horrible, how can I be sure that I am different?
I'm going to sit with that thought for a while, and think about what type of person I want to strive to be.
I feel really quite bad about this post. Despite it being only a single paragraph it succeeds at confidently making a wrong claim, pretending to speak on behalf of both an organization and community that it is not accurately representing, communicating ambiguously (probably intentionally, in order to avoid being pinned down on any specific position), and for some reason omitting crucial context.
Contrary to the OP it is easy to come up with examples where within the Effective Altruism framework two people do not count equally. Indeed most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus.
Saying "all people count equally" is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it indeed doesn't really ho... (read more)
I think I do see "all people count equally" as a foundational EA belief. This might be partly because I understand "count" differently to you, partly because I have actually-different beliefs (and assumed that these beliefs were "core" to EA, rather than idiosyncratic to me).
What I understand by "people count equally" is something like "1 person's wellbeing is not more important than another's".
E.g. a British nationalist might not think that all people count equally, because they think their compatriots' wellbeing is more important than that of people in other countries. They would take a small improvement in wellbeing for Brits over a large improvement in wellbeing for non-Brits. An EA would be impartial between improvements in wellbeing for British people vs non-British people.
"most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in... (read more)
Sorry for the slow response.
I wanted to clarify and apologise for some things here (not all of these are criticisms you’ve specifically made, but this is the best place I can think of to respond to various criticisms that have been made):
- This statement was drafted and originally intended to be a short quote that we could send to journalists if asked for comment. On reflection, I think that posting something written for that purpose on the Forum was the wrong way to communicate with the community and a mistake. I am glad that we posted something, because I think that it’s important for community members to hear that CEA cares about inclusion, and (along with legitimate criticism like yours) I’ve heard from many community members who are glad we said something. But I wish that I had said something on the Forum with more precision and nuance, and will try to be better at this in future.
- The first sentence was not meant to imply that we think that Bostrom disagrees with this view, but we can see why people would draw this implication. It’s included because we thought lots of people might get the impression from Bostrom’s email that EA is racist and I don’t want anyone — w
... (read more)Hi Will,
It is great to see all your thinking on this down in one place: there are lots of great points here (and in the comments too). By explaining your thinking so clearly, it makes it much easier to see where one departs from it.
My biggest departure is on the prior, which actually does most of the work in your argument: it creates the extremely high bar for evidence, which I agree probably couldn’t be met. I’ve mentioned before that I’m quite sure the uniform prior is the wrong choice here and that this makes a big difference. I’ll explain a bit about why I think that.
As a general rule if you have a domain like this that extends indefinitely in one direction, the correct prior is one that diminishes as you move further away in that direction, rather than picking a somewhat arbitrary end point and using a uniform prior on that. People do take this latter approach in scientific papers, but I think it is usually wrong to do so. Moreover in your case in particular, there are also good reasons to suspect that the chance of a century being the most influential should diminish over time. Especially because there are important kinds of significant event (such as the value lock-in or an... (read more)
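To illustrate how much work the choice of prior does here, a minimal sketch (the particular decaying form is an assumption for illustration, not the commenter's actual proposal): a uniform prior over a long fixed horizon gives the current century a tiny probability of being the most influential, and that probability depends entirely on where the horizon is cut off, whereas a prior that diminishes over time can sum to 1 over an unbounded future while still giving early centuries substantial mass.

```python
# Minimal sketch, illustrative numbers only: uniform prior over a fixed horizon
# vs. a prior that diminishes with time (here P(n) = 1/(n*(n+1)), which sums to 1
# over an infinite future via a telescoping sum).

def uniform_prior(n: int, horizon: int) -> float:
    """P(century n is the most influential) under a uniform prior over `horizon` centuries."""
    return 1.0 / horizon if 1 <= n <= horizon else 0.0

def diminishing_prior(n: int) -> float:
    """P(century n is the most influential) under a prior that decays with n."""
    return 1.0 / (n * (n + 1))

horizon = 1_000_000  # an (arbitrary) million-century cutoff
print(uniform_prior(1, horizon))   # 1e-06: depends entirely on the arbitrary cutoff
print(diminishing_prior(1))        # 0.5: early centuries keep substantial prior mass
print(sum(diminishing_prior(n) for n in range(1, horizon + 1)))  # ~1.0
```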
I agree with some of the points of this post, but I do think there is a dynamic here that is missing, that I think is genuinely important.
Many people in EA have pursued resource-sharing strategies where they pick up some piece of the problems they want to solve, and trust the rest of the community to handle the other parts of the problem. One very common division of labor here is
I think a lot of this type of trade has happened historically in EA. I have definitely forsaken a career with much greater earning potential than I have right now in order to contribute to EA infrastructure and to work on object-level problems.
I think it is quite important to recognize that in as much as a trade like this has happened, this gives the people who have done object level work a substantial amount of ownership over the funds that other people have earned, as well as the funds that other people have fundraised (I also think this applies to Open Phil, though I think the case here is a bunch messier and I won't go into my models of the g... (read more)
This is probably as good a place as any to mention that whatever people say about this race could very easily get picked up by local media and affect it. As a general principle, if you have an unintuitive idea for how to help Carrick's candidacy, it might be an occasion to keep it to yourself, or discuss it privately. Generally, here, on Twitter, and everywhere, thinking twice before posting about this topic would be a reasonable policy.
I drew a random number for spot checking the short summary table. (I don't think spot checking will do justice here, but I'd like to start with something concrete.)
This seems to be about this paragraph from the original post:
There aren't any other details in the original post specifically from Chloe or specifically about her partner, including in the comment in Chloe's words below the post. The only specific detail about romantic partners I see in the original post is about Alice, and it pl... (read more)
Hey,
I’m really sorry to hear about this experience. I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worries with my own beliefs stem around the worry that I’d have very different views if I’d found myself in a different social environment. It’s just simply very hard to successfully have a group of people who are trying to both figure out what’s correct and trying to change the world: from the perspective of someone who thinks the end of the world is imminent, someone who doesn’t agree is at best useless and at worst harmful (because they are promoting misinformation).
In local groups in particular, I can see how this issue can get aggravated: people want their local group to be successful, and it’s much easier to track success with a metric like “number of new AI safety researchers” than “number of people who have thought really deeply about the most pressing issues and have come to their own well-considered conclusions”.
One thing I’ll say is that core researchers ... (read more)
It's probably worth noting that Holden has been pretty open about this incident. Indeed, in a talk at a Leaders Forum around 2017, he mentioned it precisely as an example of "end justify the means"-type reasoning.
It's also listed under GiveWell's Our Mistakes page.
I'm a pro forecaster. I build forecasting tools. I use forecasting in a very relevant day job running an AI think tank. I would normally be very enthusiastic about Manifest. And I think Manifest would really want me there.
But I don't attend because of people there who have "edgy" opinions that might be "fun" for others but aren't fun for me. I don't want to come and help "balance out" someone who thinks that ~using they/them pronouns is worse than committing genocide~ (sorry this was a bad example as discussed in the comments so I'll stick with the pretty clear "has stated that black people are animals who need to be surveilled en masse to reduce crime"). I want to talk about forecasting.
It's your right to have your conference your way, and it's others right to attend and have fun. But I think Manifest seriously underrates how much they are losing out on here by being "edgy" and "fun", and I really don't want to be associated with it.
Because others here are unlikely to do so, I feel like I ought to explicitly defend Hanania's presence on the merits. I don't find it "fun" that he's "edgy." I go out of my way, personally, to avoid being edgy. While I tread into heated territory at times, I have always made it my goal to do so respectfully, thoughtfully, and with consideration for others' values. No, it's not edginess or fun that makes me think he belongs there. He unquestionably belongs at a prediction market conference because he has been a passionate defender of prediction markets in the public sphere and because he writes to his predominantly right-leaning audience in ways that consistently emphasize and criticize the ways they depart from reality.
Let me be clear: I emphatically do not defend all parts of his approach and worldview. He often engages in a deliberately provocative way and says insensitive or offensive things about race, trans issues, and other hot-button topics on the right. But I feel the same about many people who you would have no problem seeing attend Manifest, and he brings specific unusual and worthwhile things to the table.
My first real interaction with Hanania, as I recall, came when he ... (read more)
To recap, I thought Ben’s original post was unfair, even if he happened to be right about Nonlinear, because of how chilling it is for everyone else to know they could be put on blast if they try to do anything. It sounded like NL made mistakes, but they sounded like very typical mistakes of EA/rationalists when they try out new or unusual social arrangements. Since the attitude around me if you don’t like contracts you entered is generally “tough shit, get more agency”, I was surprised at the responses saying Alice and Chloe should have been protected from an arrangement they willingly entered (one that almost anyone but EAs/rationalists would have told them was a bad idea). It made me think Ben/Lightcone had a double standard toward an org they already didn’t like because of Emerson talking about Machiavellian strategies and marketing.
Idk if Emerson talking about libel was premature. Many have taken it as an obvious escalation, but it seems like he called it exactly right because NL’s reputation is all but destroyed. Maybe if he hadn’t said that Ben would have waited for their response before publishing, and it would have been better. I think it’s naive and irresponsible for Ben/Lightcone to... (read more)
My naive moral psychology guess—which may very well be falsified by subsequent revelations, as many of my views have this week—is that we probably won’t ever find an “ends justify the means” smoking gun (e.g., an internal memo from SBF saying that we need to fraudulently move funds from account A to B so we can give more to EA). More likely, systemic weaknesses in FTX’s compliance and risk management practices failed to prevent aggressive risk-taking and unethical profit-seeking and self-preserving business decisions that were motivated by some complicated but unstated mix of misguided pseudo-altruism, self-preservation instincts, hubris, and perceived business/shareholder demands.
I say this because we can and should be denouncing ends justify the means reasoning of this type, but I suspect very rarely in the heat of a perceived crisis will many people actually invoke it. I think we will prevent more catastrophes of this nature in the future by focusing more on integrity as a personal virtue and the need for systemic compliance and risk-management tools within EA broadly and within highly impactful/prominent EA orgs, especially those whose altruistic motives will be systematically in tension with perceived business demands.
Relatedly, I think a focus on ends-justify-the-means reasoning is potentially misguided because it seems super clear in this case that, even if we put zero intrinsic value on integrity, honesty, not doing fraud, etc., some of the decisions made here were pretty clearly very negative expected-value. We should expect the upsides from acquiring resources by fraud (again, if that is what happened) to be systematically worth much less than reputational and trustworthiness damage our community will receive by virtue of motivating, endorsing, or benefitting from that behavior.
I think EA currently is much more likely to fail to achieve most of its goals by ending up with a culture that is ill-suited for its aims, being unable to change direction when new information comes in, and generally failing due to the problems of large communities and other forms of organization (like, as you mentioned, the community behind NeurIPS, which is currently on track to be an unstoppable behemoth racing towards human extinction that I so desperately wish was trying to be smaller and better coordinated).
I think EA Global admissions is one of the few places where we can apply steering on how EA is growing and what kind of culture we are developing, and giving this up seems like a cost, without particularly strong commensurate benefits.
On a more personal level, I do want to be clear that I am glad about having a bigger EA Global this year, but I would probably also just stop attending an open-invite EA Global since I don't expect it would really share my culture or be selected for people I would really want to be around. I think this year's EA Global came pretty close to exhausting my ability to be thrown into a large group of people with a quite different culture ... (read more)
Some reasons I'm pretty skeptical that Trump is net good for EA causes:
- One of the strongest arguments for Republican presidents over Democratic ones is that, via PEPFAR, Bush has done more for global health than any Democratic president, at least while they're still in office. Arguably PEPFAR was simply a bigger deal in terms of lives saved than e.g. the Iraq War. Trump, unfortunately, has decreased PEPFAR funding. You didn't really talk about global health other than in the medical innovation section.
- I don't find it plausible that Trump will be net good for reducing the moral catastrophe that is factory farming. You didn't talk about factory farming at all.
- Trump's rhetoric and general admiration and support for strongmen seems quite bad for American leadership in the world.
- The combination of bellicose language and thinly veiled admiration I think is bad for international relations under almost any plausible model.
- Do you really think Trump or JD Vance is a better fit for being in charge of the nuclear football than Harris?
- You say "Blame, just deserts, personal character... only enter a consequentialist ana
... (read more)From Reuters:
I sincerely hope OpenPhil (or Effective Ventures, or both - I don't know the minutiae here) sues over this. Read the reasoning for and details of the $30M grant here.
The case for a legal challenge seems hugely overdetermined to me:
I know OpenPhil has a pretty hands-off ethos and vibe; this shouldn't stop them from acting with integrity when hands-on legal action is clearly warranted
This situation reminded me of this post, EA's weirdness makes it unusually susceptible to bad behavior. Regardless of whether you believe Chloe and Alice's allegations (which I do), it's hard to imagine that most of these disputes would have arisen under more normal professional conditions (e.g., ones in which employees and employers don't live together, travel the world together, and become romantically entangled). A lot of the things that (no one is disputing) happened here are professionally weird; for example, these anecdotes from Ben's summary of Nonlinear's response (also the linked job ad):
- "Our intention wasn't just to have employees, but also to have members of our family unit who we traveled with and worked closely together with in having a strong positive impact in the world, and were very personally close with."
- "We wanted to give these employees a pretty standard amount of compensation, but also mostly not worry about negotiating minor financial details as we traveled the world. So we covered basic rent/groceries/travel for these people."
- "The formal employee drove without a license for 1-2 months in Puerto Rico. We taught her to drive, which she was excited about. You mi
... (read more)I think this comment will be frustrating for you and is not high quality. Feel free to disagree, I'm including it because I think it's possible many people (or at least some?) will feel wary of this post early on and it might not be clear why. In my opinion, including a photo section was surprising and came across as near completely misunderstanding the nature of Ben's post. It is going to make it a bit hard to read any further with even consideration (edit: for me personally, but I'll just take a break and come back or something). Basically, without any claim on what happened, I don't think anyone suspects "isolated or poor environment" to mean, "absence of group photos in which [claimed] isolated person is at a really pretty pool or beach doing pool yoga." And if someone is psychologically distressed, whether you believe this to be a misunderstanding or maliciously exaggerated, it feels like a really icky move to start posting pictures that add no substance, even with faces blurred, with the caption "s'mores", etc.
Hey - I’m starting to post and comment more on the Forum than I have been, and you might be wondering about whether and when I’m going to respond to questions around FTX. So here’s a short comment to explain how I’m currently thinking about things:
The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred me not to publish posts on backwards-looking topics around FTX while the investigation is still in-progress. I don’t know when it’ll be finished, or what the situation will be like for communicating on these topics even after it’s done.
I had originally planned to get out a backwards-looking post early in the year, and I had been holding off on talking about other things until that was published. That post has been repeatedly delayed, and I’m not sure when it’ll be able to come out. If I’d known that it would have been delayed this long, I wouldn’t have waited on it before talking on other topics, so I’m now going to start talking more than I have been, on the Forum and elsewhere; I’m hoping I can be helpful for some of the other issues that are currently active topics of discussion.
Briefly, though, and as I indicated before: I had... (read more)
For what it's worth, I'm a 30 year old woman who's been involved with EA for eight years and my experience so far has been overwhelmingly welcoming and respectful. This has been true for all of my female EA friends as well. The only difference in treatment I have ever noticed is being slightly more likely to get speaking engagements.
Just posting about this anonymously because I've found these sorts of topics can lead to particularly vicious arguments, and I'd rather spend my emotional energy on other things.
[EDIT: I'd like to clarify that, strictly speaking, the comment below is gossip without hard substantiating evidence. Gossip can have an important community function - at the very least, from this comment you can conclude that things happened at Nonlinear which induced people (in fact, many people) to negatively gossip about the organization - but should also be treated as different from hard accusations, especially those backed by publicly available evidence. In the wake of the FTX fiasco, I think it's likely that people are more inclined to treat gossip of the sort I share below as decisive.
That said, I do think that the gossip below paints a basically accurate picture. I also have other reasons to distrust Nonlinear that I don't feel comfortable sharing (more gossip!). I know this is hard epistemic territory to work in, and I'm sorry. I would feel best about this situation if someone from, e.g., CEA would talk to some of the people involved, but I'm sure anyone who could deal with this is swamped right now. In the meantime, I think it's fine for this gossip to make you unsure about Nonlinear, but still e.g. consider applying to them for emergency funding. I personally wouldn't... (read more)
I’m a current intern at Nonlinear and I think it would be good to add my point of view.
I was offered an internship by Drew around 3 months ago after I contributed to a project and had some chats with him. From the first moment I was an intern he made me feel like a valuable member of the team, my feedback was always taken seriously, and I could make decisions on my own. It never felt like a boss relationship, more like coworkers and equals.
And when I started putting in less hours, I never got “hey you should work more or this is not gonna work out” but rather Drew took the time to set up a weekly 1 on 1 to help me develop personally and professionally and get to know me.
I can only speak for myself but overall I’m very happy to be working with them and there’s nothing about the situation I would call mistreatment.
Given that debating race and IQ would make EA very unwelcoming for black people, probably has the effect of increasing racism, and clearly does not help us do the most good, we shouldn’t even be debating it with ‘empathy and rigour’.
EA is a community for doing the most good, not for debating your favourite edgy topic
Yeah, I agree here. We shouldn't discuss that topic in community venues; it doesn't help our mission and is largely counterproductive.
Thanks, Alex, for writing this important contribution up so clearly and thanks, Dan, for engaging constructively. It’s good to have a proper open exchange about this. Three cheers for discourse.
While I am also excited about the potential of GivingGreen, I do share almost all of Alex’s concerns and think that his concerns mostly stand / are not really addressed by the replies. I state this as someone who has worked/built expertise on climate for the past decade and on climate and EA for the past four years (in varying capacities, now leading the climate work at FP) to help those that might find it hard to adjudicate this debate with less background.
Given that I will criticize the TSM recommendation, I should also state where I am coming from:
My climate journey started over 15 years ago as a progressive climate youth activist, being a lead organizer for Friends of the Earth in my home state, Rhineland Palatinate (in Germany).
I am a person of the center-left and get goosebumps every time I hear Bernie Sanders speak about a better society. This is to say I have nothing against progressives and I did not grow up as a libertarian techno-optimist who would be naturally incline... (read more)
First off, thank you to everyone who worked on this post. Although I don't agree with everything in it, I really admire the passion and dedication that went into this work -- and I regret that the authors feel the need to remain anonymous for fear of adverse consequences.
For background: I consider myself a moderate EA reformer -- I actually have a draft post I've been working on that argues that the community should democratically hire people to write moderately concrete reform proposals. I don't have a ton of the "Sam" characteristics, and the only thing of value I've accepted from EA is one free book (so I feel free to say whatever I think). I am not a longtermist and know very little about AI alignment (there, I've made sure I'd never get hired if I wanted to leave my non-EA law career?).
Even though I agree with some of the suggested reforms here, my main reaction to this post is to affirm that my views are toward incremental/moderate -- and not more rapid/extensive -- reform. I'm firmly in the Global Health camp myself, and that probably colors my reaction to a proposal that may have been designed more with longtermism in mind. There is too much ... (read more)
Lots of the comments here are pointing at details of the markets and whether it's possible to profit off of knowing that transformative AI is coming. Which is all fine and good, but I think there's a simple way to look at it that's very illuminating.
The stock market is good at predicting company success because there are a lot of people trading in it who think hard about which companies will succeed, doing things like writing documents about those companies' target markets, products, and leadership. Traders who do a good job at this sort of analysis get more funds to trade with, which makes their trading activity have a larger impact on the prices.
Now, when you say that:
I think what you're claiming is that market prices are substantially controlled by traders who have a probability like that in their heads. Or traders who are following an algorithm which had a probability like that in the spreadsheet. Or something like that. Some sort of serious cognition, serious in the way that traders treat compan... (read more)
I find it hard to believe that the number of traders who have considered crazy future AI scenarios is negligible. New AI models, semiconductor supply chains, etc. have gotten lots of media and intellectual attention recently. Arguments about transformative AGI are public. Many people have incentives to look into them and think about their implications.
I don't think this post is decisive evidence against short timelines. But neither do I think it's a "trap" that relies on fully swallowing EMH. I think there're deeper issues to unpack here about why much of the world doesn't seem to put much weight on AGI coming any time soon.
When I was asked to resign from RP, one of the reasons given was that I wrote the sentence “I don't think that EAs should fund many WAW researchers since I don't think that WAW is a very promising cause area” in an email to OpenPhil, after OpenPhil asked for my opinion on a WAW (Wild Animal Welfare) grant. I was told that this is not okay because OpenPhil is one of the main funders of RP’s WAW work. That did not make me feel very independent. Though perhaps that was the only instance in the four years I worked at RP.
Because of this instance, I was also concerned when I saw that RP is doing cause prioritization work because I was afraid that you would hesitate to publish stuff that threatens RP funding, and would more willingly publish stuff that would increase RP funding. I haven’t read any of your cause prio research though, so I can’t comment on whether I saw any of that.
EDIT: I should've said that this was not the main reason I was asked to resign and that I had said that I would quit in three months before this happened.
Brief note on why EA should be careful to remain inclusive & welcoming to neurodiverse people:
As somebody with Aspergers, I'm getting worried that in this recent 'PR crisis', EA is sending some pretty strong signals of intolerance to those of us with various kinds of neurodiversity that can make it hard for us to be 'socially sensitive', to 'read the room', and to 'avoid giving offense'. (I'm not saying that any particular people involved in recent EA controversies are Aspy; just that I've seen a general tendency for EAs to be a little Aspier than other people, which is why I like them and feel at home with them.)
There's an ongoing 'trait war' that's easy to confuse with the Culture War. It's not really about right versus left, or reactionary versus woke. It's more about psychological traits: 'shape rotators' versus 'wordcels', 'Aspies' versus 'normies', systematizers versus empathizers, high decouplers versus low decouplers.
EA has traditionally been an oasis for Aspy systematizers with a high degree of rational compassion, decoupling skills, and quantitative reasoning. One downside of being Aspy is that we occasionally, or even often, say things that normies consid... (read more)
I'm disappointed that much of this document involves attacking the people who've accused you of harmful actions, in place of a focus on disputing the evidence they provided (I appreciate that you also do the latter). I also really bounce off the distraction tactics at play here, where you encourage the reader to turn their attention back to the world's problems. It doesn't seem like you've reflected carefully and calmly about this situation; I don't see many places where you admit to making mistakes and it doesn't seem like you're willing to take ownership of this situation at all.
I don't have time to engage with all the evidence here, but even if I came away convinced that all of the original claims provided by Ben weren't backed up, I still feel really uneasy about Nonlinear; uneasy about your work culture, uneasy about how you communicate and argue, and alarmed at how forcefully you attack people who criticise you.
The vast majority of what they gave is disputing the evidence. There is a whole 135 pages of basically nothing but that. You then even refer to it saying:
How can both of these be true at once? Either it's a lot, so you don't have time to go through it all, or they haven't done much, in which case you should be able to spend some time looking at it.
My thoughts, for those who want them:
- I don't have much sympathy for those demanding a good reason why the post wasn't delayed. While I'm generally quite pro sharing posts with orgs, I think it's quite important that this doesn't give the org the right to delay or prevent the posting. This goes double given the belief of both the author and their witnesses that Nonlinear is not acting in good faith.
- There seem to be enough uncontested/incontestable claims made in this post for me to feel comfortable recommending that junior folks in the community stay away from Nonlinear. These include asking employees to carry out illegal actions they're not comfortable with, and fairly flagrantly threatening employees with retaliation for saying bad things about them (Kat's text screenshotted above is pretty blatant here).
- Less confidently, I would be fairly surprised if I come out of the other end of this, having seen Nonlinear's defence/evidence, and don't continue to see the expenses-plus-tiny-salary setup as manipulative and unhealthy.
- More confidently than anything on this list, Nonlinear's threatening to sue Lightcone for Ben's post is completely unacceptable, decreases my sympathy for them by
... (read more)Recently, I've encountered an increasing number of misconceptions, in rationalist and effective altruist spaces, about what Open Philanthropy's Global Catastrophic Risks (GCR) team does or doesn't fund and why, especially re: our AI-related grantmaking. So, I'd like to briefly clarify a few things:
- Open Philanthropy (OP) and our largest funding partner Good Ventures (GV) can't be or do everything related to GCRs from AI and biohazards: we have limited funding, staff, and knowledge, and many important risk-reducing activities are impossible for us to do, or don't play to our comparative advantages.
- Like most funders, we decline to fund the vast majority of opportunities we come across, for a wide variety of reasons. The fact that we declined to fund someone says nothing about why we declined to fund them, and most guesses I've seen or heard about why we didn't fund something are wrong. (Similarly, us choosing to fund someone doesn't mean we endorse everything about them or their work/plans.)
- Very often, when we decline to do or fund something, it's not because we don't think it's good or important, but because we aren't the right team or organization to do or fund it, or we're pri
... (read more)I can't speak for Open Philanthropy, but I can explain why I personally was unmoved by the Rethink report (and think its estimates hugely overstate the case for focusing on tiny animals, although I think the corrected version of that case still has a lot to be said for it).
Luke says in the post you linked that the numbers in the graphic are not usable as expected moral weights, since ratios of expectations are not the same as expectations of ratios.
[Edited for clarity] I was not satisfied with Rethink's attempt to address that central issue, that you get wildly different results from assuming the moral value of a fruit fly is fixed and reporting possible ratios to elephant welfare as opposed to doing it the other way around.
It is not unthinkably improbable that an elephant brain where reinforcement from a positive or negative stimulus adjust millions of times as many neural computations could be seen as vastly more morally important than a fruit fly, just as one might think that a f... (read more)
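A minimal numerical sketch of the "ratios of expectations vs. expectations of ratios" point, using invented numbers rather than Rethink's actual distributions: the implied elephant-to-fly ratio changes dramatically depending on which animal's value is held fixed before taking the expectation.

```python
# Minimal sketch, invented numbers: two equally likely hypotheses about a fruit fly's
# welfare capacity relative to an elephant's (0.001x or 1x). Which animal you fix at
# value 1 before averaging determines the answer you get.

p = [0.5, 0.5]
fly_per_elephant = [0.001, 1.0]   # fly's value when the elephant is fixed at 1
elephant_per_fly = [1000.0, 1.0]  # elephant's value when the fly is fixed at 1

e_fly = sum(pi * v for pi, v in zip(p, fly_per_elephant))       # ~0.5005 -> elephant ~2x fly
e_elephant = sum(pi * v for pi, v in zip(p, elephant_per_fly))  # ~500.5  -> elephant ~500x fly

print(e_fly, e_elephant)
```

Neither calculation is wrong as arithmetic; the point is that the choice of which animal to treat as the fixed numeraire smuggles in a substantive assumption.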
I agree that a low-weirdness EA would have fewer weird scandals. I'm not sure whether these would just be replaced by more normal scandals. It probably depends a lot on exactly what changes you make? A surprisingly large fraction of the "normal" communities I've observed are perpetually riven by political infighting, personal conflicts, allegations of bad behavior, etc., to a far greater degree than is true for EA.
Choosing the right target depends on understanding what EA is doing right in addition to understanding what it's doing wrong, and protecting and cultivating the former at the same time we combat the latter.
I'm skeptical that optimizing against marginal weirdness is a good way to reduce rates of sexual misconduct, mostly for two reasons:
- The proposal is basically to regress EA to the mean, but I haven't seen evidence that EA is worse than the mean of the populations we'd realistically move toward. This actually matters; it would be PlayPump levels of tragicomic if EA put a ton of effort into Becoming More Normal for the sake of making sex and gender minorities safer in EA, only to find out that the normal demographic w
... (read more)Rob - I strongly agree with your take here.
EA prides itself on quantifying the scope of problems. Nobody seems to be actually quantifying the alleged scope of sexual misconduct issues in EA. There's an accumulation of anecdotes, often second or third hand, being weaponized by mainstream media into a blanket condemnation of EA's 'weirdness'. But it's unclear whether EA has higher or lower rates of sexual misconduct than any other edgy social movement that includes tens of thousands of people.
In one scientific society I'm familiar with, a few allegations of sexual misconduct were made over several years (out of almost a thousand members). Some sex-negative activists tried to portray the society as wholly corrupt, exploitative, sexist, unwelcoming, and alienating. But instead of taking the allegations reactively as symptomatic of broader problems, the society ran a large-scale anonymous survey of almost all members. And it found that something less than 2% of female or male members had ever felt significantly uncomfortable, unwelcome, or exploited. That was the scope of the problem. 2% isn't 0%, but it's a lot better than 20% or 50%. In response to this scope information, the socie... (read more)
I am very bothered specifically by the frame "I wish we had resolved [polyamory] "internally" rather than it being something exposed by outside investigators."
I am polyamorous; I am in committed long-term relationships (6 years and 9 years) with two women, and occasionally date other people. I do not think there is anything in my relationships for "the community" to "resolve internally". It would not be appropriate for anyone to tell me to break up with one of my partners. It would not be appropriate for anyone to hold a community discussion about how to 'resolve' my relationships, though of course I disclose them when they are relevant to conflict-of-interest considerations, and go out of my way to avoid such conflicts. I would never ask out a woman who might rely on me as a professional mentor, or a woman who is substantially less professionally established.
There are steps that can be taken, absolutely should be taken, and for the most part to my knowledge have been taken to ensure that professional environments aren't sexualized and that bad actors are unwelcome. Asking people out or flirting with them in professional contexts should be considered unacceptable. People who ... (read more)
I think some of us owe FLI an apology for assuming heinous intentions where a simple (albeit dumb) mistake was made.
I can imagine this must have been a very stressful period for the entire team, and I hope we as a community become better at waiting for the entire picture instead of immediately reacting and demanding things left and right.
Thanks for running this survey. I find these results extremely implausibly bearish on public policy -- I do not think we should be even close to indifferent between a 5% improvement in the AI policy of the country that can make binding rules on all of the leading labs plus many key hardware inputs, has a $6 trillion budget, and has the most powerful military on earth, versus having $8.1 million more for a good grantmaker, or having 32.5 "good video explainers," or having 13 technical AI academics. I'm biased, of course, but IMO the surveyed population is massively overrating the importance of the alignment community relative to the US government.
The FTX Future Fund recently finished a large round of regrants, meaning a lot of people are approved for grants that have not yet been paid out. At least one person has gotten word from them that these payouts are on hold for now. This seems very worrisome and suggests the legal structure of the fund is not as robust or isolated as you might have thought. I think a great community support intervention would be to get clarity on this situation and communicate it clearly. This would be helpful not only to grantees but to the EA community as a whole, since what is on many people's minds is not as much what will happen to FTX but what will happen to the Future Fund. (From the few people I have talked to, many were under the impression that funds committed to the Future Fund were actually committed in a strict sense, e.g. transferred to a separate entity. If that turns out not to be the case, it's really bad.)
On talking about this publicly
A number of people have asked why there hasn’t been more communication around FTX. I’ll explain my own case here; I’m not speaking for others. The upshot is that, honestly, I still feel pretty clueless about what would have been the right decisions, in terms of communications, from both me and from others, including EV, over the course of the last year and a half. I do, strongly, feel like I misjudged how long everything would take, and I really wish I’d gotten myself into the mode of “this will all take years.”
Shortly after the collapse, I drafted a blog post and responses to comments on the Forum. I was also getting a lot of media requests, and I was somewhat sympathetic to the idea of doing podcasts about the collapse — defending EA in the face of the criticism it was getting. My personal legal advice was very opposed to speaking publicly, for reasons I didn’t wholly understand; the reasons were based on a general principle rather than anything to do with me, as they’ve seen a lot of people talk publicly about ongoing cases and it’s gone badly for them, in a variety of ways. (As I’ve learned more, I’ve come to see that this view has a lot of m... (read more)
I was one of the people who helped draft the constitutional amendment and launch the initiative. My quick takes:
(* An initiative passing doesn't just require a majority of the voters, but also a majority of the voters in a majority of cantons (states), which is a target that's much harder to hit for non-conservative initiatives. Even if >50% of the voters w... (read more)
I thought the paper itself was poorly argued, largely as a function of biting off too much at once. Several times the case against the TUA was not actually argued, merely asserted to exist along with one or two citations for which it is hard to evaluate if they represent a consensus. Then, while I thought the original description of TUA was accurate, the TUA response to criticisms was entirely ignored. Statements like "it is unclear why a precise slowing and speeding up of different technologies...across the world is more feasible or effective than the simpler approach of outright bans and moratoriums" were egregious, and made it seem like you did not do your research. You spoke to 20+ reviewers, half of which were sought out to disagree with you, and not a single one could provide a case for differential technology? Not a single mention of the difficulty of incorporating future generations into the democratic process?
Ultimately, I think the paper would have been better served by focusing on a single section, leaving the rest to future work. The style of assertions rather than argument and skipping over potential responses comes across as more polemical than evidence-seeking. I bel... (read more)
I want to push back on this post. I think sadly this post suffers from the same problem that 99% of all legal advice that people receive suffers from, which is that it is not actually a risk analysis that helps you understand the actual costs of different decisions.
The central paragraph that I think most people will react to is this section:
Sadly, this post does not indicate the actual time a witness might be expected to spend in lit... (read more)
I consider your attempt at a quantified expected cost analysis a helping hand, not pushback, and I appreciate it.
Accepting it as a data point, a few quick points in response:
- Your comment only addresses the self-interest angle, which was a relatively small part of my post. It (understandably) ignores the impacts on others and the systemic impacts that I tried to highlight, which I don't think can be disentangled from the self-interest analysis so easily. I’m not sure those additional impacts are amenable to quantified expectation analysis (though I’d be happy to be proven wrong on that), but we shouldn't just ignore them.
- I think your numbers are low at the outset, but I don’t think any tweaking I’d do would cause us to be off by an OOM. That said, I think you’ve established a floor, and one that only applies to a witness with no potential liability. Accepting your numbers for the sake of discussion, the time estimates sit at the bottom of towering error bars. And that’s assuming we’re talking about an individual and not an organization that might have orders of magnitude more documents to review than an individual would, attorney-client privilege and other concerns that compli... (read more)
I feel sorely misunderstood by this post and I am annoyed at how highly upvoted it is. It feels like the sort of thing one writes / upvotes when one has heard of these fabled "longtermists" but has never actually met one in person.
That reaction is probably unfair, and in particular it would not surprise me to learn that some of these were relevant arguments that people newer to the community hadn't really thought about before, and so were important for them to engage with. (Whereas I mostly know people who have been in the community for longer.)
Nonetheless, I'm writing down responses to each argument that come from this unfair reaction-feeling, to give a sense of how incredibly weird all of this sounds to me (and I suspect many other longtermists I know). It's not going to be the fairest response, in that I'm not going to be particularly charitable in my interpretations, and I'm going to give the particularly emotional and selected-for-persuasion responses rather than the cleanly analytical responses, but everything I say is something I do think is true.
None of it? Current suffering is still bad! You don't get the pri... (read more)
I don't see how asking for higher standards for criticism makes EA defenseless against "bullshit."
I actually would argue the opposite: if we keep encouraging and incentivizing any kind of criticism, and tolerate needlessly acrimonious personal attacks, we end up in an environment where nobody proposes anything besides the status quo, and the status quo becomes increasingly less transparent.
Three recent examples that come to mind:
I think Holly_Elmore herself is another example: she used to write posts like "We are in triage every second of every day", which I think are very useful to make EA less "bullshit", but now mostly doesn't post on this forum, partly because of the bad qua... (read more)
I think EAs could stand to learn something from non-EAs here, about how not to blame the victim even when the victim is you.
CEA's elaborate adjustments confirm everyone's assertions: constantly evolving affiliations cause extreme antipathy. Can everyone agree, current entertainment aside, carefully examining acronyms could engender accuracy?
Clearly, excellence awaits: collective enlightenment amid cost effectiveness analysis.
I have no personal insight on Nonlinear, but I want to chime in to say that I've been in other communities/movements where I both witnessed and directly experienced the effects of defamation-focused civil litigation. It was devastating. And I think the majority of the plaintiffs, including those arguably in the right, ultimately regretted initiating litigation. I sincerely hope this does not occur in the EA community. And I hope that threats of litigation are also discontinued. There are alternatives that are dramatically less monetarily and time-intensive, and more likely to lead to productive outcomes. I think normalizing (threats of) defamation-focused civil litigation is extremely detrimental to community functioning and community health.
(Jan 16 text added at the end)
Here's an official statement from FLI on rejecting the Nya Dagbladet Foundation grant proposal:
For those of you unfamiliar with the Future of Life Institute (FLI), we are a nonprofit charitable organization that works to reduce global catastrophic and existential risks facing humanity, particularly those from nuclear war and future advanced artificial intelligence. These risks are growing. Last year, FLI received scores of grant applications from across the globe for the millions of dollars in funding we distributed to support research, outreach and other important work in furtherance of FLI’s mission. One of these grant proposals came from the Nya Dagbladet Foundation (NDF, not to be confused with the eponymous newspaper) for a media project directly related to FLI's goals. Although we were initially positive about the proposal and its prospects, we ultimately decided to reject it because of what our subsequent due diligence uncovered. We have given Nya Dagbladet and their affiliates zero funding and zero support of any kind, and will not fund them in the future. These final de... (read more)
Man, this interview really broke my heart. I think I used to look up to Sam a lot, as a billionaire whose self-attested sole priority was doing as much as possible to help the most marginalized + in need, today and in the future.
But damn... "I had to be good [at talking about ethics]... it's what reputations are made of."
Just unbelievable.
I hope this is a strange, pathological reaction to the immense stress of the past week for him, and not a genuine unfiltered version of the true views he's held all along. It all just makes me quite sad, to be honest.
Hi, this is something we’re already exploring, but we are not in a position to say anything just yet.
Mildly against the Longtermism --> GCR shift
Epistemic status: Pretty uncertain, somewhat rambly
TL;DR replacing longtermism with GCRs might get more resources to longtermist causes, but at the expense of non-GCR longtermist interventions and broader community epistemics
Over the last ~6 months I've noticed a general shift amongst EA orgs to focus less on reducing risks from AI, Bio, nukes, etc based on the logic of longtermism, and more based on Global Catastrophic Risks (GCRs) directly. Some data points on this:
My guess is these changes are (almost entirely) driven by PR concerns about longtermism. I would also guess these changes increase the number of people donating / working on GCRs, which is (by longtermist lights) a positive thing. After all, no-one wants a... (read more)
Back to earning to give I guess, I’ll see you guys at the McKinsey office
Hey Scott - thanks for writing this, and sorry for being so slow to the party on this one!
I think you’ve raised an important question, and it’s certainly something that keeps me up at night. That said, I want to push back on the thrust of the post. Here are some responses and comments! :)
The main view I’m putting forward in this comment is “we should promote a diversity of memes that we believe, see which ones catch on, and mould the ones that are catching on so that they are vibrant and compelling (in ways we endorse).” These memes include both “existential risk” and “longtermism”.
What is longtermism?
The quote of mine you give above comes from Spring 2020. Since then, I’ve distinguished between longtermism and strong longtermism.
My current preferred slogan definitions of each:
In WWOTF, I promote the weak... (read more)
I appreciate that Larks sent a draft of this post to CEA, and that we had the chance to give some feedback and do some fact-checking.
I agree with many of the concerns in this post. I also see some of this differently.
In particular, I agree that a climate of fear — wherever it originates— silences not only people who are directly targeted, but also others who see what happened to someone else. That silencing limits writers/speakers, limits readers/listeners who won’t hear the ideas or information they have to offer, and ultimately limits our ability to find ways to do good in the world.
These are real and serious costs. I’ve been talking with my coworkers about them over the last months and seeking input from other people who are particularly concerned about them. I’ll continue to do that.
But I think there are also real costs to pushing groups to go forward with events they don’t want to hold. I’m still thinking through how I see the tradeoffs between these costs and the costs above, but here’s one I think is relevant:
It makes it more costly to be an organizer. In one discussion amongst group organizers after the Munich situatio... (read more)
Can you say more about your plans to bring additional trustees on the boards?
I note that, at present, all of EV (USA)'s board are current or former Open Philanthropy staff: Nick Beckstead, Zachary Robinson, and Nicole Ross are former staff, and Eli Rose is a current staff member. This seems far from ideal; I'd like the board to be more diverse and representative of the wider EA community. As it stands, this seems like a conflict of interest nightmare. Did you discuss why this might be a problem? Why did you conclude it wasn't?
Others may disagree, but in my perspective, EV/CEA's role is to act as a central hub for the effective altruism community, and balance the interests of different stakeholders. It's difficult to see how it could do that effectively if all of its board were or are members of the largest donor.
FWIW, I wouldn't say I'm "dumb," but I dropped out of a University of Minnesota counseling psychology undergrad degree and have spent my entire "EA" career (at MIRI then Open Phil) working with people who are mostly very likely smarter than I am, and definitely better-credentialed. And I see plenty of posts on EA-related forums that require background knowledge or quantitative ability that I don't have, and I mostly just skip those.
Sometimes this makes me insecure, but mostly I've been able to just keep repeating to myself something like "Whatever, I'm excited about this idea of helping others as much as possible, I'm able to contribute in various ways despite not being able to understand half of what Paul Christiano says, and other EAs are generally friendly to me."
A couple things that have been helpful to me: comparative advantage and stoic philosophy.
At some point it would also be cool if there was some kind of regular EA webzine that published only stuff suitable for a general audience, like The Economist or Scientific American but for EA topics.
Something I personally would like to see from this contest is rigorous and thoughtful versions of leftist critiques of EA, ideally translated as much as possible into EA-speak. For example, I find "bednets are colonialism" infuriating and hard to engage with, but things like "the reference class for rich people in western countries trying to help poor people in Africa is quite bad, so we should start with a skeptical prior here" or "isolationism may not be the good-maximizing approach, but it could be the harm-minimizing approach that we should retreat to when facing cluelessness" make more sense to me and are easier to engage with.
That's an imaginary example -- I myself am not a rigorous and thoughtful leftist critic and I've exaggerated the EA-speak for fun. But I hope it points at what I'd like to see!
My coworkers got me a mug that said "Sorry, I'm not Julia Galef" to save me from having to say it so much at conferences. Maybe I should have just gone this route instead.
I generally believe that EAs should keep their identities small. Small enough so it wouldn't really matter what Julia you are
Whatever people think about this particular reply by Nonlinear, I hope it's clear to most EAs that Ben Pace could have done a much better job fact-checking his allegations against Nonlinear, and in getting their side of the story.
In my comment on Ben Pace's original post 3 months ago, I argued that EAs & Rationalists are not typically trained as investigative journalists, and we should be very careful when we try to do investigative journalism -- an epistemically and ethically very complex and challenging profession, which typically requires years of training and experience -- including many experiences of getting taken in by individuals and allegations that seemed credible at first, but that proved, on further investigation, to have been false, exaggerated, incoherent, and/or vengeful.
EAs pride ourselves on our skepticism and our epistemic standards when we're identifying large-scope, neglected, tractable causes areas to support, and when we're evaluating different policies and interventions to promote sentient well-being. But those EA skills overlap very little with the kinds of investigative journalism skills required to figure out who's really telling the truth, in contexts... (read more)
Having read the full TIME article, what struck me was if I replaced each mention of ‘EA’ with ‘the Classical Music industry’ it would still read just as well, and just as accurately (minus some polyamory).
I worked in the Arts for a decade, and witnessed some appalling behaviour and actions as a young woman. It makes me incredibly sad to learn that people have had similar experiences within the EA community. While it is something that should be challenged by us all, it is with regret that I say it is by no means unique to the EA community.
I admire the people who have spoken out, it's an incredibly hard thing to do, I hope that they are receiving all the care and support that they need. But, I also know this community is full of people trying really hard, and actually doing good.
I have been saddened to learn of similarly bad behaviour in other communities I have been involved in. However, it's important not to use the commonness of abuse and harassment in broader society as an excuse not to improve. (I'm 100% not accusing you of this by the way, it's just a behavior I've seen in other places).
EA should not be aiming for a passing grade when it comes to sexual harassment. The question is not "is EA better than average", but "is EA as good as it could be". And the answer to that question is no. I deeply hope that the concerns of the women in the article will be listened to.
I agree that EA should aim to be as good as it could be, but comparisons to other communities are still helpful. If the EA community is worse than others at this kind of thing then maybe:
Someone considering joining should seek out other communities of people trying to do good. (Ex: animal-focused work in EA spaces vs the broader animal advocacy world.)
We should start an unaffiliated group ("Impact Maximizers") that tries to avoid these problems. (Somewhat like the "Atheism Plus" split.)
We should figure out what we're doing differently from most other communities and do more normal things instead. (Ex: this post)
[EDIT: this also feeds into how ashamed people should feel about their association with EA given what's described here.]
FWIW:
1) agree with everything Nick said
2) I am really proud of what the team has done on net, although obviously nothing's perfect!
3) We really do love feedback! If you have some on a specific grant we made you can submit here, or feel free to separately ping me/Nick/etc. :)
This is an interesting post! I agree with most of what you write. But when I saw the graph, I was suspicious. The graph is nice, but the world is not.
I tried to create a similar graph to yours:
In this case, fun work is pretty close to impactful toil. In fact, the impact value for it is only about 30% less than the impact value of impactful toil. This is definitely sizable, and creates some of the considerations above. But mostly, everywhere on the pareto frontier seems like a pretty reasonable place to be.
But there's a problem: why is the graph so nice? To be more specific: why are the x and y axes so similarly scaled?
Why doesn't it look like this?
Here I just replaced x in the ellipse equation with log(x). It seems pretty intuitive that our impact would be power law distributed, with a small number of possible careers making up the vast majority of our possible impact. A lot of the time when people are trying to maximize something it ends up power law distributed (money donated, citations for researchers, lives saved, etc.). Multiplicative processes, as Thomas Kwa alluded to, will also make something power law distributed. This doesn't really ... (read more)
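To make the shape comparison above concrete, here is a minimal sketch (my own illustration, not code from the comment itself) that plots an assumed pareto frontier x² + y² = 1 between impact and fun, next to the same frontier with x replaced by log10(x). The spread of two orders of magnitude (k = 2) is an arbitrary assumption chosen just to show how skewed the trade-off looks once impact is power-law-like.

```python
# Editor's illustrative sketch; assumptions: frontier x^2 + y^2 = 1, impact spread k = 2 orders of magnitude.
import numpy as np
import matplotlib.pyplot as plt

y = np.linspace(0.0, 1.0, 400)           # "fun", normalized to [0, 1]
k = 2.0                                   # assumed spread of impact, in orders of magnitude

x_linear = np.sqrt(1.0 - y**2)            # original ellipse frontier: x = sqrt(1 - y^2)
x_log = 10.0 ** (k * np.sqrt(1.0 - y**2)) # log-impact frontier: log10(x) = k * sqrt(1 - y^2)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
ax1.plot(x_linear, y)
ax1.set(xlabel="impact", ylabel="fun", title="impact on a linear scale")
ax2.plot(x_log, y)
ax2.set(xlabel="impact", ylabel="fun", title="impact replaced by log10(impact)")
plt.tight_layout()
plt.show()
```

In the right-hand plot, the maximally impactful option has roughly 100x the impact of the most fun option, so the apparent near-indifference along the nicely scaled frontier disappears; that is the power-law intuition the comment is pointing at.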
'- Alice has accused the majority of her previous employers, and 28 people - that we know of - of abuse. She accused people of: not paying her, being culty, persecuting/oppressing her, controlling her romantic life, hiring stalkers, threatening to kill her, and even, literally, murder.'
The section of the doc linked to here does not in fact provide any evidence whatsoever of Alice making wild accusations against anyone else, beyond plain assertions (i.e. there are no links to other people saying this).
I'll briefly comment on a few parts of this post since my name was mentioned (lack of comment on other parts does not imply any particular position on them). Also, thanks to the authors for their time writing this (and future posts)! I think criticism is valuable, and having written criticism myself in the past, I know how time-consuming it can be.
I'm worried that your method for evaluating research output would make any ambitious research program look bad, especially early on. Specifically:
I think for any ambitious research project that fails, you could tell a similarly convincing story about how it's "obvious in hindsight" it would fail. A major point of research is to find ideas that other people don't think will work and then show that they do work! For many of my most successful research projects, people gave me advice not to work on them because they thought it would predictably fail, and if I had failed then they could have said something similar to... (read more)
Nathan - thanks for sharing the Time article excerpts, and for trying to promote a constructive and rational discussion.
For now, I don't want to address any of the specific issues around SBF, FTX, or EA leadership. I just want to make a meta-comment about the mainstream media's feeding frenzy around EA, and its apparently relentless attempts to discredit EA.
There's a classic social/moral psychology of 'comeuppance' going on here: any 'moral activists' who promote new and higher moral standards (such as the EA movement) can make ordinary folks (including journalists) feel uncomfortable, resentful, and inadequate. This can lead to a public eagerness to detect any forms of moral hypocrisy, moral failings, or bad behavior in the moral activist groups. If any such moral failings are detected, they get eagerly embraced, shared, signal-amplified, and taken as gospel. This makes it easier to dismiss the moral activists' legitimate moral innovations (e.g. focusing on scope-sensitivity, tractability, neglectedness, long-termism), and allows a quick, easy return to the status quo ante (e.g. national partisan politics + scope-insensitive charity as usual).
We see this 'psychology of comeuppanc... (read more)
I think some critiques of GVF/OP in this comments section could have been made more warmly and charitably.
The main funder of a movement's largest charitable foundation is spending hours seriously engaging with community members' critiques of this strategic update. For most movements, no such conversation would occur at all.
Some critics in the comments are practicing rationalist discussion norms (high decoupling & reasoning transparency) and wish OP's communications were more like that too. However, it seems there's a lot we don't know about what caused GVF/OP leadership to make this update. Dustin seems very concerned about GVF/OP's attack surface and conserving the bandwidth of their non-monetary resources. He's written at length about how he doesn't endorse rationalist-level decoupling as a rule of discourse. Given all of this, it's understandable that from Dustin's perspective, he has good reasons for not being as legible as he could be. Dishonest outside actors could quote statements or frame actions far more uncharitably than anything we'd see on the EA Forum.
Dustin is doing the best he can to balance between explaining his reasoning and adhering to legibility constraints ... (read more)
This feels complicated to say, because it's going to make me seem like I don't care about abuse and harassment described in the article. I do. It's really bad and I wish it hadn't happened, and I'm particularly sad that it's happened within my community, and (more) that people in my community seemed often to not support the victims.
But I honestly feel very upset about the anti-polyamory vibe of all this. Polyamory is a morally neutral relationship structure that's practiced happily by lots of people. It doesn't make you an abuser, or not-an-abuser. It's not accepted in the wider community, so I value its acceptance in EA. I'd be sad if there was a community backlash against it because of stuff like this, because that would hurt a lot of people and I don't think it would solve the problem.
I think the anti-poly vibe also makes it kind of...harder to work out what's happening, and what exactly is bad, or something? Like, the article describes lots of stuff that's unambiguously bad, like grooming and assault. But it says stuff like 'Another told TIME a much older EA recruited her to join his polyamorous relationship while she was still in college'. Like, what do... (read more)
Quite off-topic but I think it's quite remarkable that RP does crisis management and simulation exercises like this! I'm glad that RP is stable financially and legally (at least in the short-term), and put a significant chunk of that down to your collective excellent leadership.
It doesn't quite ring true to me that we need an investigation into what top EA figures knew. What we need is an investigation more broadly into how this was allowed to happen. We need to ask:
It's not totally unreasonable to ask what EA figures knew, but it's not likely that they knew about the fraud, based on priors (it's risky to tell people beyond your inner circle about fraudulent plans), and insider reports. (And for me personally, based on knowledge of their character, although obviously that's not going to convince a sceptic.)
There's value in giving the average person a broadly positive impression of EA, and I agree with some of the suggested actions. However, I think some of them risk being applause lights -- it's easy to say we need to be less elitist, etc., but I think the easy changes you can make sometimes don't address fundamental difficulties, and making sweeping changes has hidden costs when you think about what they actually mean.
This is separate from any concern about whether it's better for EA to be a large or small movement.
Edit: big tent actually means "encompassing a broad spectrum of views", not "big movement". I now think this section has some relevance to the OP but does not centrally address the above point.
As I understand it, this means spending more resources on people who are "less elite" and less committed to maximizing their impact. Some of these people will go on to make career changes and have lots of impact, but it seems clear that their average impact will be lower. Right now, EA has limited community-building capacity, so the opportunity cost is huge. If we allocate more resources to "big tent" efforts, ... (read more)
Parents in EA ➔ Raising for Effective Giving
Slogan: Shut up and multiply!
Seeing the discussion play out here lately, and in parallel seeing the topic either not be brought up or be totally censored on LessWrong, has made the following more clear to me:
A huge fraction of the EA community's reputational issues, DEI shortcomings, and internal strife stem from its proximity to/overlap with the rationalist community.
Generalizing a lot, it seems that "normie EAs" (IMO correctly) see glaring problems with Bostrom's statement and want this incident to serve as a teachable moment so the community can improve in some of the respects above, and "rationalist-EAs" want to debate race and IQ (or think that the issue is so minor/"wokeness-run-amok-y" that it should be ignored or censored). This predictably leads to conflict.
(I am sure many will take issue with this, but I suspect it will ring true/help clarify things for some, and if this isn't the time/place to discuss it, I don't know when/where that would be)
[Edit: I elaborated on various aspects of my views in the comments, though one could potentially agree with this comment/not all the below etc.]
Thanks for this review, Richard.
In the section titled, "The Bad," you cite a passage from my essay--"Diversifying Effective Altruism's Longshots in Animal Advocacy"--and then go on to say the following:
"Another author tells us (p. 81):
(Of course, no argument is offered in support of this short-sighted thinking. It’s just supposed to be obvious to all right-thinking individuals. This sort of vacuous moralizing, in the total absence of any sort of grappling with—or even recognition of—opposing arguments, is found throughout the volume.)"
It sounds from your framing like you take it that I assert the claim in question, believe that the alleged claim is obvious, and hold this belief "in the total absence of any sort of grappling with--or even recognition of--opposing arguments."
With respect, I don't think your reading is fair on any of these fronts.
First, I don't assert the claim in quest... (read more)
I agree with the central thrust of this post, and I'm really grateful that you made it. This might be the single biggest thing I want to change about EA leaders' behavior. And relatedly, I think "be more candid, and less nervous about PR risks" is probably the biggest thing I want to change about rank-and-file EAs' behavior. Not because the risks are nonexistent, but because trying hard to avoid the risks via not-super-honest tactics tends to cause more harm than benefit. It's the wrong general policy and mindset.
This seems like an unusually good answer to me! I'm impressed, and this updates me positively about Ben Todd's honesty and precision in answering questions like these.
I think a good description of EA is "the approach that behaves sort of like utilitarianism, when decisions are sufficiently high-stakes and there aren't ethical injunctions in play". I don't think utilitarianism is true, and it's obvious that many EAs aren't utilitarians, and obvious that utilitarianism isn't required for working on EA cause areas, or for being quantitative, systematic, and rigorous in your moral reasoning, etc. Yet it's remarkabl... (read more)
I have been community building in Cambridge UK in some way or another since 2015, and have shared many of these concerns for some time now. Thanks so much for writing them much more eloquently than I would have been able to, thanks!
To add some more anecdotal data, I also hear the 'cult' criticism all the time. In terms of getting feedback from people who walk away from us: this year, an affiliated (but non-EA), problem-specific table coincidentally ended up positioned downstream of the EA table at a freshers' fair. We anecdotally overheard approx 10 groups of 3 people discussing that they thought EA was a cult, after they had bounced from our EA table. Probably around 2000-3000 people passed through, so this is only 1-2% of people we overheard.
I managed to dig into these criticisms a little with a couple of friends-of-friends outside of EA, and got a couple of common pieces of feedback which it's worth adding.
- We are giving away many free books lavishly. They are written by longstanding members of the community. These feel like doctrine, to some outside of the community.
- Being a member of the EA community is all or nothing. My best guess is we haven't thought of anything less intensi... (read more)
The absolute strongest answer to most critiques or problems that have been mentioned recently is—strong object-level work.
If EA has the best leaders, the best projects and the most success in executing genuinely altruistic work, especially in a broad range of cause areas, that is a complete and total answer to:
If we have the world leaders in global health, animal welfare, pandemic prevention, AI safety, and each say, “Hey, EA has the strongest leaders and its ideas and projects are reliably important and successful”, no one will complain about how many free books are handed out.
The FTX and Alameda estates have filed an adversary complaint against the FTX Foundation, SBF, Ross Rheingans-Yoo, Nick Beckstead, and some biosciences firms, available here. I should emphasize that anyone can sue over anything, and allege anything in a complaint (although I take complaints signed by Sullivan & Cromwell attorneys significantly more seriously than I take the median complaint). I would caution against drawing any adverse inferences from a defendant's silence in response to the complaint.
The complaint concerns a $3.25MM "philanthropic gift" made to a biosciences firm (PLS), and almost $70MM in non-donation payments (investments, advance royalties, etc.) -- most of which were also to PLS. The only count against Beckstead relates to the donation. The non-donation payments were associated with Latona, which according to the complaint "purports to be a non-profit, limited liability company organized under the laws of the Bahamas[,] incorporated in May 2022 for the purported purpose of investing in life sciences companies [which] held itself out as being part of the FTX Foundation."
The complaint does not allege that either Beckstead or Rheingans-Yoo knew of the fraud a... (read more)
This interview is crazy.
One overarching theme is SBF lying about many things in past interviews for PR. Much of what he said in this one also looks like that.
How about Caring Tuna? This would surely get support from Open Phil
This doesn't really match my (relatively little) experience. I think it might be because we disagree on what counts as "EA Leadership": we probably have a different idea of what counts as "EA" and/or we have a different idea of what counts as "Leadership".
I think you might be considering as "EA leadership" senior people working in "meta-EA" orgs (e.g. CEA) and "only-EA experience" to also include people doing more direct work (e.g. GiveWell). So the CEO of Open Philanthropy would count as profile #1, having mostly previous experience at Open Philanthropy and GiveWell, but the CEO of GiveWell wouldn't count as profile #2 because they're not in an "EA leadership position". Is that correct?
I think the easiest way would be to compile a list of people in leadership positions and check their LinkedIn profiles.
Working on the assumption above for what you mean by "EA Leadership", while there is no canonical list of “meta-EA leaders”, a non-random sample could be this public list of some Meta Coordination Forum participants.[1]
Here's a quick (and inaccurate) short summar... (read more)
The closing remarks about CH seem off to me.
So I don't expect disbanding CH to improve justice, particularly since you yourself have shown the job to be exhausting and ambiguous at best.
You have, though, rightly received gratitude and praise - which they don't often get, maybe just because we don't often praise people for doing their jobs. I hope the net effect of your work is to inspire people to speak up.
I disagree. Or at least I think the reasons in this post are not very good reasons for Bostrom to step down (it is plausible to me he could pursue more impactful plans somewhere else, potentially by starting a new research institution with less institutional baggage and less interference by the University of Oxford).
Bostrom is as far as I can tell the primary reason why FHI is a successful and truth-oriented research organization. Making a trustworthy research institution is exceptionally difficult, and its success is not primarily measured in the operational quality of its organization, but in the degree to which it produces important, trustworthy and insightful research. Bostrom has succeeded at this, and the group of people (especially the early FHI cast including Anders Sandberg, Eric Drexler, Andrew Snyder Beattie, Owain Evans, and Stuart Armstrong) he has assembled under the core FHI research team have made great contributions to many really important questions that I care about, and I cannot think of any other individual who would have been able to do the same (Sean gives a similar perspective in his comment).
I think Bostrom overstretched himself when he let FHI grow to doze... (read more)
I am opposed to this.
I am also not an EA leader in any sense of the word, so perhaps my being opposed to this is moot. But I figured I would lay out the basics of my position in case there are others who were not speaking up out of fear [EDIT: I now know of at least one bona fide EA leader who is not voicing their own objection, out of something that could reasonably be described as "fear"].
Here are some things that are true:
- Racism is harmful and bad
- Sexism is harmful and bad
- Other "isms" such as homophobia or religious oppression are harmful and bad.
- To the extent that people can justify their racist, sexist, or otherwise bigoted behavior, they are almost always abusing information, in a disingenuous fashion. e.g. "we showed a 1% difference in the medians of the bell curves for these two populations, thereby 'proving' one of those populations to be fundamentally superior!" This is bullshit from a truth-seeking perspective, and it's bullshit from a social progress perspective, and in most circumstances it doesn't need to be entertained or debated at all. In practice, it is already the case that the burden of proof on someone wanting to have a discussion about these things is ove... (read more)
What do EA and the FTX Future Team think of the claim by Kerry Vaughan that Sam Bankman-Fried engaged in severely unethical behavior before, and that EA and FTX covered it up and laundered his reputation, effectively letting him get away with it?
I'm posting because, if true, this suggests big changes to EA norms are necessary to deal with bad actors like him, and that Sam Bankman-Fried should be outright banned from the forum and EA events.
Link to tweets here:
https://twitter.com/KerryLVaughan/status/1590807597011333120
I want to clarify the claims I'm making in the Twitter thread.
I am not claiming that EA leadership or members of the FTX Future fund knew Sam was engaging in fraudulent behavior while they were working at FTX Future Fund.
Instead, I am saying that friends of mine in the EA community worked at Alameda Research during the first 6 months of its existence. At the end of that period, many of them suddenly left all at once. In talking about this with people involved, my impression is:
1) The majority of staff at Alameda were unhappy with Sam's leadership of the company. Their concerns included him taking extreme and unnecessary risks and losing large amounts of money, poor safeguards around moving money, poor capital controls, including a lack of distinction between money owned by investors and money owned by Alameda itself, and Sam generally being extremely difficult to work with.
2) The legal ownership structure of Alameda did not reflect the ownership structure that had been agreed to by the parties involved. In particular, Sam registered Alameda under his sole ownership and not as jointly owned by him and his cofounders. This was not thought t... (read more)
I was one of the people who left at the time described. I don't think this summary is accurate, particularly (3).
(1) seems the most true, but anyone who's heard Sam on a podcast could tell you he has an enormous appetite for risk. IIRC he's publicly stated they bet the entire company on FTX despite thinking it had a <20% chance of paying off. And yeah, when Sam plays League of Legends while talking to famous investors he seems like a quirky billionaire; when he does it to you he seems like a dick. There are a lot of bad things I can say about Sam, but there's no elaborate conspiracy.
Lastly, my severance agreement didn't have a non-disparagement clause, and I'm pretty sure no one's did. I assume that you are not hearing from staff because they are worried about the looming shitstorm over FTX now, not some agreement from four years ago.
When said shitstorm dies down I might post more and under my real name, but for now the phrase "wireless mouse" should confirm me as someone who worked there at the time to anyone else who was also there.
I'm the person that Kerry was quoting here, and am at least one of the reasons he believed the others had signed agreements with non-disparagement clauses. I didn't sign a severance agreement for a few reasons: I wanted to retain the ability to sue, I believed there was a non-disparagement clause, and I didn't want to sign away rights to the ownership stake that I had been verbally told I would receive. Given that I didn't actually sign it, I could believe that the non-disparagement clauses were removed and I didn't know about it, and people have just been quiet for other reasons (of which there are certainly plenty).
I think point 3 is overstated but not fundamentally inaccurate. My understanding was that a group of senior leadership offered Sam to buy him out, he declined, and he bought them out instead. My further understanding is that his negotiating position was far stronger than it should have been due to him having sole legal ownership (which I was told he obtained in a way I think it is more than fair to describe as backstabbing). I wasn't personally involved in those negotiations, in part because I clashed with Sam probably worse than anyone else at the company, which likel... (read more)
I think it is very important to understand what was known about SBF's behaviour during the initial Alameda breakup, and for this to be publicly discussed and to understand if any of this disaster was predictable beforehand. I have recently spoken to someone involved who told me that SBF was not just cavalier, but unethical and violated commonsense ethical norms. We really need to understand whether this was known beforehand, and if so learn some very hard lessons.
It is important to distinguish different types of risk-taking here. (1) There is the kind of risk-taking that promises high payoffs but with a high chance of the bet falling to zero, without violating commonsense ethical norms. (2) There is risk-taking in the sense of being willing to risk it all by secretly violating ethical norms to get more money. One flaw in SBF's thinking seemed to be that risk-neutral altruists should take big risks because the returns can only fall to zero. In fact, the returns can go negative - e.g. all the people he has stiffed, and all of the damage he has done to EA.
In 2021 I tried asking about SBF among what I suppose you could call "EA leadership", trying to distinguish whether to put SBF into the column of "keeps compacts but compacts very carefully" versus "un-Lawful oathbreaker", based on having heard that early Alameda was a hard breakup. I did not get a neatly itemized list resembling this one on either points 1 or 2, just heard back basically "yeah early Alameda was a hard breakup and the ones who left think they got screwed" (but not that there'd been a compact that got broken) (and definitely not that they'd had poor capital controls), and I tentatively put SBF into column 1. If "EA leadership" had common knowledge of what you list under items 1 or 2, they didn't tell me about it when I asked. I suppose in principle that I could've expended some of my limited time and stamina to go and inquire directly among the breakup victims looking for one who hadn't signed an NDA, but that's just a folly of perfect hindsight.
My own guess is that you are mischaracterizing what EA leadership knew.
Huh, I am surprised that no one responded to you on this. I wonder whether I was part of that conversation, and if so, I would be interested in digging into what went wrong.
I definitely would have put Sam into the "un-lawful oathbreaker" category and have warned many people I have been working with that Sam has a reputation for dishonesty and that we should limit our engagement with him (and more broadly I have been complaining about an erosion of honesty norms among EA leadership to many of the current leadership, in which I often brought up Sam as one of the sources of my concern directly).
I definitely had many conversations with people in "EA leadership" (which is not an amazingly well-defined category) where people told me that I should not trust him. To be clear, nobody I talked to expected wide-scale fraud, and I don't think this included literally everyone, but almost everyone I talked to told me that I should assume that Sam lies substantially more than population-level baseline (while also being substantially more strategic about his lying than almost everyone else).
I do want to add to this that in addition to Sam having a reputation for dishonesty, he also had a reputation for being vindictive, and almost everyone who told me about their concerns about Sam did so while seeming quite visibly afraid of retribution from Sam if they were to be identified as the source of the reputation, and I was never given details without also being asked for confidentiality.
I knew about Sam's bad character early on, and honestly I'm confused about what people would have expected me to do.
I should have told people that Sam has a bad character and can't be trusted, that FTX is risky? Well, I did those things, and as far as I can tell, that has made the current situation less bad than it would have been otherwise (yes, it could have been worse!). In hindsight I should have done more of this though.
Should I have told the authorities that Sam might be committing fraud? All I had were vague suspicions about his character and hints that he might be dishonest, but no convincing evidence or specific worries about fraud. (Add jurisdictional problems, concerns about the competence of regulators, etc)
Should I not have "covered up" the early scandal? Well, EAs didn't, and I think Kerry's claim is wrong.
Should I have publicly spread concerns about SBF's character? That borders on slander. Also, I was concerned that SBF would permanently hate me after that (you might say I'm a coward, but hey, try it yourself).
Should I have had SBF banned from EA? Personally, I'm all for a tough stance, but the community is usually against complete bans of bad actors, so it just wasn't feasible. (E.g., if I were in charge, Jacy and Kerry would be banned, but many wouldn't like that.)
SBF was powerful and influential. EA didn't really have power over him.
What could have been done better? I am sincerely curious to get suggestions.
My current, extremely tentative, sense of the situation is not that individuals who were aware of some level of dishonesty and shadiness were not open enough about it. I think individuals acted in pretty reasonable ways, and I heard a good amount of rumors.
I think the error likely happened at two other junctions:
I think if we had some kind of e.g. EA newspaper where people try to actively investigate various things that seem concerning, then I think this would have helped a bunch. This kind of thing could even be circulated privately, though a public version seems also good.
I separately also think... (read more)
I'm unclear how to update on this, but note that Kerry Vaughan was at CEA for 4 years, and a managing director there for one year before, as I understand it, being let go under mysterious circumstances. He's now the program manager at a known cult that the EA movement has actively distanced itself from. So while his comments are interesting, I wouldn't treat him as a particularly credible source, and he may have his own axe to grind.
From CEA's guiding principles:
I used to expect 80,000 Hours to tell me how to have an impactful career. Recently, I've started thinking it's basically my own personal responsibility to figure it out. I think this shift has made me much happier and much more likely to have an impactful career.
80,000 Hours targets the most professionally successful people in the world. That's probably the right idea for them - giving good career advice takes a lot of time and effort, and they can't help everyone, so they should focus on the people with the most career potential.
But, unfortunately for most EAs (myself included), the nine priority career paths recommended by 80,000 Hours are some of the most difficult and competitive careers in the world. If you’re among the 99% of people who are not Google programmer / top half of Oxford / Top 30 PhD-level talented, I’d guess you have slim-to-none odds of succeeding in any of them. The advice just isn't tailored for you.
So how can the vast majority of people have an impactful career? My best answer: A lot of independent thought and planning. Your own personal brainstorming and reading and asking around and exploring, not just following stoc... (read more)
I wish I could be as positive as everyone else, but there are some yellow flags for me here.
Firstly, as Zachary said, these seem to be exactly the same principles CEA has stated for years. If nothing about them is changing, then it doesn't give much reason to think that CEA will improve in areas it has been deficient to date. To quote probably-not-Albert-Einstein, ‘Insanity is doing the same thing over and over again and expecting different results.’
Secondly, I find the principles themselves quite handwavey, and more like applause lights than practical statements of intent. What does 'recognition of tradeoffs' involve doing? It sounds like something that will just happen rather than a principle one might apply. Isn't 'scope sensitivity' basically a subset of the concerns implied by 'impartiality'? Is something like 'do a counterfactually large amount of good' supposed to be implied by impartiality and scope sensitivity? If not, why is it not on the list? If so, why does 'scout mindset' need to be on the list, when 'thinking through stuff carefully and scrupulously' is a prerequisite to effective counterfactual actions? On reading this post, I'm genuinely confused about what a... (read more)
Note: I had drafted a longer comment before Arepo's comment, given the overlap I cut parts that they already covered and posted the rest here rather than in a new thread.
I agree with Arepo that both halves of this claim seem wrong. Four of CEA's five programs, namely Groups, Events, Online, and Community Health, have theories of change that directly route through serving the community. This is often done by quite literally providing them with services that are free, discounted, or just hard to acquire elsewhere. Sure, they are serving the community in order to have a positive impact on the wider world, but that's like saying a business provides a service in order to make a profit; true but irrelevant to the question of whether the directly-served party is a customer.
I speculate that what's going on here is:
- CEA doesn't want to coordinate the community the way any leader or ma... (read more)
EDIT: I've now written up my own account of how we should do epistemic deference in general, which fleshes out more clearly a bunch of the intuitions I outline in this comment thread.
I think that a bunch of people are overindexing on Yudkowsky's views; I've nevertheless downvoted this post because it seems like it's making claims that are significantly too strong, based on a methodology that I strongly disendorse. I'd much prefer a version of this post which, rather than essentially saying "pay less attention to Yudkowsky", is more nuanced about how to update based on his previous contributions; I've tried to do that in this comment, for example. (More generally, rather than reading this post, I recommend people read this one by Paul Christiano, which outlines specific agreements and disagreements. Note that the list of agreements there, which I expect that many other alignment researchers also buy into, serves as a significant testament to Yudkowsky's track record.)
The part of this post which seems most wild to me is the leap from "mixed track record" to... (read more)
I disagree that the sentence is false for the interpretation I have in mind.
I think it's really important to separate out the question "Is Yudkowsky an unusually innovative thinker?" from the question "Is Yudkowsky someone whose credences you should give an unusual amount of weight to?"
I read your comment as arguing for the former,... (read more)
I am not under any non-disparagement obligations to OpenAI.
It is important to me that people know this, so that they can trust any future policy analysis or opinions I offer.
I have no further comments at this time.
I don't think it's witchhunty at all. The fact is we really have very little knowledge about how Will and Nick are involved with FTX. I really don't think they did any fraud or condoned any fraud, and I do genuinely feel bad for them, and I want to hope for the best when it comes to their character. I'm pretty substantially unsure if Will/Nick/others made any ex ante mistakes, but they definitely made severe ex post mistakes and lost a lot of trust in the community as a result.
I think this means three things:
1.) I think Nathan is right about the prior. If we're unsure about whether they made severe ex ante mistakes, we should remove them. I'd only keep them if I was sure they did not make severe ex ante mistakes. I think this applies more forcefully the more severe the mistake was, and the situation with FTX makes me suspect that any mistakes could've been about as severe as you would get.
2.) I think in order to be on EVF's board it's a mandatory job requirement that you maintain the trust of the community, and removing people over this makes sense.
3.) I think a traditional/"normie" board would've 100% removed Will and Nick back in November. Though I don't think that we should always d... (read more)
Thanks for the update.
I'd like to recommend that part of the process review for providing travel grant funding includes consideration of the application process timing for CEA-run or supported events. In my experience, key dates in the process (open, consideration/decision, notification of acceptance, notification of travel grant funding) happen much closer to the date of the event than other academic or trade conferences.
For example, in 2022, several Australian EAs I know applied ~90 days in advance of EAG London or EAG SF, but were accepted only around 30-40 days before the event.
A slow application process creates several issues for international attendees:
- Notice is needed for employment leave. Prospective attendees who are employed usually need to submit an application for leave with 1+ months notice, especially for a trip of ~1 week or longer needed for international travel. Shorter notice can create conflict or ill-feeling between the employee and employer.
- Flight prices increase as the travel date approaches. An Australian report recommended booking international flights 6 months ahead of the date of travel. A Google report recommended booking internati... (read more)
Since the time I have started looking into this, you have:
- incorrectly described the nature of people you talked with around Nonlinear, for which you subsequently apologized.
- incorrectly claimed Nonlinear might be sponsored by Rethink Priorities, which you subsequently retracted. (EDIT: While as per below he did text a board member to check here, I think the example still has some value)
- made likely-incorrect assumptions about libel law, which I subsequently clarified.
- incorrectly predicted what journalists would think of your investigative process, after which we collaborated on a hypothetical to ask journalists, all of whom disagreed with your decision.
- in our direct messages about this post prior to publication, provided a snippet of a private conversation about the ACX meetup board decision where you took a maximally broad interpretation of something I had limited ways of verifying, pressured me to add it as context to this post in a way that would have led to a substantially false statement on my part, then admitted greater confusion to a board member while saying nothing to me about the same, after which I reconfirmed with the same board member that the wording I ch... (read more)
leopold - my key question here would be, if the OpenAI Preparedness team concluded in a year or two that the best way to mitigate AGI risk would be for OpenAI to simply stop doing AGI research, would anyone in OpenAI senior management actually listen to them, and stop doing AGI research?
If not, this could end up being just another example of corporate 'safety-washing', where the company has already decided what they're actually going to do, and the safety team is just along for the ride.
I'd value your candid view on this; I can't actually tell if there are any conditions under which OpenAI would decide that what they've been doing is reckless and evil, and they should just stop.
Some thoughts on the general discussion:
(1) some people are vouching for Kat's character. This is useful information, but it's important to note that behaving badly is very compatible with having many strengths, treating one's friends well, etc. Many people who have done terrible things are extremely charismatic and charming, and even well-meaning or altruistic. It's hard to think bad things about one's friends, but unfortunately it's something we all need to be open to. (I've definitely in the past not taken negative allegations against someone as seriously as I should have, because they were my friend).
(2) I think something odd about the comments claiming that this post is full of misinformation is that they don't correct any of the misinformation. Like, I get that assembling receipts, evidence, etc. can take a while, and writing a full rebuttal of this would take a while. But if there are false claims in the post, pick one and say why it's false.
This makes these interventions seem less sincere to me, because I think if someone posted a bunch of lies about me, in my first comments/reactions I would be less concerned about the meta appropriateness of the post having been post…
Just to clarify, Nonlinear has now picked one claim and provided screenshots relevant to it; I'm not sure if you saw that.
I also want to clarify that I gave Ben a bunch of very specific examples of information in his post that I have evidence are false (responding to the version he sent me hours before publication). He hastily attempted to adjust his post to remove or tweak some of his claims right before publishing based on my discussing these errors with him. It’s a lot easier (and vastly less time consuming) to provide those examples in a private one-on-one with Ben than to provide them publicly (where, for instance, issues of confidentiality become much more complicated, and where documentation and wording need to be handled with extreme care, quite different than the norms of conversation).
The easiest to explain example is that Ben claimed a bunch of very bad sounding quotes from Glassdoor were about Emerson that clearly weren’t (he hadn’t been at the company for years when those complaints were written). Ben acknowledged somewhere in the comments that those were indeed not about Emerson and so that was indeed false information in the original version of the post.
My understand…
This seems like a strange position to me. Do you think people have to have a background in climate science to decide that climate change is the most important problem, or development economics to decide that global poverty is the moral imperative of our time? Many people will not have a background relevant to any major problem; are they permitted to have any top priority?
I think (apologies if I am misunderstanding you) you try to get around this by suggesting that 'mainstream' causes can have much higher priors and lower evidential burdens. But that just seems like deference to wider society, and the process by which mainstream causes became dominant does not seem very epistemically reliable to me.
People have some strong opinions about things like polyamory, but I figured I’d still voice my concern as someone who has been in EA since 2015, but has mostly only interacted with the community online (aside from 2 months in the Bay and 2 in London):
I have nothing against polyamory, but polyamory within the community gives me bad vibes. And the mixing of work and fun seems to go much further than I think it should. It feels like there’s an aspect of “free love” and I am a little concerned about doing cuddle puddles with career colleagues. I feel like all these dynamics lead to weird behaviour people do not want to acknowledge.
I repeat, I am not against polyamory, but I personally do not expect some of this bad behaviour would happen as much in a monogamous setting, since I expect there would be less sliding into sexual actions.
I’ve avoided saying this because I did not want to criticize people for being polyamorous and expected a lot of people would disagree with me without it leading to anything. But I do think the “free love” nature of polyamory with career colleagues opens the door for things we might not want.
Whatever it is (poly within the community might not be part of the issue at all!), I feel like there needs to be a conversation about work and play (that people seem to be avoiding).
Yes, I at least strongly support people reaching out to my staff about opportunities that they might be more excited about than working at Lightcone, and similarly I have openly approached other people working in the EA community at other organizations about working at Lightcone. I think the cooperative atmosphere between different organizations, and the trust that individuals are capable of making the best decisions for themselves on where they can have the best impact, is a thing I really like about the EA community.
I want to share the following, while expecting that it will probably be unpopular.
I feel many people are not being charitable enough to Nonlinear here.
I have only heard good things about Nonlinear, outside these accusations. I know several people who have interacted with them - mainly with Kat - and had good experiences. I know several people who deeply admire her. I have interacted with Kat occasionally, and she was helpful. I have only read good things about Emerson.
As far as I can tell from this and everything I know/have read, it seems reasonable to assume that the people at Nonlinear are altruistic people. They have demonstrably made exceptional commitments to doing good; started organisations, invested a lot of time and money in EA causes, and helped a lot of people.
Right now, on the basis of what could turn out to have been a lot of lies, their reputations, friendships, futures, and careers are at risk of being badly damaged (if not already so).
This may have been (more) justified if the claims in the original post were all found and believed to be clearly true. However, that was not, and is not, clearly the case at this point in time.
At present, …
I think it is entirely possible that people are being unkind because they updated too quickly on claims from Ben's post that are now being disputed, and I'm grateful that you've written this (ditto chinscratch's comment) as a reminder to be empathetic. That being said, people might also be less charitable than you are for reasons that are unrelated to them being unkind, or to the facts that are in contention:
Without commenting on whether Ben's original post should have been approached better or worded differently or was misleading etc, this comment from the Community Health/Special Projects team might add some useful additional context. There are also previous allegations that have been raised.[1]
Perhaps you are including both of these as part of the same set of allegations, but some may suggest that not being permitted to run sessions / recruit at EAGs and considering blocking attendance (especially given the reference class of …
Since Frances is not commenting more:
This rhetorical strategy is analogous to a prosecutor showing smiling photos of a couple on vacation to argue that he couldn’t have possibly murdered her, or showing flirty texts between a man and woman to argue that he couldn’t have raped her, etc. This is a bad rhetorical strategy when prosecutors use it—and it’s a bad rhetorical strategy here—because it perpetuates misinformation about what abusive relationships look like; namely, that they are uniformly bad, with no happy moments or mitigating qualities.
As anyone who has been in an abusive relationship will tell you, this is rarely what abuse looks like. And you insinuating that Chloe and Alice are lying because there were happy-appearing moments is exactly the kind of thing that makes many victims afraid to come forward.
To be clear: I do not think these photos provide any evidence against the allegations in Ben’s post because no one is contesting that the group hung out in tropical locations. Additionally, having hung out in tropical locations is entirely compatible with the allegations made in the initial post. Ironically, this rhetorical strategy—the photos, the assertion that this was a …
Hi everyone,
To fully disclose my biases: I’m not part of EA, I’m Greg’s younger sister, and I’m a junior doctor training in psychiatry in the UK. I’ve read the comments, the relevant areas of HLI’s website, and the Ozler study registration, and spent more time than needed looking at the dataset in the Google doc and clicking random papers.
I’m not here to pile on, and my brother doesn’t need me to fight his corner. I would inevitably undermine any statistics I tried to back up due to my lack of talent in this area. However, this is personal to me, not only because I wonder about the fate of my Christmas present (Greg donated to Strongminds on my behalf), but also because I am deeply sympathetic to HLI’s stance that mental health research and interventions are chronically neglected, misunderstood and under-funded. I have a feeling I’m not going to match the tone here as I’m not part of this community (and apologise in advance for any offence caused), but perhaps I can offer a different perspective as a doctor with clinical practice in psychiatry and on an academic fellowship (i.e. I have dedicated research time in the field of mental health).
The conflict seems to be that, on one hand, HLI has im…
Hello Michael,
Thanks for your reply. In turn:
1:
HLI has, in fact, put a lot of weight on the d = 1.72 Strongminds RCT. As table 2 shows, you give a weight of 13% to it - joint highest out of the 5 pieces of direct evidence. As there are ~45 studies in the meta-analytic results, this means this RCT is being given equal or (substantially) greater weight than any other study you include. For similar reasons, the Strongminds phase 2 trial is accorded the third highest weight out of all studies in the analysis.
HLI's analysis explains the rationale behind the weighting of "using an appraisal of its risk of bias and relevance to StrongMinds’ present core programme". Yet table 1A notes the quality of the 2020 RCT is 'unknown' - presumably because Strongminds has "only given the results and some supporting details of the RCT". I don't think it can be reasonable to assign the highest weight to an (as far as I can tell) unpublished, non-peer-reviewed, unregistered study conducted by Strongminds on its own effectiveness reporting an astonishing effect size - before it has even been read in full. It should be dramatically downweighted or wholly discounted until then, rather than included a…
In 2018, I collected data about several types of sexual harassment on the SSC survey, which I will report here to help inform the discussion. I'm going to simplify by assuming that only cis women are victims and only cis men are perpetrators, even though that's bad and wrong.
Women who identified as EA were less likely to report lifetime sexual harassment at work than other women, 18% vs. 20%. They were also less likely to report being sexually harassed outside of work, 57% vs. 61%.
Men who identified as EA were less likely to admit to sexually harassing people at work (2.1% vs. 2.9%) or outside of work (16.2% vs. 16.5%).
The sample was 270 non-EA women, 99 EA women, 4940 non-EA men, and 683 EA men. None of these results were statistically significant, although all of them trended in the direction of EAs experiencing less sexual harassment.
This doesn't prove that EA environments have less harassment than the average environment, since it could be that EAs are biased to have less sexual harassment for other reasons, and whatever additional harassment they get in EA isn't enough to make up for it; the vast majority of EAs have the vast majority of interactions in non-EA environmen…
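For readers who want to sanity-check the significance claim above, here is a minimal sketch in Python, assuming approximate counts reconstructed by rounding the reported percentages against the stated sample sizes (the survey's exact counts may differ slightly):

```python
# Hypothetical reconstruction: ~18% of 99 EA women vs ~20% of 270 non-EA women
# reported workplace sexual harassment. A chi-square test on these rounded
# counts illustrates why a gap this small, at these sample sizes, does not
# reach statistical significance.
from scipy.stats import chi2_contingency

ea_women = [18, 99 - 18]          # [reported harassment, did not]
non_ea_women = [54, 270 - 54]

chi2, p, dof, expected = chi2_contingency([ea_women, non_ea_women])
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p comes out well above 0.05
```

This is only an illustration of the statistical-power point the comment makes; it is not a re-analysis of the underlying survey data.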
I thank you for apologizing publicly and loudly. I imagine that you must be in a really tough spot right now.
I think I feel a bit conflicted on the way you presented this.
I treat our trust in FTX and dealings with him as bureaucratic failures. Whatever measures we had in place to deal with risks like this weren't enough.
This specific post reads a bit to me like it's saying, "We have some blog posts showing that we said these behaviors are bad, and therefore you could trust both that we follow these things and that we encourage others to, even privately." I'd personally prefer it, in the future, if you wouldn't focus on the blog posts and quotes. I think they just act as very weak evidence, and using them this way makes it feel a bit as though they were more than that.
Almost every company has lots of public documents outlining their commitments to moral virtues.
I feel pretty confident that you were ignorant of the fraud. I would like there to be more clarity of what sorts of concrete measures were in place to prevent situations like this, and what measures might change in the future to help make sure this doesn't happen again.
There might also be many other concrete things that could be don…
EA forum content might be declining in quality. Here are some possible mechanisms:
- Newer EAs have worse takes on average, because the current processes of recruitment and outreach produce a worse distribution than the old ones
- Newer EAs are too junior to have good takes yet. It's just that the growth rate has increased so there's a higher proportion of them.
- People who have better thoughts get hired at EA orgs and are too busy to post. There is anticorrelation between the amount of time people have to post on EA Forum and the quality of person.
- Controversial content, rather than good content, gets the most engagement.
- Although we want more object-level discussion, everyone can weigh in on meta/community stuff, whereas they only know about their own cause areas. Therefore community content, especially shallow criticism, gets upvoted more. There could be a similar effect for posts by well-known EA figures.
- Contests like the criticism contest decrease average quality, because the type of person who would enter a contest to win money on average has worse takes than the type of person who has genuine deep criticism. There were 232 posts for the criticism contest, and 158 for the Cause Explora…
Thanks for writing this! I'd been putting something together, but this is much more thorough.
Here are the parts of my draft that I think still add something:
I'm interested in two overlapping questions:
While I've previously advocated giving friendly organizations a chance to review criticism and prepare a response in advance, primarily as a question of politeness, that's not the issue here. As I commented on the original post, the norm I've been pushing is only intended for cases where you have a neutral or better relationship with the organization, and not situations like this one where there are allegations of mistreatment or you don't trust them to behave cooperatively. The question here instead is, how do you ensure the accusations you're signal-boosting are true?
Here's my understanding of the timeline of 'adversarial' fact checking before publication: timeline. Three key bits:
- LC first shared the overview of claims 3d before posting.
- LC first shared the draft 21hr before posting, which included additional accusations
- NL responded to both by asking for a week to gather evidence that they claime…
For those who agree with this post (I at least agree with the author's claim if you replace most with more), I encourage you to think about what you personally can do about it.
I think EAs are far too willing to donate to traditional global health charities, not due to them being the most impactful, but because they feel the best to donate to. When I give to AMF I know I'm a good person who had an impact! But this logic is exactly what EA was founded to avoid.
I can't speak for animal welfare organizations outside of EA, but at least for the ones that have come out of Effective Altruism, they tell me that funding is a major issue. There just aren't that many people willing to make a risky donation to a new charity working on fish welfare, for example.
Those who would be risk-willing enough to give to eccentric animal welfare or global health interventions tend to also be risk-willing enough with their donations to instead give to orgs working on existential risks. I'm not claiming this is incorrect of them to do, but this does mean that there is a dearth of funding for high-risk interventions in the neartermist space.
I donated a significant part of my personal runway to help fund a new animal welfare org, which I think counterfactually might not have gotten started if not for this. If you, like me, think animal welfare is incredibly important and previously have donated to Givewell's top charities, perhaps consider giving animal welfare a try!
Should we fund people for more years at a time? I've heard that various EA organisations and individuals with substantial track records still need to apply for funding one year at a time, because they either are refused longer-term funding, or they perceive they will be.
For example, the LTFF page asks for applications to be "as few as possible", but clarifies that this means "established organizations once a year unless there is a significant reason for submitting multiple applications". Even the largest organisations seem to only receive OpenPhil funding every 2-4 years. For individuals, even if they are highly capable, ~12 months seems to be the norm.
Offering longer (2-5 year) grants would have some obvious benefits:
The biggest benefit, though, I think, is that:
Job security is something people value immensely. This is especially true as you get older (something I've noticed tbh), and would be even more so for someone trying to raise kids. In the EA economy, many people get by on short-term gr…
I thought the previous article by Charlotte Alter on sexual misconduct in EA was pretty misleading in a lot of ways, as the top comments have pointed out, since it omitted a lot of crucial context, primarily used examples from the fringes of the community, and omitted various enforcement actions that were taken against the people mentioned in the article, which I think overall produced an article that had some useful truths in it, but made it really quite hard for readers to come to a good map of what is actually going on with that kind of stuff in EA.
This article, in contrast, does not have, as far as I can tell, any major misrepresentations in it. I do not know the details about things like conversations between Will and Tara, of course, since I wasn't there, and I have a bit of a feeling there is some exaggeration in the quotes by Naia here, but having done my own investigation and having talked to many people about this, the facts and rough presentation of what happened here seem basically correct.
It still has many of the trappings of major newspaper articles, and I think it continues to not be amazingly well-optimized for people to come to a clear understanding of the details, …
Thank you for sharing this. As a distinct matter, the specific way FTX failed also makes me more concerned about the viability of a certain type of mindset that seems somewhat common and normalized amongst some in the EA community.
I believe Sam's adherence to the above-referenced beliefs played a critical role in FTX's story. I don't think that any one of these beliefs is inherently problematic, but I have adjusted downwards against those who hold all of them.
While I agree with the substance of this comment to a great extent, I want to note that EA also has a problem of being much more willing to tolerate abstract criticism than concrete criticism.
If I singled out a specific person in EA and accused them of significant conflicts of interest or of being too unqualified and inexperienced to work on whatever they are currently working on, the reaction in the forum would be much more negative than it was to this comment.
If you really believe the issues raised in the comment are important, take it seriously when people raise these concerns in concrete cases.
This is Alex Cohen, GiveWell senior researcher, responding from GiveWell's EA Forum account.
Joel, Samuel and Michael — Thank you for the deep engagement on our deworming cost-effectiveness analysis.
We really appreciate you prodding us to think more about how to deal with any decay in benefits in our model, since it has the potential to meaningfully impact our funding recommendations.
We agree with HLI that there is some evidence for benefits of deworming declining over time and that this is an issue we haven’t given enough weight to in our analysis.
We’re extremely grateful to HLI for bringing this to our attention and think it will allow us to make better decisions on recommending funding to deworming going forward.
We would like to encourage more of this type of engagement with our research. We’re planning to announce prizes for criticism of our work in the future. When we do, we plan to give a retroactive prize to HLI.
We’re planning to do additional work to incorporate this feedback into an updated deworming cost-effectiveness estimate. In the meantime, we wanted to share our initial thoughts. At a high level:
- We agree with HLI that there is some evidence for benefits of deworming…
I previously gave a fair bit of feedback to this document. I wanted to quickly give my take on a few things.
Overall, I found the analysis interesting and useful. However, I overall have a somewhat different take than Nuno did.
On OP:
- Aaron Gertler / OP were given a previous version of this that was less carefully worded. To my surprise, he recommended going forward with publishing it, for the sake of community discourse. I’m really thankful for that.
- This analysis didn’t get me to change my mind much about Open Philanthropy. I thought fairly highly of them before and after, and expect that many others who have been around would think similarly. I think they’re a fair bit away from being an “idealized utilitarian agent” (in part because they explicitly claim not to be), but still much better than most charitable foundations and the like.
On this particular issue:
- My guess is that in the case of criminal justice reform, there were some key facts of the decision-making process that aren’t public and are unlikely to ever be public. It’s very common in large organizations for compromises to be made for various political or social reasons, for exampl…
I share the surprise and dismay other commenters have expressed about the experience you report around drafting this preprint. While I disagree with many claims and framings in the paper, I agreed with others, and the paper as a whole certainly doesn't seem beyond the usual pale of academic dissent. I'm not sure what those who advised you not to publish were thinking.
In this comment, I'd like to turn attention to the recommendations you make at the end of this Forum post (rather than the preprint itself). Since these recommendations are somewhat more radical than those you make in the preprint, I think it would be worth discussing them specifically, particularly since in many cases it's not clear to me exactly what is being proposed.
Having written what follows, I realise it's quite repetitive, so if in responding you wanted to pick a few points that are particularly important to you and focus on those, I wouldn't blame you!
You claim that EA needs to...
Can you explain what this would mean in practice? Most of the big funding bodies are private foundations. How would we go about breaking these up? Who even is "we" in th…
I'm a POC, and I've been recruited by multiple AI-focused longtermist organizations (in both leadership and research capacities) but did not join for personal reasons. I've participated in online longtermist discussions since the 1990s, and AFAICT participants in those discussions have always skewed white. Specifically I don't know anyone else of Asian descent (like myself) who was a frequent participant in longtermist discussions even as of 10 years ago. This has not been a problem or issue for me personally – I guess different groups participate at different rates because they tend to have different philosophies and interests, and I've never faced any racism or discrimination in longtermist spaces or had my ideas taken less seriously for not being white. I'm actually more worried about organizations setting hiring goals for themselves that assume that everyone does have the same philosophies and interests, potentially leading to pathological policies down the line.
Nick is being so characteristically modest in his descriptions of his role here. He was involved in EA right from the start — one of the members of Giving What We Can at launch in 2009 — and he soon started running our first international chapter at Rutgers, before becoming our director of research. He contributed greatly to the early theory of effective altruism and, along with Will and me, was one of the three founding trustees of the Centre for Effective Altruism. I had the great pleasure of working with him in person for a while at Oxford University, before he moved back to the States to join Open Philanthropy. He was always thoughtful, modest, and kind. I'm excited to see what he does next.
Thanks for sharing this; I especially appreciate the transparency that she resigned because of strategic disagreements.
I realize I am quite repetitive about this, but I really think EV/CEA would benefit from being more transparent with the community, especially about simple issues like 'who is currently in charge'. In this case I noticed the change on the website 18 days ago, and the actual handover may(?) have taken place prior to that point. My impression is that most normal organizations with public stakeholders announce leadership changes more or less immediately and I don't understand why EV doesn't.
In this "quick take", I want to summarize some my idiosyncratic views on AI risk.
My goal here is to list just a few ideas that cause me to approach the subject differently from how I perceive most other EAs view the topic. These ideas largely push me in the direction of making me more optimistic about AI, and less likely to support heavy regulations on AI.
(Note that I won't spend a lot of time justifying each of these views here. I'm mostly stating these points without lengthy justifications, in case anyone is curious. These ideas can perhaps inform why I spend significant amounts of my time pushing back against AI risk arguments. Not all of these ideas are rare, and some of them may indeed be popular among EAs.)
- Skepticism of the treacherous turn: The treacherous turn is the idea that (1) at some point there will be a very smart unaligned AI, (2) when weak, this AI will pretend to be nice, but (3) when sufficiently strong, this AI will turn on humanity by taking over the world by surprise, and then (4) optimize the universe without constraint, which would be very bad for humans.
By comparison, I find it more likely that no individual AI will ever be strong enough to take over
Here's a post with me asking the question flat out: Why hasn't EA done an SBF investigation and postmortem?
This seems like an incredibly obvious first step from my perspective, not something I'd have expected a community like EA to be dragging its heels on years after the fact.
We're happy to sink hundreds of hours into fun "criticism of EA" contests, but when the biggest disaster in EA's history manifests, we aren't willing to pay even one investigator to review what happened so we can get the facts straight, begin to rebuild trust, and see if there's anything we should change in response? I feel like I'm in crazytown; what the heck is going on?
Update Apr. 4: I’ve now spoken with another EA who was involved in EA’s response to the FTX implosion. To summarize what they said to me:
- They thought that the lack of an investigation was primarily due to general time constraints and various exogenous logistical difficulties. At the time, they thought that setting up a team who could overcome the various difficulties would be extremely hard for mundane reasons such as:
- thorough, even-handed investigations into sensitive topics are very hard to do (especially if you start out low-context);
- this is especially true when they are vaguely scoped and potentially involve a large number of people across a number of different organizations;
- “professional investigators” (like law firms) aren’t very well-suited to do the kind of investigation that would actually be helpful;
- legal counsels were generally strongly advising people against talking about FTX stuff in general;
- various old confidentiality agreements would make it difficult to discuss what happened in some relevant instances (e.g. meetings that had Chatham House Rules);
- it would be hard to guarantee confidentiality in the investigation when info might be subpoenaed or something like that;
- a…
I’d like to chime in here. I can see how you might think that there’s a coverup or the like, but the Online team (primarily Ben Clifford and I, with significant amounts of input from JP and others on the team) made the decision to run this test based on feedback we’d been hearing for a long time from a variety of people, and discussions we’d had internally (also for a long time). And I didn’t know about Owen’s actions or resignation until today. (Edited to add: no one on the Online team knew about this when we were deciding to go forward with the test.)
We do think it’s important for people in EA to hear this news, and we’re talking about how we might make sure that happens. I know I plan on sharing one or both of these posts in the upcoming Digest, and we expect one or both of the posts to stay at the top of the Community page for at least a few days. If the posts drift down, we’ll probably pin one somehow. We’re considering moving them out of the section, but we’re conflicted; we do endorse the separation of Community and other content, and keeping the test going, and moving them out would violate this. We’ll keep talking about it, but I figured I would let you know what our thoughts are at the moment.
I really want to be in favor of having a less centralized media policy, and do think some level of reform is in-order, but I also think "don't talk to journalists" is just actually a good and healthy community norm in a similar way that "don't drink too much" and "don't smoke" are good community norms, in the sense that I think most journalists are indeed traps, and I think it's rarely in the self-interest of someone to talk to journalists.
Like, the relationship I want to have to media is not "only the sanctioned leadership can talk to media", but more "if you talk to media, expect that you might hurt yourself, and maybe some of the people around you".
I think almost everyone I know who has taken up requests to be interviewed about some community-adjacent thing in the last 10 years has regretted their choice, not because they were punished by the community or something, but because the journalists ended up twisting their words and perspective in a way that both felt deeply misrepresentative and gave the interviewee no way to object or correct anything.
So, overall, I am in favor of some kind of change to our media policy, but also continue to think that the honest and true advice for talking to media is "don't, unless you are willing to put a lot of effort into this".
This is a quick post to talk a little bit about what I’m planning to focus on in the near and medium-term future, and to highlight that I’m currently hiring for a joint executive and research assistant position. You can read more about the role and apply here! If you’re potentially interested, hopefully the comments below can help you figure out whether you’d enjoy the role.
Recent advances in AI, combined with economic modelling (e.g. here), suggest that we might well face explosive AI-driven growth in technological capability in the next decade, where what would have been centuries of technological and intellectual progress on a business-as-usual trajectory occur over the course of just months or years.
Most effort to date, from those worried by an intelligence explosion, has been on ensuring that AI systems are aligned: that they do what their designers intend them to do, at least closely enough that they don’t cause catastrophically bad outcomes.
But even if we make sufficient progress on alignment, humanity will have to make, or fail to make, many hard-to-reverse decisions with important and long-lasting consequences. I call these decisions Grand C…
DM conversation I had with Eliezer in response to this post. Since it was a private convo and I was writing quickly I had somewhat exaggerated in a few places that I've now indicated with edits.
It's not accurate that the key ideas of Superintelligence came to Bostrom from Eliezer, who originated them. Rather, at least some of the main ideas came to Eliezer from Nick. For instance, in one message from Nick to Eliezer on the Extropians mailing list, dated to Dec 6th 1998, inline quotations show Eliezer arguing that it would be good to allow a superintelligent AI system to choose its own morality. Nick responds that it's possible for an AI system to be highly intelligent without being motivated to act morally. In other words, Nick explains to Eliezer an early version of the orthogonality thesis.
Nick was not lagging behind Eliezer on evaluating the ideal timing of a singularity, either - the same thread reveals that they both had some grasp of the issue. Nick said that the fact that 150,000 people die per day must be contextualised against "the total number of sentiences that have died or may come to live", foreshadowing his piece on Astronomical Waste, which would be published five years later. Eliezer said that having waited billions of years, the probability of a…
This post uses an alarmist tone to trigger emotions ("the vultures are circling"). I'd like to see more light and less heat. How common is this? What's the evidence?
People have strong aversions to cheating and corruption, which is largely a good thing - but it can also lead to conversations on such issues getting overly emotional in a way that's not helpful.
I might be in the minority here, but I liked the style this post was written in, emotive language and all. It was flowery language, but that made it fun to read, and I did not find it to be alarmist (e.g. it clearly says “this problem has yet to become an actual problem”).
And more importantly, I think the EA Forum is already a daunting place and it is hard enough for newcomers to post here without having to face everyone upvoting criticisms of their tone / writing style / post title. It is not the perfect post (I think there is a very valid critique in what Stefan says that the post could have benefited from linking to some examples / evidence) but not everything here needs to be in perfect EA-speak. Especially stuff from newcomers.
So welcome CitizenTen. Nice to have you here and to hear your views. I want to say I enjoyed reading the post (don’t fully agree tho) and thank you for it. :-)
While SBF presents himself here as incompetent rather than malicious and fraudulent, his account here contradicts previous reporting in (at least) two nontrivial ways.
A quick note from a moderator (me) about discussions about recent events related to FTX:
- It’s really important for us to be able to discuss all perspectives on the situation with an open mind and without censoring any perspectives.
- And also:
- Our discussion norms are still important — we won’t suspend them for this topic.
- It’s a stressful topic for many involved, so people might react more emotionally than they usually do.
- The situation seems very unclear and likely to evolve, so I expect that we’ll see conclusions made from partial information that will turn out to be false fairly soon.
- That’s ok (errors happen), but…
- We should be aware that this is the case, caveat statements appropriately, avoid deferring or updating too much, and be prepared to say “I was wrong here.”
- So I’d like to remind everyone:
- Please don’t downvote comments simply or primarily because you disagree with them (that’s what “disagree-voting” is for!). You can downvote if you think a comment is particularly low-quality, actively harmful, or seriously breaks discussion norms (if it’s the latter, consider flagging it to the moderation team).
- Please keep an open and gener…
I think we should think carefully about the norm being set by the comments here.
This is an exceptionally transparent and useful grant report (especially Oliver Habryka's). It's helped me learn a lot about how the fund thinks about things, what kind of donation opportunities are available, and what kind of things I could (hypothetically if I were interested) pitch the LTF fund on in the future. To compare it to a common benchmark, I found it more transparent and informative than a typical GiveWell report.
But the fact that Habryka now must defend all 14 of his detailed write-ups against bikeshedding, uncharitable, and sometimes downright rude commenters seems like a strong disincentive against producing such reports in the future, especially given that the LTF fund is so time constrained.
If you value transparency in EA and want to see more of it (and you're not a donor to the LTF fund), it seems to me like you should chill out here. That doesn't mean don't question the grants, but it does mean you should:
I really appreciate the time people have taken to engage with this post (and actually hope the attention cost hasn’t been too significant). I decided to write some post-discussion reflections on what I think this post got right and wrong.
The reflections became unreasonably long - and almost certainly should be edited down - but I’m posting them here in a hopefully skim-friendly format. They cover what I see as some mistakes with the post, first, and then cover some views I stand by.
Things I would do differently in a second version of the post:
1. I would either drop the overall claim about how much people should defer to Yudkowsky — or defend it more explicitly
At the start of the post, I highlight the two obvious reasons to give Yudkowsky's risk estimates a lot of weight: (a) he's probably thought more about the topic than anyone else and (b) he developed many of the initial AI risk arguments. I acknowledge that many people, justifiably, treat these as important factors when (explicitly or implicitly) deciding how much to defer to Yudkowsky.
Then the post gives some evidence that, at each stage of his career, Yudkowsky has made a dramatic, seemingly overconfident prediction about tec…
In this spirit, here are some x-risk sceptical thoughts:
- You could reasonably think human extinction this century is very unlikely. One way to reach this conclusion is simply to work through the most plausible causes of human extinction, and reach low odds for each. Vasco Grilo does this for (great power) conflict and nuclear winter, John Halstead suggests extinction risk from extreme climate change is very low here, and the background rate of extinction from natural sources can be bounded by (among other things) observing how long humans have already been around for. That leaves extinction risk from AI and (AI-enabled) engineered pandemics, where discussion is more scattered and inconclusive. Here and here are some reasons for scepticism about AI existential risk.
- Even if the arguments for AI x-risk are sound, then it's not clear how they are arguments for expecting literal human extinction over outcomes like ‘takeover’ or ‘disempowerment’. It's hard to see why AI takeover would lead to smouldering ruins, versus continued activity and ‘life’, just a version not guided by humans or their values.
- So “existential catastrophe” probably shouldn't just mean "human extinction". But then it
... (read more)A meta- norm I'd like commentators[1] to have is to Be Kind, When Possible. Some subpoints that might be helpful for enacting what I believe to be the relevant norms:
- Try to understand/genuinely grapple with the awareness that you are talking to/about actual humans on the other side, not convenient abstractions/ideological punching bags.
- For example, most saliently to me, the Manifest organizers aren't an amorphous blob of bureaucratic institutions.
- They are ~3 specific people, all of whom are fairly young, new to organizing large events, and under a lot of stress as it is.
- Rachel in particular played a (the?) central role in organizing, despite being 7(?) months pregnant. Organizing a new, major multiday event under such conditions is stressful enough as it is, and I'm sure the Manifest team in general, and Rachel in particular, was hoping they could relax a bit at the end.
- It seems bad enough that a hit piece in the Guardian is written about them, but it's worse when "their" community wants to pile on, etc.
- I'm not saying that you shouldn't criticize people. Criticism can be extremely valuable! But there are constructive, human, ways to criticize, and then th…
Elon Musk
Stuart Buck asks:
“[W]hy was MacAskill trying to ingratiate himself with Elon Musk so that SBF could put several billion dollars (not even his in the first place) towards buying Twitter? Contributing towards Musk's purchase of Twitter was the best EA use of several billion dollars? That was going to save more lives than any other philanthropic opportunity? Based on what analysis?”
Sam was interested in investing in Twitter because he thought it would be a good investment; it would be a way of making more money for him to give away, rather than a way of “spending” money. Even prior to Musk being interested in acquiring Twitter, Sam mentioned he thought that Twitter was under-monetised; my impression was that that view was pretty widely-held in the tech world. Sam also thought that the blockchain could address the content moderation problem. He wrote about this here, and talked about it here, in spring and summer of 2022. If the idea worked, it could make Twitter somewhat better for the world, too.
I didn’t have strong views on whether either of these opinions were true. My aim was just to introduce the two of them, and let them have a conversation and take it from th…
Thank you for your answer Marcus.
What bothers me is that if I said that I was excited about funding WAW research, no one would have said anything. I was free to say that. But to say that I’m not excited, I have to go through all these hurdles. This introduces a bias because a lot of the time researchers won’t want to go through hurdles, and opinions that would indirectly threaten RP’s funding won’t be shared. Hence, funders would have a distorted view of researchers' opinions.
Put yourself into my shoes. OpenPhil sends an email to multiple people asking for opinions on a WAW grant. What I did was that I wrote a list of pros and cons about funding that grant, recommended funding it, and pressed “send”. It took like 30 minutes. Later OpenPhil said that it helped them to make the decision. Score! I felt energized. I probably had more impact in those 30 minutes than I had in three months of writing about aquatic noise.
Now imagine I knew that I had to inform the management about saying that I’m not excited about WAW. My manager was new to RP; he would’ve needed to escalate to directors. Writing my manager’s manager’s manager a message like “Can I write this thing that threatens…
I've confirmed with a commenter here, who left a comment positive of Nonlinear, that they were asked to leave that comment by Nonlinear. I think this is low-integrity behaviour on the part of Nonlinear, and an example of brigading. I would appreciate the forum team looking into this.
Edit: I have been asked to clarify that they were encouraged to comment by Nonlinear, rather than asked to comment positively (or anything in particular).
I think asking your friends to vouch for you is quite possibly okay, but that people should disclose there was a request.
It's different evidence between "people who know you who saw this felt motivated to share their perspective" vs "people showed up because it was requested".
I'm not sure that should count as brigading or unethical in these circumstances as long as they didn't ask people to vote a particular way.
Remember that even though Ben is only a single author, he spent a bunch of time gathering negative information from various sources[1]. I think that in order to be fair, we need to allow them to ask people to present the other side of the story. Also consider: if Kat or Emerson had posted a comment containing a bunch of positive comments from people, then I expect that everyone would be questioning why those people hadn't made the comments themselves.
I think it might also be helpful to think about it from the opposite perspective. Would anyone accuse me of brigading if I theoretically knew other people who had negative experiences with Nonlinear and suggested that they might want to chime in?
If not, then we've created an asymmetry where people are allowed to do things in terms of criticism, but not in terms of defense, which seems like a mistake to me.
That said, it is useful for us to know that some of these comments were solicited.
Disclaimer: I formerly interned at Nonlinear. I don't want my meta-level stance to be taken as support of the actio…
Rohit - if you don't believe in epistemic integrity regarding controversial views that are socially stigmatized, you don't actually believe in epistemic integrity.
You threw in some empirical claims about intelligence research, e.g. 'There's plenty of well reviewed science in the field that demonstrates that, varyingly, there are issues with measurements of both race and intelligence, much less how they evolve over time, catch up speeds, and a truly dizzying array of confounders.'
OK. Ask yourself the standard epistemic integrity checks: What evidence would convince you to change your mind about these claims? Can you steel-man the opposite position? Are you applying the scout mindset to this issue? What were your Bayesian priors about this issue, and why did you have those priors, and what would update you?
It's OK for EAs to see a highly controversial area (like intelligence research), to acknowledge that learning more about it might be a socially handicapping infohazard, and to make a strategic decision not to touch the issue with a 10-foot-pole -- i.e. to learn nothing more about it, to say nothing about it, and if asked about it, to respond 'I haven't studied thi…
Hi, thank you for starting this conversation! I am an EA outsider, so I hope my anecdata is relevant to the topic. (This is my first post on the forums.) I found my way to this post during an EA rabbit hole after signing up for the "Intro to EA" Virtual Program.
To provide some context, I heard about EA a few years ago from my significant other. I was/am very receptive to EA principles and spent several weeks browsing through various EA resources/material after we first met. However, EA remained in my periphery for around three years until I committed to giving EA a fair shake several weeks ago. This is why I decided to sign up for the VP.
I'm mid-career instead of enrolled in university, so my perspective is not wholly within the scope of the original post. However, I like to think that I have many qualities the EA community would like to attract:
- I (dramatically) changed careers to pursue a role with a more significant positive impact and continue to explore how I can apply myself to do the "most good".
- I'm well-educated (1 bachelor's degree & 2 master's degrees)
- As a scientist for many years, I value evidence-based decision-making and rationalit…
Note: I discuss Open Phil to some degree in this comment. I also start work there on January 3rd. These are my personal views, and do not represent my employer.
Epistemic status: Written late at night, in a rush, I'll probably regret some of this in the morning but (a) if I don't publish now, it won't happen, and (b) I did promise extra spice after I retired.
It seems valuable to separate "support for the action of writing the paper" from "support for the arguments in the paper". My read is that the authors had a lot of the former, but less of the latter.
From the original post:
While "invalid" seems like too strong a word for a critic to use (and I'd be disappointed in any critic who did use it), this sounds like people were asked to review/critique the paper and then offered reviews and critiques of the paper.
Still, to the degree that ther…
I think people are overcomplicating this. You should generally follow the law, but to shield against risks that you are being such a stickler in unreasonable ways (trying to avoid "3 felonies a day"), you can just imagine whether uninvolved peers hearing about your actions would think a situation is obviously okay. Some potential ways to think about such peer groups:
- What laws people in the country you live in think are absolutely normal and commonplace to break.
- For example, bribing police officers is generally illegal, but iiuc in some countries approximately everybody bribes police officers at traffic stops
- What laws people in your home country think are illegitimate and thus worth breaking
- For example some countries ban homosexuality, but your typical American would not consider it blameworthy to be gay.
- What laws other EAs (not affiliated in any way with your organization) think are okay to break.
- So far, candidates people gave include ag-gag laws and taking stimulants for undiagnosed ADHD.
- FWIW I'm not necessarily convinced that the majority of EAs agree here; I'd like to see polls.
- What laws your non-EA friends think are totally okay to break
- For example, most college-educated Mil…
TL;DR
Lots of good critical points in this post. However, I would want readers to note that:
INTRODUCTION
Thank you for posting. (And thank you for sharing a draft of your post with me before posting so I could start drafting this reply).
I have been the main person from the EA community working on the bill campaign. I have never been in the driving seat for the bill but I have had some influence over it.
I agree with many of the points raised. At a high level I agree for example that there is no "compelling evidence" that this bill wo…
Adding some more data from my own experience last year.
Personally, I'm glad about some aspects of it and struggled with others, and there are some things I wish I had done differently, at least in hindsight. But here I just mean to quickly provide data I have collected anyway in a 'neutral' way, without implying anything about any particular application.
Total time I spent on 'career change' in 2018: at least 220h, of which at least about 101h were for specific applications. (The rest were things like: researching job and PhD opportunities; interviewing people about their jobs and PhD programs; asking people I've worked with for input and feedback; reflection before I decided in January to quit my previous job at the EA Foundation by April.) This includes neither the week I spent in San Francisco to attend EAG SF (during which I was able to do little other work) nor 250h of self-study that seems robustly useful but which I might not have done otherwise. (Nor 6 full weeks plus about 20h afterwards I spent doing an internship at an EA org, which overall I'm glad I did but might not have done otherwise.)
- Open Phil Research Analyst - rejected af…
I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.
Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for there to be more capacity, but trustee recruitment has moved more slowly than I’d anticipated, and with the ongoing recusal I didn’t expect to add much capacity for the foreseeable future, so it felt like a natural time to step down.
It’s been quite a ride over the last eleven years. Effective Ventures has grown to a size far beyond what I expected, and I’ve felt privileged to help it on that journey. I deeply respect the rest of the board, and the leadership teams at EV, and I’m glad they’re at the helm.
Some people have asked me what I’m currently working on, and what my plans are. This year has been quite spread over…
Will - of course I have some lingering reservations but I do want to acknowledge how much you've changed and improved my life.
You definitely changed my life by co-creating Centre for Effective Altruism, which played a large role in organizations like Giving What We Can and 80,000 Hours, which is what drew me into EA. I was also very inspired by "Doing Good Better".
To get more personal -- you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn't very impactful and that I should consider 80,000 Hours career coaching instead, which I did.
You also changed my life by being open about taking antidepressants, which is ~90% of the reason why I decided to also consider taking antidepressants even though I didn't feel "depressed enough" (I definitely was). I felt like if you were taking them and you seemed normal / fine / not clearly and obviously depressed all the time yet benefitted from them, then maybe I would also benefit from them (I did). It really shattered a stereotype for me.
You're now an inspiration for me in terms of resilience. An impact journey isn't always up and up and up all the time. 2022 and 2023 were hard for me. I imagine they were much harder for you -- but you persevere, smile, and continue to show your face. I like that and want to be like that too.
Do you remember how animal rights was pre-EA? At the first Animal Rights National Conference I went to, Ingrid Newkirk dedicated her keynote address to criticizing scope sensitivity, and arguing that animal rights activists should not focus on tactics which help more animals. And my understanding is that EA deserves a lot of the credit for removing and preventing bad actors in the animal rights space (e.g. by making funding conditional on organizations following certain HR practices).
It's useful to identify ways to improve EA, but we have to be honest that imaginary alternatives largely seem better because they are imaginary, and actual realistic alternatives also have lots of flaws.
(Of course, it's possible that those flawed alternatives are still better than EA, but figuring this out requires act... (read more)
Hi all, I wanted to chime in because I have had conversations relevant to this post with just about all involved parties at various points. I've spoken to "Alice" (both while she worked at nonlinear and afterward), Kat (throughout the period when the events in the post were alleged to have happened and afterward), Emerson, Drew, and (recently) the author Ben, as well as, to a much lesser extent, "Chloe" (when she worked at nonlinear). I am (to my knowledge) on friendly terms with everyone mentioned (by name or pseudonym) in this post. I wish well for everyone involved. I also want the truth to be known, whatever the truth is.
I was sent a nearly final draft of this post yesterday (Wednesday), once by Ben and once by another person mentioned in the post.
I want to say that I find this post extremely strange for the following reasons:
(1) The nearly final draft of this post that I was given yesterday had factual inaccuracies that (in my opinion and based on my understanding of the facts) are very serious despite ~150 hours being spent on this investigation. This makes it harder for me to take at face value the parts of the post that I have no knowledge of. ... (read more)
(Copying over the same response I posted over on LW)
I don't have all the context of Ben's investigation here, but as someone who has done investigations like this in the past, here are some thoughts on why I don't feel super sympathetic to requests to delay publication:
In this case, it seems to me that there is a large and substantial threat of retaliation. My guess is Ben's sources were worried about Emerson hiring stalkers, calling their family, trying to get them fired from their job, or threatening legal action. Having things be out in the public can provide a defense because it is much easier to ask for help if the conflict happens in the open.
As a concrete example, Emerson has just sent me an email saying:
For the record, ... (read more)
I personally have no stake in defending Conjecture (In fact, I have some questions about the CoEm agenda) but I do think there are a couple of points that feel misleading or wrong to me in your critique.
1. Confidence (meta point): I do not understand where the confidence with which you write the post (or at least how I read it) comes from. I've never worked at Conjecture (and presumably you didn't either) but even I can see that some of your critique is outdated or feels like a misrepresentation of their work to me (see below). For example, making recommendations such as "freezing the hiring of all junior people" or "alignment people should not join Conjecture" require an extremely high bar of evidence in my opinion. I think it is totally reasonable for people who believe in the CoEm agenda to join Conjecture and while Connor has a personality that might not be a great fit for everyone, I could totally imagine working with him productively. Furthermore, making a claim about how and when to hire usually requires a lot of context and depends on many factors, most of which an outsider probably can't judge.
Given that you state early on that you are an experienced member of ... (read more)
I'm not very compelled by this response.
It seems to me you have two points on the content of this critique. The first point:
I'm pretty confused here. How exactly do you propose that funding decisions get made? If some random person says they are pursuing a hits-based approach to research, should EA funders be obligated to fund them?
Presumably you would want to say "the team will be good at hits-based research such that we can expect a future hit, for X, Y and Z reasons". I think you should actually say those X, Y and Z reasons so that the authors of the critique can engage with them; I assume that the authors are implicitly endorsing a claim like "there aren't any particularly strong reasons to expect Conjecture to do more impactful work in the future".
The second point:
Hmm, it... (read more)
Why are you doing critiques instead of evaluations? This seems like you're deliberately only looking for bad things instead of trying to do a balanced investigation into the impact of an organization.
This seems like bad epistemics and will likely lead to a ton of not necessarily warranted damage to orgs that are trying to do extremely important work. Not commenting on the content of your criticisms of Redwood or Conjecture, but your process.
Knowing there's a group of anonymous people who are explicitly looking to find fault with orgs feels like an instance of EA culture rewarding criticism to the detriment of the community as a whole. Generally, I can see that you're trying to do good, but your approach makes me feel like the EA community is hostile and makes me not want to engage with it.
I don't actually think that's necessarily messed up? That sometimes your role conflicts with a relationship you'd like to have is unfortunate, but not really avoidable:
- A company telling its managers that they can't date their reports.
- A person telling their partner that they can't date other people.
- A person telling their partner that they can't date a specific other person.
- A school telling professors they can't date their students.
- A charity telling their donor services staff that they can't date major donors.
The person has the option of giving up their role (the manager and report can work with HR to see if either can change roles to remove the conflict, the poly partner can dump the mono one, etc.) but the role's gatekeeper saying you both can't keep the role and date the person seems fine in many cases?
...
What about the parts of EA that aren't Peter Singer and classical GiveWell-style EA? If those parts of EA were somewhat responsible, would it be reasonable to call that EA as well?
I don't think the analogy is helpful. Naomi Novik presumably does not claim to emphasize the importance of understanding tail risks. Naomi presumably didn't meet Caroline and encourage her to earn a lot of money so she can donate to fantasy authors, nor did Caroline say "I'm earning all of this money so I can fund Naomi Novik's fantasy writing". Naomi Novik did not have Caroline on her website as a success story of "this is why you should earn money to buy fantasy books or support other fantasy writers". Naomi didn't have a "Fantasy writer's fund" with the FTX brand on it.
I think it's reasonable to preach patience if you think people are jumping too quickly to blame themselves. I think it's reasonable to think that EA is actually less responsible than the current state of discourse on the forum. And I'm not making a claim about the extent EA is in fact responsible for the events. But the analogy as written is pretty poor, and... (read more)
I work (indirectly) in financial risk management. Paying special attention to special categories of risk - like romantic relationships - is very fundamental to risk management. It is not that institutions are faced with a binary choice of 'manage risk' or 'don't manage risk' where people in romantic relationships are 'managed' and everyone else is 'not'. Risk management is a spectrum, and there are good reasons to think that people with both romantic and financial entanglements are higher risk than those with financial entanglements only. For example:
- Romantic relationships inspire particularly strong feelings, not usually characterising financial relationships. People in romantic relationships will take risks on each other's behalf that people in financial relationships will not. We should be equally worried about familial relationships, which also inspire very strong feelings.
- Romantic relationships inspire different feelings from financial relationships. Whereas with a business partner you might be tempted to act badly to make money, with a romantic partner you might be tempted to act badly for many other reasons. For example, to make your partner feel good, or to spare your...
Adult film star Abella Danger apparently took a class on EA at the University of Miami, became convinced, and posted about EA to raise $10k for One for the World. She was PornHub's most popular female performer in 2023 and has ~10M followers on Instagram. Her post has ~15k likes, and comments seem mostly positive.
I think this might be the class that @Richard Y Chappell🔸 teaches?
Thanks Abella and kudos to whoever introduced her to EA!
That's not right: You listed these people as special guests — many of them didn't do a talk. Importantly, Hanania didn't. (According to the schedule.)
I just noticed this. And it makes me feel like "if someone rudely seeks out controversy, don't list them as a special guest" is such a big improvement over the status quo.
- Hanania was already not a speaker. (And Nathan Young suggests that last year this was partly a conscious decision, rather than just him not feeling like giving a talk.)
- If you just had open ticket sales and allowed Hanania to buy a ticket (or not) just like everyone else, then I think that would be a lot better in the eyes of most people who don't like that Hanania is listed as a special guest (including me). My guess would be that it's a common conference policy to "Have open ticket sales, and only refuse people if you think they might actively break-norms-and-harm-people during the events (not based on their views on twitter)". (Though I could be off-base here — I haven't actually read many conferences' policies.)
- I think people who are concerned about preserving the "open expression of
...
Protests are by nature adversarial and high-variance actions prone to creating backlash, so I think that if you're going to be organizing them, you need to be careful to actually convey the right message (and in particular, way more careful than you need to be in non-adversarial environments—e.g. if news media pick up on this, they're likely going to twist your words). I don't think this post is very careful on that axis. In particular, two things I think are important to change:
"Meta’s frontier AI models are fundamentally unsafe."
I disagree; the current models are not dangerous on anywhere near the level that most AI safety people are concerned about. Since "current models are not dangerous yet" is one of the main objections people have to prioritizing AI safety, it seems really important to be clearer about what you mean by "safe" so that it doesn't sound like the protest is about language models saying bad things, etc.
Suggestion: be very clear that you're protesting the policy that Meta has of releasing model weights because of future capabilities that models could have, rather than the previous decisions they made of releasing model weights.
"Stop free-riding on the goodwill of ... (read more)
One aspect of the framing here that annoyed me, both in the OP and in some of the comments: the problem is not controversial beliefs, it is exclusionary beliefs. Here are some controversial beliefs that I think would pose absolutely no problem at this event or any other:
The problem with racism and transphobia is not that people disagree about them! The problem is that these beliefs, in their content on the object level, hurt people and exclude people from the discussion.
Let's avoid using "controversial" as a euphemism for "toxic and exclusionary". Let's celebrate the debate and discussion of all controversies that threaten no-one and exclude no-one. Suggesting any of that is at stake is totally unnecessary.
What I heard from former Alameda people
A number of people have asked about what I heard and thought about the split at early Alameda. I talk about this on the Spencer podcast, but here’s a summary. I’ll emphasise that this is me speaking about my own experience; I’m not speaking for others.
In early 2018 there was a management dispute at Alameda Research. The company had started to lose money, and a number of people were unhappy with how Sam was running the company. They told Sam they wanted to buy him out and that they’d leave if he didn’t accept their offer; he refused and they left.
I wasn’t involved in the dispute; I heard about it only afterwards. There were claims being made on both sides and I didn’t have a view about who was more in the right, though I was more in touch with people who had left or reduced their investment. That included the investor who was most closely involved in the dispute, who I regarded as the most reliable source.
It’s true that a number of people, at the time, were very unhappy with Sam, and I spoke to them about that. They described him as reckless, uninterested in management, bad at managing conflict, and being unwilling to accept a lower... (read more)
I broadly agree with the picture and it matches my perception.
That said, I'm also aware of specific people who held significant reservations about SBF and FTX throughout the end of 2021 (though perhaps not in 2022 anymore), based on information that was distinct from the 2018 disputes. This involved things like:
- predicting a 10% annual risk of FTX collapsing with FTX investors, the Future Fund, and possibly customers losing all of their money [edited from: FTX investors and the Future Fund (though not customers)],
- [edit: I checked my prediction logs and I actually did predict a 10% annual risk of loss of customer funds in November 2021, though I lowered that to 5% in March 2022. Note that I predicted hacks and investment losses, but not fraud.]
- recommending in favor of 'Future Fund' and against 'FTX Future Fund' or 'FTX Foundation' branding, and against further affiliation with SBF,
- warnings that FTX was spending its US dollar assets recklessly, including propping up the price of its own tokens by purchasing large amounts of them on open markets (separate from the official buy & burns),
- concerns about Sam continuing to employ very risky and reckless business practices throu
...
I found it a bit hard to discern what constructive points he was trying to make amidst all the snark. But the following seemed like a key passage in the overall argument:
Putting aside the implicit status games and weird psychological projectio... (read more)
The vast majority of people should probably be withholding judgment and getting back to work for the next week until Nonlinear can respond.
I'm contributing to it now, but it's a bit of a shame that this post has 183 comments at the time of writing when it is not even a day old and isn't even on the front page. EA seems drawn to drama and controversy, and it would accomplish its goals much better if it were more able to focus on more substantive posts.
The accusations are public and have already received substantial exposure. TIME itself seems to be leveraging this request for confidentiality in order to paint an inaccurate picture of what is actually going on and also making it substantially harder for people to orient towards the actual potential sources of risk in the surrounding community.
I don't currently see a strong argument for not linking to evidence that I was easily able to piece together publicly, and which the accused can probably also figure out. The cost here is really only borne by the people who lack context, who I feel are being substantially misled by the absence of information here.
I'll by-default repost the links and my guess at the identity of the person in question in 24 hours unless some forum admin objects or someone makes a decent counterargument.
The discussion of Bostrom's Vulnerable World Hypothesis seems very uncharitable. Bostrom argues that on the assumption that technological development makes the devastation of civilisation extremely likely, extreme policing and surveillance would be one of the few ways out. You give the impression that he is arguing for this now in our world ("There is little evidence that the push for more intrusive and draconian policies to stop existential risk is either necessary or effective"). But this is obviously not what he is proposing - the vulnerable world hypothesis is put forward as a hypothesis and he says he is not sure whether it is true.
Moreover, in the paper, Bostrom discusses at length the obvious risks associated with increasing surveillance and policing:
"It goes without saying that a mechanism that enables unprecedentedly intense forms of surveillance, or a global governance institution capable of imposing its will on any nation, could also have bad consequences. Improved capabilities for social control could help despotic regimes protect themselves from rebellion. Ubiquitous surveillance could enable a hegemonic ideology or an intolerant majority view to impose itself on... (read more)
Bill Gates just endorsed GiveWell!
I don't know how much we should update on this, but I'm now personally a bit less concerned about the "self-recommending" issues of EA resources being mostly recommended by people in the EA social community.[1]
I think this is a good sign for the effective giving ecosystem, and will make my relatives much less worried about how I spend my money.
Not that I was super concerned after digging deeper into things in the past year, but I remember being really concerned about it ~2 years ago, and most people don't have that much time to look into things.
Thanks so much for writing this, and even more for all you've done to help those less fortunate than yourself.
I'm glad I did that Daily Politics spot! It was very hard to tell in the early days how impactful media work was (and it still is!) so examples like this are very interesting.
>Since then, all the major actors in effective altruism’s global health and wellbeing space seem to have come around to it (e.g., see these comments by GiveWell, Founders Pledge, Charity Entrepreneurship, GWWC, James Snowden).
I don't think this is an accurate representation of the post linked to under my name, which was largely critical.
In light of this discussion about whether people would find this article alienating, I sent it to four very smart/reasonable friends who aren't involved in EA, don't work on AI, and don't live in the Bay Area (definitely not representative TIME readers, but maybe representative of the kind of people EAs want to reach). Given I don't work on AI/have only ever discussed AI risk with one of them, I don't think social desirability bias played much of a role. I also ran this comment by them after we discussed. Here's a summary of their reactions:
Friend 1: Says it's hard for them to understand why AI would want to kill everyone, but acknowledges that experts know much more about this than they do and takes seriously that experts believe this is a real possibility. Given this, they think it makes sense to err on the side of caution and drastically slow down AI development to get the right safety measures in place.
Friend 2: Says it's intuitive that AI being super powerful, not well understood, and rapidly developing is a dangerous combination. Given this, they think it makes sense to implement safeguards. But they found the article overwrought, especially given missing links in the argumen... (read more)
Thanks for everyone's contributions. I am learning a lot. I see that the author made significant mistakes and am glad he is taking action to correct them and that the community is taking them seriously, but I want to make a small comment on the sentence "She was in a structural position where it was (I now believe) unreasonable to expect honesty about her experience." I don't know enough about the specific relationship in the post to comment on it directly, but felt it could describe enough dynamics that it could use a diverse array of perspectives from women in the structural positions described.
I want to encourage other women in early stages of their careers like myself to continue striving to overcome shyness. I don't think it's too much to expect us to be honest if we dislike a higher status man flirting with us who doesn't have direct power over us, or if we dislike any other thing they do. I hope this post encourages shy lower status women to feel like they would be heard if they were assertive about behaviors they don't like, that one way of making the behaviors stop could be to be more direct.
I also think in general the Ask Culture norm prevalent in EA is very ... (read more)
[EDIT: I was assuming from the content of the conversation Sam and Kelsey had some preexisting social connection that made a "talking to a friend" interpretation reasonable. From Kelsey's tweets people linked elsewhere in this thread it sounds like they didn't, and all their recent interactions had been around her writing about him as a journalist. I think that makes the ethics much less conflicted.]
I'm conflicted on the ethics of publishing this conversation. I read this as Sam talking to Kelsey this way because he thought he was talking casually with a friend in her personal capacity. And while the normal journalistic ethics is something like "things are on the record unless we agree otherwise", that's only true for professional conversations, right? Like, if Kelsey were talking with a housemate over dinner and then that ended up in a Vox article I would expect everyone would see that as unfair to the housemate? Surely the place you end up isn't "journalists can't have honest friendships", right? Perhaps Kelsey doesn't think of herself as Sam's friend, but I can't see how Kelsey could have gone through that conversation thinking "Sam thinks he's talking to me as a journalist".
On the other hand, Sam's behavior has been harmful enough that I could see an argument that he doesn't deserve this level of consideration, and falling back on a very technical reading of journalistic ethics is ok?
Copying what I posted in the LW thread:
Sam has since tweeted "25) Last night I talked to a friend of mine. They published my messages. Those were not intended to be public, but I guess they are now."
His claims are hard to believe. Kelsey is very well-known as a journalist in EA circles. She says she interviewed him for a piece in May. Before Sam's tweet, she made a point of saying that she avoids secretly pulling "but I never said it would be off-the-record, you just asked for that" shenanigans. She confirmed the conversation with an email from her work account. She disputes the "friend" claim, and says they've never had any communication in any platform she can find, other than the aforementioned interview.
The only explanations that make sense to me are:
I'm honestly more than a bit surprised to see there being doubts on the propriety of publishing this. Like on the facts that Kelsey gives, it seems obvious that their relationship is journalist-subject (particularly given how experienced SBF is with the press). But even if you were to assume that they had a more casual social relationship than is being disclosed (which I do not), if you just blew up your company in a (likely) criminal episode that is the most damaging and public event in the history of the social movement you're a part of, and your casual friend the journalist just wants to ask you a series of questions over DM, the idea that you have an expectation of privacy (without your ever trying to clarify that the conversation is private) does not seem very compelling to me.
Like, your therapist/executive coach just gave an interview on the record to the New York Times. You are front page news around the world. You know your statements are newsworthy. Why is the baseline here "oh this is just a conversation between friends?" (Particularly where one of the parties is like "no we are totally not friends")
I don't mean for my tone to be too harsh here, but I think this article is clearly in the public interest and I really just don't see the logic for not publishing it.
Hey, crypto insider here.
SBF's actions seem to be directly inspired by his effective altruism beliefs. He mentioned a few times on podcasts that his philosophy was: make the most money possible, whatever the way, and then donate it all in the best way to improve the world. He was only in crypto because he thought this was the place where he could make the most money.
SBF was first a trader for Alameda and then started FTX.
Some actions that Alameda/FTX was known for:
*Using exchange data to trade against their own customers
*Paying Twitter users money to post tweets with the intention of promoting FTX, hurting competitors, and manipulating markets
*Creating Ponzi coins with no real use, with the sole intention of selling them for the highest price possible to naive users. Entire ecosystems were created for this goal.
The typical plan was:
1. Fund a team to create a new useless token; 2% of coins go to the public, 98% to investors who get theirs a year later.
2. Create a manipulation story for why this project is useful.
3. Release a news item: Alameda invested in X coin (because Alameda had a good reputation at first).
4. Pump up the price as high as they can using Twitter influence... (read more)
I think more EAs should consider operations/management/doer careers over research careers, and that operations/management/doer careers should be higher status within the community.
I get a general vibe that in EA (and probably the world at large), being a "deep thinking researcher"-type is way higher status than being an "operations/management/doer"-type. Yet the latter is also very high impact work, often higher impact than research (especially on the margin).
I see many EAs erroneously try to go into research and stick to research despite having very clear strengths on the operational side and insist that they shouldn't do operations work unless they clearly fail at research first.
I've personally felt this at times where I started my career very oriented towards research, was honestly only average or even below-average at it, and then switched into management, which I think has been much higher impact (and likely counterfactually generated at least a dozen or more researchers).
Sorry Oli, but what is up with this (and your following) comment?
From what I've read from you,[1] you seem to value what you call "integrity" almost as a deontological good above all others. And this has gained you many admirers. But to my mind high integrity actors don't make the claims you've made in both of these comments without bringing examples or evidence. Maybe you're reacting to Sean's use of 'garden variety incompetence', which you think is unfair to Bostrom's attempts to walk the fine line between independence and managing university politics, but still, I feel you could have done better here.
To make my case:
- When you talk about "other organizations... become a hollow shell of political correctness and vapid ideas" you have to be referring to CSER & Leverhulme here right, like it's the only context that makes sense.
- If not, I feel like that's very misleadingly phrased.
- But if it is, then calling those organisations 'hollow shells' of 'vapid ideas' is like really rude, and if you're going to go there at least have the proof to back it up?
- Now that just might be you having very different politics from CSER & Leverhulme people. But then you say "he [Bostrom] didn't comprom
...
To put my money where my mouth is, I will be cutting my salary back to "minimum wage" in October.
Some thoughts about this --
I genuinely thought SBF spoke to me with the knowledge I was a journalist covering him, knew we were on the record, and knew that an article quoting him was going to happen.*** The reasons I thought that were:
- I knew SBF was very familiar with how journalism works. At the start of our May interview I explained to him how on the record/off the record works, and he was (politely) impatient because he knew it because he does many interviews.
- I knew SBF had given on the record interviews to the New York Times and Washington Post in the last few days, so while it seemed to me like he clearly shouldn't be talking to the press, it also seemed like he clearly was choosing to do so for some reason and not at random. Edited to add: additionally, it appears that immediately after our conversation concluded he called another journalist to talk on the record and say among other things that he'd told his lawyer to "go fuck himself" and that lawyers "don’t know what they’re talking about". I agree it is incredibly bizarre that Sam was knowingly saying things like this on the record to journalists.
- Obviously SBF's communications right now are g... (read more)
I strongly agree with some parts of this post, in particular:
On the other hand, I disagree with some of it -- and thought I'd push back especially given that there isn't much pushback in the comments here:
I think this is misleading in that I’d guess the strongest current we face is toward greater moderation and pluralism, rather than radicalism. As a community and as individuals, some sources of pressure in a ‘moderation’ direction include:
- As individuals, the desire to be liked by and get along with others, including people inside and outside of EA
- As individuals that...
I think Abie Rohrig and the broader team have been crushing it with the launch of What We Owe The Future. So so much media coverage and there are even posters popping up in tube stations across London!
This post doesn't seem screamingly urgent. Why didn't you have the chance to share a draft with ACE?
It seems like there are several points here where clarification from ACE would be useful, even if the bulk of your complaints stand.
Hi Will, thanks for your comment.
The idea of sending a draft to ACE didn't occur to me until I was nearly finished writing the post. I didn't like the idea of dwelling on the post for much longer, especially given some time commitments I have in the coming weeks.
Though to be honest, I don't think this reason is very good, and upon reflection I suspect I should have sent a draft to ACE before posting to clear up any misunderstandings.
Having written a similar post in the past, I can attest that the amount of time they take to write is huge. Hypatia seems to have done a very good job expressing the facts in a way which communicates why they are so concerning while avoiding hyperbole. While giving organisations a chance to read a draft can be a good practice to reduce the risk of basic factual mistakes (and one I try to follow generally), it's not obligatory. Note that we generally do not afford non-EA organisations this privilege, and indeed I would be surprised if ACE offered Connor the chance to review their public statement which pseudonymously condemned him. Doing so adds significantly to the time commitment and raises anonymity risks[1], especially if one is worried about retaliation from an organisation that has penalized people for political disagreements in the past.
[1] As an example, here is something I very nearly messed up and only thought of at the last minute: you need to make a fresh copy of the google doc to share without the comments, or you will reveal the identity of your anonymous reviewers, even if you are personally happy to be known.
My reflections on 5 criticisms of FarmKind’s bonus system:
Hello!
After receiving impassioned criticisms on our announcement post last week, I decided to use a plane trip (I’ve been on leave) to reflect on them with a scout mindset to make sure Thom and I aren’t missing anything important that would mean we should change our approach. I’m glad I did, because on my way back from leave I noticed this new post. I thought it would help to share my reflections.
To set expectations: We won’t be able to continue engaging in the discussion on this here. This is not because we consider the “case closed”, but because we are a team of 2 running a brand new organization so we need to prioritise how we use our time. It’s important (and a good use of time) for us to make sure we consider criticisms and that we are confident we are doing the right thing.[1] But there is a limit to how much time we can dedicate to this particular discussion. Please enjoy the ongoing discussion, and apologies that we can’t prioritise further engagement with it :)
Before I get to my reflections, I want to point out an unhelpful equivocation I’ve seen in the discourse: Some of the comments speak of donation matching... (read more)
I wasn't at Manifest, though I was at LessOnline beforehand. I strongly oppose attempts to police the attendee lists that conference organizers decide on. I think this type of policing makes it much harder to have a truth-seeking community. I've also updated over the last few years that having a truth-seeking community is more important than I previously thought - basically because the power dynamics around AI will become very complicated and messy, in a way that requires more skill to navigate successfully than the EA community has. Therefore our comparative advantage will need to be truth-seeking.
Why does enforcing deplatforming make truth-seeking so much harder? I think there are (at least) three important effects.
First is the one described in Scott's essay on Kolmogorov complicity. Selecting for people willing to always obey social taboos also selects hard against genuinely novel thinkers. But we don't need to take every idea a person has on board in order to get some value from them - we should rule thinkers in, not out.
Secondly, a point I made in this tweet: taboo topics tend to end up expanding, for structural reasons (you can easily appeal to taboos to win arguments). So ov... (read more)
My personal reaction: I know you are scared and emotional; I am too. This post, however, crossed a boundary for me.
I'm a woman, I'm in my late 20s, and I'm going to do what you call sleeping around in the community if it's consensual from both sides. Obviously, I'm going to do my absolute best to be mature in my behaviors and choices in every way. I also believe that as a community we should do a better job of protecting people from unwanted sexual behavior and abuse. But I will not be part of a community which treats the conscious and consensual behavior of adult people as its business, because it smells like purity culture to me. And it won't do the job of protecting anybody.
I'm super stressed by this statement.
Hi Constance,
I was sad to read your initial post and recognize how disappointed you are about not getting to come to this EAG. And I see you’ve put a lot of work into this post and your application. I’m sorry that the result wasn’t what you were hoping for.
After our call (I’m happy to disclose that I am “X”), I was under the impression that you understood our decision, and I was happy to hear that you started getting involved with the in-person community after we spoke.
As I mentioned to you, I recommend that you apply to an EAGx event, which might be a better fit for you at this stage.
It’s our policy to not discuss the specifics of people’s applications with other people besides them. I don’t think it would be appropriate for me to give more detail about why you were rejected publicly, so it is hard to really reply to the substance of this post, and share the other side of this story.
I hope that you continue to find ways to get involved, deepen your EA thinking, and make contributions to EA cause areas. I’m sorry that this has been a disappointing experience for you. At this point, given our limited capacity, and the time we’ve spent engaging on calls, email, ... (read more)
For people who consider taking or end up taking this advice, some things I might say if we were having a 1:1 coffee about it:
- Being away from home is by its nature intense, this community and the philosophy is intense, and some social dynamics here are unusual, I want you to go in with some sense of the landscape so you can make informed decisions about how to engage.
- The culture here is full of energy and ambition and truth telling. That's really awesome, but it can be a tricky adjustment. In some spaces, you'll hear a lot of frank discussion of talent and fit (e.g. people might dissuade you from starting a project not because the project is a bad idea but because they don't think you're a good fit for it). Grounding in your own self worth (and your own inside views) will probably be really important.
- People both are and seem really smart. It's easy to just believe them when they say things. Remember to flag for yourself things you've just heard versus things you've discussed at length vs things you've really thought about yourself. Try to ask questions about the gears of people's models, ask for credences and cruxes. Remember that people disagree, including about very bi
... (read more)"Don't be fanatical about utilitarian or longtermist concerns and don't take actions that violate common sense morality" is a message that longtermists have been emphasizing from the very beginnings of this social movement, and quite a lot.
Some examples:
More generally, there's often at least a full paragraph devoted to this topic when someone writes a longer overview article on longtermism or writes about particularly dicey implications with outsized moral stakes. I also remember this being a presentation or conversation topic at many EA conferences.
I haven't read the corresponding section in the paper that the OP refers to, yet, but I skimmed the literature section and found none of the sources I linked to above. If the paper criticizes longtermism on grounds of this sort of implication and fails to mention that longtermists have been aware of this and are putting in a lot o... (read more)
Although I am on the board of Animal Charity Evaluators, everything I say on this thread is my own words only and represents solely my personal opinion of what may have been going on. Any mistakes here are my own and this should not be interpreted as an official statement from ACE.
I believe that the misunderstanding going on here might be a false dilemma. Hypatia is acting as though the two choices are to be part of the social justice movement or to be in favor of free open expression. Hypatia then gives evidence that shows that ACE is doing things like the former, and thus concludes that this is dangerous because the latter is better for EA.
But this is a false dichotomy. ACE is deliberately taking a nuanced position that straddles both sides. ACE is not in danger of becoming an org that just goes around canceling free thinkers. But nor is ACE in danger of ignoring the importance of providing safe spaces for black, indigenous, and people of the global majority (BIPGM) in the EAA community. ACE is doing both, and I think rightly so.
Many who read this likely don't know me, so let me start out by saying that I wholeheartedly endorse the spirit of the quoted comment from Anna S... (read more)
Can you explain more about this part of ACE's public statement about withdrawing from the conference:
If ACE was not trying to deplatform the speaker in question, what were these messages about and what kind of compromise were you trying to reach with CARE?
[Own views]
If an issue is important to a lot of people, private follow-ups seem a poor solution. Even if you wholly satisfy Buck, he may not be able to relay what reassured him to all concerned parties, and thus likely duplication of effort on your part as each reaches out individually.
Of course, this makes more sense as an ill-advised attempt to dodge public scrutiny - better for PR if damning criticism remains in your inbox rather than on the internet-at-large. In this, alas, Leverage has a regrettable track record: You promised 13 months ago to write something within a month to explain Leverage better, only to make a much more recent edit (cf.) that you've "changed your plans" and enco... (read more)
I worry there's a negative example bias in the section about working with AI companies/accumulating power and influence, vs. working outside the system.
You point to cases where something bad happened, and say that some of the people complicit in the bad thing didn't protest because they wanted to accumulate power/influence within the system.
But these should be matched by looking for cases where something good happened because people tried to accumulate power/influence within a system.
I think this is a significant percent of all good things that have ever happened. Just to give a trivial example, slavery ended because people like Abraham Lincoln successfully accumulated power within the federal government, which at the time was pro-slavery and an enforcer of slavery. If abolitionists had tried to "stay pure" by refusing to run for office, they probably would have gotten nowhere.
Or an even clearer example: Jimmy Carter ended segregation in Georgia by pretending to be extremely racist, winning the gubernatorial election on the strength of the racist vote, then showing his true colors and ending segregation.
(is it cheating to use government as an example? I don't think so - you mentio... (read more)
I feel like this post is doing something I really don't like, which I'd categorize as something like "instead of trying to persuade with arguments, using rhetorical tricks to define terms in such a way that the other side is stuck defending a loaded concept and has an unjustified uphill battle."
For instance:
I mean, no, that's just not how the term is usually used. It's misleading to hide your beliefs in that way, and you could argue it's dishonest, but it's not generally what people would call a "lie" (or if they did, they'd use the phrase "lie by omission"). One could argue that lies by omission are no less bad than lies by commission, but I think this is at least nonobvious, and also a view that I'm pretty sure most people don't hold. You could have written this post with words like "mislead" or "act coyly about true beliefs" instead of "lie", and I think that would have made this post substantially better.
I also feel like the piece weirdly implies that it's dishonest to advocate for a policy ... (read more)
A couple of other examples, both of which have been discussed on LessWrong before:
- In Eliezer's book Inadequate Equilibria, he gives a central anecdote that by reading econ bloggers he confidently realized the Bank of Japan was making mistakes worth trillions of dollars. He further claimed that a change in leadership meant that the Bank of Japan soon after pursued his favored policies, immediately leading to "real GDP growth of 2.3%, where the previous trend was for falling RGDP" and validating his analysis.
- If true, this is really remarkable. Let me reiterate: He says that by reading econ blogs, he was able to casually identify an economic policy of such profound importance that the country of Japan was able to reverse declining GDP immediately.
- In fact, one of his central points in the book is not just that he was able to identify this opportunity, but that he could be justifiably confident in his knowledge despite not having any expertise in economic policy. His intention with the book is to explain how and why he can be correct about things like this.
- The problem? His anecdote falls apart at the slightest fact check.
- Japan's GDP was not falling when he says i
...
I haven't thought about it much but removing people from boards after a massive miscalculation seems reasonable.
Like our prior should be to replace at least Nick and Will right?
Thank you so much for your time, dedication, and efforts.
It seems like, for many of us, difficult times lie ahead. Let us not forget the power of our community - a community of brilliant, kind-hearted, caring people trying to do good better together.
This is a crisis - but we have the ability to overcome it.
Quick response to comments about potential clawbacks: OP expects to put out an explainer about clawbacks tomorrow. It'll be written by our outside counsel and probably won't contain much in the way of specifics, but I think generally FTX grantees should avoid spending additional $$ on legal advice about this just yet.
Also, please don't take this as evidence that we expect clawbacks to happen, just that we know it's an issue of community concern.
Hi Jack,
Just a quick response on the CEA’s groups team end.
We are processing many small grants and other forms of support for CB and we do not have the capacity to publish BOTECs on all of them.
However, I can give some brief heuristics that we use in the decision-making.
Institutions like Facebook, McKinsey, and Goldman spend ~$1 million per school per year at the institutions they recruit from, trying to pull students into lucrative careers that probably at best have a neutral impact on the world. We would love for these students to instead focus on solving the world’s biggest and most important problems.
Based on the current amount available in EA, its projected growth, and the value of getting people working in EA careers, we currently think that spending at least as much as McKinsey does on recruiting pencils out in expected value terms over the course of a student’s career. There are other factors to consider here (i.e. double-counting some expenses) that mean we actually spend significantly less than this. However, as Thomas said - even small chances that dinners could have an effect on career changes make them seem like effective uses of money. (We do have a fair a... (read more)
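To illustrate the shape of the expected-value comparison gestured at above, here is a minimal sketch. Everything except the cited ~$1 million per school per year figure is a hypothetical placeholder of my own, not an actual CEA estimate.

```python
# Minimal sketch of the recruiting-spend comparison described above.
# All inputs except recruiting_spend are hypothetical placeholders, NOT CEA figures.

recruiting_spend = 1_000_000   # ~$1M/school/year, the figure cited for firms like McKinsey
career_changes = 2             # hypothetical: counterfactual high-impact career changes per school per year
value_per_change = 1_500_000   # hypothetical: dollar-equivalent expected impact of one such change

expected_value = career_changes * value_per_change
print(f"Expected value: ${expected_value:,}")                                 # $3,000,000 under these placeholders
print(f"Return per dollar spent: {expected_value / recruiting_spend:.1f}x")   # 3.0x
# The comment's claim is that plausible inputs keep this ratio above 1,
# i.e. matching McKinsey-level recruiting spend "pencils out" in expected value.
```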
I was the primary grant evaluator for this project. I think this kind of stuff is of course extremely high variance to fund, and I probably wouldn't make the grant today (both based on the absence of high-quality outputs, and based on changes in the funding landscape).
Note that this grant was made at the very peak of the period of very abundant (partially FTX-driven) EA funding where finding good funding opportunities was extremely hard.
I think video games are a pretty promising medium to explain a bunch of safety ideas in. I agree the creator doesn't seem to have done very much with the grant (though to be clear, they might still publish something), which makes the grant bad in retrospect.
In my experience this is the default outcome of founding almost any early-stage entrepreneurial project like this, which does sure make this area hard to think about. But like, yeah, most software startups and VC investments don't really produce anything. $100k is not a lot for a competent softwar... (read more)
Our data suggests that the highest impact scandals are several times more impactful than other scandals (bear in mind that this data is probably not capturing the large number of smaller scandals).
Quantitatively how large do you think the non-response bias might be? Do you have some experience or evidence in this area that would help estimate the effect size? I don't have much to go on, so I'd definitely welcome pointers.
Let's consider the 40% of people who put a 10% probability on extinction or similarly bad outcomes (which seems like what you are focusing on). Perhaps you are worried about something like: researchers concerned about risk might be 3x more likely to answer the survey than those who aren't concerned about risk, and so in fact only 20% of people assign a 10% probability, not the 40% suggested by the survey.
Changing from 40% to 20% would be a significant revision of the results, but honestly that's probably comparable to other sources of error and I'm not sure you should be trying to make that precise an inference.
But more importantly a 3x selection effect seems implausibly large to me. The survey was presented as being about "progress in AI" and there's not an obvious mechanism for huge selection effects on these questions. I haven't seen literature that would help estimate the effect size, but based on a general sense of correlation sizes in other domains I'd... (read more)
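To make the arithmetic in that hypothetical concrete, here is a minimal sketch. The function name, and the assumption that "concerned" researchers respond at k times the rate of "unconcerned" ones, come from the comment's own hypothetical numbers rather than from the survey itself.

```python
# A minimal sketch of the selection-effect arithmetic in the comment above.
# Assumption (the comment's hypothetical): "concerned" researchers respond at k times
# the rate of "unconcerned" ones; p_obs is the observed share of concerned respondents.

def true_share(p_obs: float, k: float) -> float:
    """Invert p_obs = k*p_true / (k*p_true + (1 - p_true)) to recover p_true."""
    return p_obs / (k - (k - 1) * p_obs)

print(true_share(0.40, 3))  # ~0.18, roughly the "only 20%" figure in the comment
print(true_share(0.40, 1))  # 0.40: with no selection effect, observed and true shares coincide
```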
I’m one of the Community Liaisons for CEA’s Community Health and Special Projects team. The information shared in this post is very troubling. There is no room in our community for manipulative or intimidating behaviour.
We were familiar with many (but not all) of the concerns raised in Ben’s post based on our own investigation. We’re grateful to Ben for spending the time pursuing a more detailed picture, and grateful to those who supported Alice and Chloe during a very difficult time.
We talked to several people currently or formerly involved in Nonlinear about these issues, and took some actions as a result of what we heard. We plan to continue working on this situation.
From the comments on this post, I’m guessing that some readers are trying to work out whether Kat and Emerson’s intentions were bad. However, for some things, intentions might not be very decision-relevant. In my opinion, meta work like incubating new charities, advising inexperienced charity entrepreneurs, and influencing funding decisions should be done by people with particularly good judgement about how to run strong organisations, in addition to having admirable intentions.
I’m looking forward to seeing what information Nonlinear shares in the coming weeks.
The low number of human-shrimp connections may be due to the attendance dip in 2020. Shrimp have understandably a difficult relationship with dips.
I'm missing a lot of context here, but my impression is that this argument doesn't go through, or at least is missing some steps:
Instead, the argument which would go through would be:
- Open Philanthropy spent $20M on Redwood Research
- That $20
...
I think I found the crux.
I treat EA as a community. And by "community" I mean "a group of friends who have common interests". At the same time, I treat some parts of EA as "companies". "Companies" have hierarchy, structure, money, and very obvious power dynamics. I separate the two.
I'm not willing to be part of a community which treats the conscious and consensual behavior of adult people as its business (as stated under the other post). At the same time, I'd be more than happy to work for a company which has such norms. I actually prefer it this way, as long as they are reasonable and not, e.g., sexist, polyphobic, and so on.
The tricky part is that EA is quite complex in this regard. I don't think the same rules should apply to interest groups, grant-makers, and companies. The power dynamic between a grant-maker and a grantee is quite different from the one between a university EA group leader and a group member. I believe the community should function as a group of friends, and companies/interest groups should create their own internal rules. But maybe that won't work for EA. I'm happy to update here; however, I want to mention that for a lot of people EA is their whole life and their main social group. I would be very careful while setting general norms.
(When it comes to "EA celebrities", I think it's a separate discussion, so I'm not mentioning them here as I would like to focus on community/workplace differences and definitions first. )
ok, an incomplete and quick response to the comments below (sry for typos). thanks to the kind person who alerted me to this discussion going on (still don't spend my time on your forum, so please do just pm me if you think I should respond to something)
1.
- regarding blaming Will or benefitting from the media attention
- i don't think Will is at fault alone, that would be ridiculous, I do think it would have been easy for him to make sure something is done, if only because he can delegate more easily than others (see below)
- my tweets are reaction to his tweets where he says he believes he was wrong to deprioritise measures
- given that he only says this after FTX collapsed, I'm saying, it's annoying that this had to happen before people think that institutional incentive setting needs to be further prioritised
- journalists keep wanting me to say this and I have had several interviews in which I argue against this simplifying position
2.
- i'm rather sick of hearing from EAs that i'm arguing in bad faith
- if I wanted to play nasty it wouldn't be hard (for anyone) to find attack lines, e.g. i have not spoken about my experience of sexual misconduct in EA and i continue to refuse t... (read more)
Hi hi :) Are you involved in the Magnify Mentoring community at all? I've been poorly for the last couple of weeks so I'm a bit behind but I founded and run MM. Personally, I'd also love to chat :) Feel free to reach out anytime. Super Warmly, Kathryn
Thanks Magnus for your more comprehensive summary of our population ethics study.
You mention this already, but I want to emphasize how much different framings actually matter. This surprised me the most when working on this paper. I’d thus caution anyone against making strong inferences from just one such study.
For example, we conducted the following pilot study (n = 101) where participants were randomly assigned to two different conditions: i) create a new happy person, and ii) create a new unhappy person. See the vignette below:
The response scale ranged from 1 = Extremely bad to 7 = Extremely good.
Creating a happy person was rated as only marginally better than neutral (mean = 4.4), whereas creating an unhappy person was rated as extremely bad (mean = 1.4). So this would lead one to believe that there is stro... (read more)
The EA Mindset
This is an unfair caricature/lampoon of parts of the 'EA mindset', or maybe in particular, my mindset towards EA.
Importance: Literally everything is at stake, the whole future lightcone astronomical utility suffering and happiness. Imagine the most important thing you can think of, then times that by a really large number with billions of zeros on the end. That's a fraction of a fraction of what's at stake.
Special: You are in a special time upon which the whole of everything depends. You are also one of the special chosen few who understands how important everything is. Also you understand the importance of rationality and evidence which everyone else fails to get (you even have the suspicion that some of the people within the chosen few don't actually 'really get it').
Heroic responsibility: “You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excus... (read more)
So, I downvoted this post, and wanted to explain why.
First though, I'd like to acknowledge that Manifest sure seems by far the most keen to invite "edgy" speakers out of any Lighthaven guests. Some of them seem like genuinely curious academics with an interest bound to get them into trouble (like Steve Hsu), whereas others seem like they're being edgy for edges sake, in a way that often makes me cringe (like Richard Hanania last year). Seems totally fair to discuss what's up with that speaker choice.
However, the way you engage in that discussion gives me pause.
I'm happy to cut you some slack, because having a large community discussion about these topics in a neutral and detached way is super hard. Sometimes you just gotta get your thoughts out there, and can't be held to everything under a microscope. And in general, that's ok. Nonetheless, I feel kind of obliged to point out a bunch of things that make me uncomfortable about your post.
The title itself describes Manifest as controversial as though it was an objectively verifiable descriptive term (such as "green"). This gives me an immune reaction, feeling som... (read more)
I'm Isaak, the lead organizer of Future Forum. Specifically addressing the points regarding Future Forum:
I don't know whether retroactive funding happens in other cases. However, all grants made to Future Forum were committed before the event. The event and the organization received three grants in total:
Applications for the grants were usually sent 1-3 weeks before approval. While we had conversations with funders throughout, all applications went through official routes and application forms.
I received the specific grant application approval emails on:
The event ran from August 4-7th. I.e., we never had a grant committed "retroactively".
Knowing that the event was experimental and that the core team didn't have much operat... (read more)
I hope you don't take this the wrong way, but this press release is badly written, and it will hurt your cause.
I know you say you're talking about more than extinction risks, but when you put: "The probability of AGI causing human extinction is greater than 99%" in bold and red highlight, that's all anyone will see. And then they can go on to check what experts think, and notice that only a fringe minority, even among those concerned with AI risk, believe that figure.
By declaring your own opinion as the truth, over that of experts, you come off like an easily dismissible crank. One of the advantages of the climate protest movements is that they have a wealth of scientific work to point to for credibility. I'm glad you are pointing out current day harms later on in the article, but by then it's too late and everyone will have written you off.
In general, there are too many exclamation points! It comes off as weird and off-putting! And RANDOMLY BREAKING INTO ALLCAPS makes you look like you're arguing on an internet forum. And there are paragraphs that are way too long, full of confusing phrases that a layperson won't understand.
I suggest you find some people who have absolutely zero exposure to AI safety or EA at all, and run these and future documents by them for ideas on improvements.
EDIT: this is going a bit viral, and it seems like many of the readers have missed key parts of the reporting. I wrote this as a reply to Wei Dai and a high-level summary for people who were already familiar with the details; I didn't write this for people who were unfamiliar, and I'm not going to reference every single claim in it, as I have generally referenced them in my prior comments/tweets and explained the details & inferences there. If you are unaware of aspects like 'Altman was trying to get Toner fired' or pushing out Hoffman or how Slack was involved in Sutskever's flip or why Sutskever flip-flopped back, still think Q* matters, haven't noticed the emphasis put on the promised independent report, haven't read the old NYer Altman profile or Labenz's redteam experience etc., it may be helpful to catch up by looking at other sources; my comments have been primarily on LW since I'm not a heavy EAF user, plus my usual excerpts.
It was a pretty weak hand. There is this pervasive attitude that Sam Altman could have been dispensed with easily by the OA Board if it had been m... (read more)
I think this post is missing how many really positive relationships started with something casual, and how much the 'plausible deniability' of a casual start can remove pressure. If you turn flirting with someone from an "I'm open to seeing where this goes" into "I think you might be the one", that's a high bar. Which means that, despite the definition of 'sleeping around' you're using looking like it wouldn't reduce the number of EA marriages and primary relationships, I expect it would. Since a lot of EAs in those relationships (hi!) are very happy with them (hi!), this is a cost worth explicitly weighing.
(Writing this despite mostly agreeing with the post and having upvoted it. And also as someone who's done very little dating and thought I was going to marry everyone I dated.)
Max is a phenomenal leader, and I’m very sad to see him go. He’s one of the most caring and humble people I’ve ever worked with, and his management and support during a very difficult few months has been invaluable. He’s also just a genuine delight to be around.
It’s deeply unfair that this job has taken a toll on him, and I’m very glad that he’s chosen the right thing for him.
Max has taught me so much, and I’ll be forever grateful for that. And I’m looking forward to continuing to work with him as an advisor — I know he’ll continue to be a huge help.
[For context, I'm definitely in the social cluster of powerful EAs, though don't have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc); I had more power when I was actively grantmaking on the EAIF but I no longer do this. In this comment I’m not speaking for anyone but myself.]
This post contains many suggestions for ways EA could be different. The fundamental reason that these things haven't happened, and probably won't happen, is that no-one who would be able to make them happen has decided to make them happen. I personally think this is because these proposals aren't very good. And so:
And so I wish that posts like this were clearer about their theory of change (henceforth abbreviated ToC). You've laid out a long list of ways that you wish EA orgs behaved differently. You've also made the (IMO broadly correct) point that a lot of EA organizations are led and influenced ... (read more)
I strongly downvoted this response.
The response says that EA will not change ("people in EA roles [will] ... choose not to"), that making constructive critiques is a waste of time ("[not a] productive way to channel your energy"), and that the critique should have been better ("I wish that posts like this were clearer", "you should try harder", "[maybe try] politely suggesting").
This response seems to be putting all the burden of making progress in EA onto those trying to constructively critique the movement, those who are putting their limited spare time into trying to be helpful, and removing the burden away from those who are actively paid to work on improving this movement. I don’t think you understand how hard it is to write something like this, how much effort must have gone into making each of these critiques readable and understandable to the EA community. It is not their job to try harder, or to be more polite. It is our job, your job, my job, as people in EA orgs, to listen and to learn and to consider and if we can to do better.
Rather than saying the original post should be better maybe the response should be that those reading the original post should be better at conside... (read more)
I think the criticism of the theory of change here is a good example of an isolated demand for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/), which I feel EAs often apply when it comes to criticisms.
It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.
One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy-in, which I assume is why they posted this publicly. Whilst they are busy, I'd be pretty disappointed if the core EAs didn't read this and take the ideas seriously (I've tried tagging some on Twitter), and if you're correct that presenting such a detailed set of ideas on the forum is not enough to get core EAs to take the ideas seriously, I'd be concerned about whether there are places for people to get their ideas taken seriously at all. I'm lucky, I can walk into Trajan House and knock on people's doors, but others presumably aren't so lucky, and you would hope that a forum post that generated a lot of discussion would be taken seriously. Moreover, if you are concerned about the ideas presented here not getting a fair hearing, maybe you could try raising the salient ones to core EAs in your social circles?
(This is an annoyed post. Having re-read it, I think it's mostly not mean, but please downvote it if you think it is mean and I'll delete it.)
I have a pretty negative reaction to this post, and a number of similar others in this vein. Maybe I should write a longer post on this, but my general observation is that many people have suddenly started looking for the "adults in the room", mostly so that they can say "why didn't the adults prevent this bad thing from happening?", and that they have decided that "EA Leadership" are the adults.
But I'm not sure "EA Leadership" is really a thing, since EA is a movement of all kinds of people doing all kinds of things, and so "EA Leadership" fails to identify specific people who actually have any responsibility towards you. The result is that these kinds of questions end up either being vague or suggesting some kind of mysterious shadowy council of "EA Leaders" who are secretly doing naughty things.
It gets worse! When people do look for an identifiable figure to blame, the only person who looks vaguely like a leader is Will, so they pick on him. But Will is not the CEO of EA! He's a philosopher who writes books about EA and has received ... (read more)
Thanks, I thought this was the best-written and most carefully argued of the recent posts on this theme.
Here's where I see this association coming from. People vary in many ways, some directly visible (height, facial structure, speed, melanin) and some less so (compassion, facility with mathematics, creativity, musicality). Most directly visible ones clearly have a genetic component: you can see the differences between populations, cross-group adoptees are visibly much more similar to their birth parents than their adoptive parents, etc. With the non-visible variation it's harder to tell how much is genetic, but evidence from situations like twins raised apart tells us that some is.
Getting closer to the edge, it's likely that there are population-level genetic differences on non-visible traits: different populations have been under different selection pressures in ways that impacted visible traits, and it would be surprising if these pressures didn't impact non-visible traits. One c... (read more)
It is sometimes hard for communities with very different beliefs to communicate. But it would be a shame if communication were to break down.
I think it is worth trying to understand why people from very different perspectives might disagree with effective altruists on key issues. I have tried on my blog to bring out some key points from the discussions in this volume, and I hope to explore others in the future.
I hope we can bring the rhetoric down and focus on saying as clearly as possible what the main cruxes are and why a reasonable person might stand on one side or another.
I would just add to this that it’s worth taking a few minutes to really think if there is anyone you might possibly know who lives in the district — or even a second-degree connection like a friend’s sister who you’ve never met. “Relational” communications are much more high-impact than calling strangers if it’s at all possible to find someone you have any connection with.
Red teaming papers as an EA training exercise?
I think a plausibly good training exercise for EAs wanting to be better at empirical/conceptual research is to deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially writings by either a) other respected EAs or b) other stuff that we otherwise think of as especially important.
I'm not sure how knowledgeable you have to be to do this well, but I suspect it's approachable for smart people who finish high school, and certainly by the time they finish undergrad^ with a decent science or social science degree.
I think this is good career building for various reasons:
- you can develop a healthy skepticism of the existing EA orthodoxy
- I mean skepticism that's grounded in specific beliefs about why things ought to be different, rather than just vague "weirdness heuristics" or feeling like the goals of EA conflict with other tribal goals.
- (I personally have not found high-level critiques of EA, and I have read many, to be particularly interesting or insightful, but this is just a personal take).
- you actually deeply understand at least one topic well enough
... (read more)
I also get a lot of this vibe from (parts of) the EA community, and it drives me a little nuts. Examples:
Being epistemically modest by, say, replacing your own opinions with the average opinion of everyone around you, might improve the epistemics of the majority of people (in fact it almost must by definition), but it is a terrible idea on a group level: it's a recipe for information cascades, groupthink... (read more)
Is there anything we can do to help?
People
1. Anyone you'd like an introduction to?
2. Any roles you're having a hard time filling?
3. If you had 100 Oxford graduates dying to help you, what tasks might you give them?
Money
5. Any videos you'd like to make that would help people in non-Western countries but aren't quite commercially viable? How large a donation would push them over the line?
6. What would you do if you had additional $100k to spend on global poverty videos?
Expertise
7. What are the biggest challenges you're facing right now and what are your plans for tackling them?
8. Got any issues that a very skilled volunteer (e.g. professional with 5+ years experience) could help with?
9. Do you have any big unanswered questions that meaningfully affect your philanthropy?
From my personal perspective: While the additional context makes the interaction itself seem less bad, I think the fact that it involved Owen (rather than, say, a more tangentially involved or less influential community member) made it a lot worse than what I would have expected. In addition, this seems the
second time (after this one*) I hear about a case that the community health team didn't address forcefully enough, which wasn't clear to me based on the Time article. (*Edited based on feedback that someone sent me via DM, thank you.)
(edit: I think you acknowledge this elsewhere in your comment)
Hi, thanks for raising these questions. I lead Open Philanthropy’s biosecurity and pandemic prevention work and I was the investigator of this grant. For context, in September last year, I got an introduction to Helena along with some information about work they were doing in the health policy space. Before recommending the grant, I did some background reference calls on the impact claims they were making, considered similar concerns to ones in this post, and ultimately felt there was enough of a case to place a hits-based bet (especially given the more permissive funding bar at the time).
Just so there’s no confusion: I think it’s easy to misread the nepotism claim as saying that I or Open Phil have a conflict of interest with Helena, and want to clarify that this is not the case. My total interactions with Helena have been three phone calls and some email, all related to health security work.
Just noting that this reply seems to be, to me, very close to content-free, in terms of addressing object-level concerns. I think you could compress it to "I did due diligence" without losing very much.
If you're constrained in your ability to discuss things on the object-level, i.e. due to promises to keep certain information secret, or other considerations like "discussing policy work in advance of it being done tends to backfire", I would appreciate that being said explicitly. As it is, I can't update very much on it.
ETA: to be clear, I'm not sure how I feel about the broader norm of requesting costly explanations when something looks vaguely off. My first instinct is "against", but if I were to adopt a policy of not engaging with such requests (unless they actually managed to surface something I'd consider a mistake I didn't realize I'd made), I'd make that policy explicit.
I liked CEA's statement
- Writing statements like this is really hard. It's the equivalent of writing one tweet on something that you know everyone is gonna rip to pieces. I think there are tradeoffs here that people on the forum don't seem to acknowledge. I am very confident (90%) that a page length discussion of this would have been worse in terms of outcomes.
- I don't think it was for us - I think it was for journalists etc. And I think it performed its job of EA not being dragged into all of this. Note how much better it was than either Anders' statement or Bostrom's - no one externally is discussing it, and in an adversarial environment that means it's succeeded.
- I think it was an acceptable level of accuracy. It's very hard to write short things, but does EA roughly hold that all people are equal? Yes I think that's not a bad 4 word summary. I think a better summary is "the value of beings doesn't change based on their position in space or time and I reject the many heuristics humanity has used to narrow concern which have led to the suffering we see today - racism, sexism, speciesism, etc". I think that while more precise that phrase isn't that much more accurate and is wors
... (read more)
Thank you for a good and swift response, and in particular, for stating so clearly that fraud cannot be justified on altruistic grounds.
I have only one quibble with the post: IMO you should probably increase your longtermist spending quite significantly over the next ~year or so, for the following reasons (which I'm sure you've already considered, but I'm stating them so others can also weigh in)
- IIRC Open Philanthropy has historically argued that a lack of high-quality, shovel-ready projects has been limiting the growth in your longtermist portfolio. This is not the case at the moment. There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, 3) are not only shovel-ready, but already started. Stepping in to help these projects bridge the gap until they can find new funding sources looks like an unusually cost-effective opportunity. It may also require somewhat less vetting on your end, which may matter more if you're unusually constrained by grantmaker capacity for a while
- Temporarily ramping up funding can also be justified by considering likely flow-through effects of acting as an "insurer o
... (read more)
I want to push back on this a tiny bit. Just because some projects got funding from FTX, that doesn't necessarily mean Open Phil should fund them. There are a few reasons for this:
- When FTX Future Fund was functioning, there was lots more money available in the ecosystem, hence (I think) the bar for receiving a longtermist grant was lower. This money is now gone, and lots of orgs who got FTX funding might not meet OP's bar / the new bar we should have now that there are fewer resources. So basically I don't think it's sufficient to say 1) they have significant funding gaps, 2) they exist and 3) they've been vetted by people you trust. IMO you need to prove that they're also sufficiently high-quality, which might not be true as FTX was vetting them with a different bar in mind.
... (read more)
I think there's a lot that's intriguing here. I also really enjoyed the author's prior takedown of "Why We Sleep".
However, I need to throw a flag on the field for isolated demands of rigor / motivated reasoning here - I think you are demanding a lot from sleep science to prove their hypotheses about needing >7hrs of sleep but then heavily relying on an unproven analogy to eating (why should we think sleeping and eating are similar?), the sleep patterns of a few hunter-gatherers (why should we think what hunter-gatherers did was the healthiest?), the sailing coach guy (this was the most compelling IMO but shouldn't be taken as conclusive), and a random person with brain surgery (that wasn't even an RCT). If someone had the same scattered evidence in favor of sleep, there's no way you'd accept it.
Maybe not sleeping doesn't affect writing essays, but in the medical field at least there seems to be an increased risk of medical error for physicians who are sleep-deprived. "I'm pretty sure this is 100% psyop" goes too far.
For what it's worth (and it should be worth roughly the same as this blog post), my personal anecdotes:
1.) Perhaps too convenient and my data quality is no... (read more)
Leopold Aschenbrenner is starting a cross between a hedge fund and a think tank for AGI. I have read only the sections of Situational Awareness most relevant to this project, and I don't feel nearly like I understand all the implications, so I could end up being quite wrong. Indeed, I’ve already updated towards a better and more nuanced understanding of Aschenbrenner's points, in ways that have made me less concerned than I was to begin with. But I want to say publicly that the hedge fund idea makes me nervous.
Before I give my reasons, I want to say that it seems likely most of the relevant impact comes not from the hedge fund but from the influence the ideas from Situational Awareness have on policymakers and various governments, as well as the influence and power Aschenbrenner and any cohort he builds wield. This influence may come from this hedge fund or be entirely incidental to it. I mostly do not address this here, but it does make all of the below less important.
I also believe that some (though not all) of my concerns about the hedge fund are based on specific disagreements with Aschenbrenner’s views. I discuss some of those below, but a full rebutt... (read more)
Hi Simon, thanks for writing this! I’m research director at FP, and have a few bullets to comment here in response, but overall just want to indicate that this post is very valuable. I’m also commenting on my phone and don’t have access to my computer at the moment, but can participate in this conversation more energetically (and provide more detail) when I’m back at work next week.
... (read more)
I basically agree with what I take to be your topline finding here, which is that more data is needed before we can arrive at GiveWell-tier levels of confidence about StrongMinds. I agree that a lack of recent follow-ups is problematic from an evaluator’s standpoint and look forward to updated data.
FP doesn’t generally strive for GW-tier levels of confidence; we’re risk-neutral and our general procedure is to estimate expected cost-effectiveness inclusive of deflators for various kinds of subjective consideration, like social desirability bias.
The 2019 report you link (and the associated CEA) is deprecated (FP hasn’t been resourced to update public-facing materials, a situation that is now changing), but the proviso at the top of the page is accurate: we stand by our recommendation.
This is be
A contingent of EAs (e.g., Oliver Habryka and the early Alameda exodus) seems to have had strongly negative views of SBF well in advance of the FTX fraud coming to light. So I think it's worthwhile for some EAs to do a postmortem on why some people were super worried and others were (apparently) not worried at all.
Otherwise, I agree with you that folks have seemed to overreact more than underreact, and that there have been a lot of rushed overconfident claims, and a lot of hindsight-bias-y claims.
I object to how closely you link polyamory with shitty behaviour. At one point you say that you are not criticizing polyamory, but you repeatedly bring it up when talking about stuff like the overlap of work and social life, or men being predatory at EA meetups.
I think men being predatory and subscribing to 'redpill' ideologies is terrible and we shouldn't condone it in the community.
I feel more complicated about the overlap between social life and work life, but I take your general point that this could (and maybe does in fact) lead to conflicts of interest and exploitation.
But neither of these is strongly related to polyamory, polycules etc. I worry that you are contributing to harmful stereotypes about polyamory.
Extra ideas for the idea list:
Also, for what it is worth, I was really impressed by the post. I thought it was a very well-written, clear, and transparent discussion of this topic with clear actions to take.
I don't know Carrick very well, but I will be pretty straightforward that this post, in particular in combination with the top comment by Ryan Carey, gives me quite a bad vibe. It seems obvious to me that anyone saying anything bad right now about Carrick would be pretty severely socially punished by various community leaders, and I expected the community leadership to avoid saying so many effusively positive things in a context where it's really hard for people to provide counterevidence, especially when it comes with an ask for substantial career shifts and funding.
I've seen many people receive genuine references in the EA community, many of them quite positive, but they are usually expressed in a substantially more measured and careful way than this post. This post reads to me like a marketing piece that I do not trust, and that I expect to exaggerate at many points (like, did Carrick really potentially save "thousands of lives"? An assertion thrown around widely in the world, but one that is very rarely true, and one that I also doubt is true in this case, by the usual EA standards of evidence).
I don't know Carrick, and the little that I've seen seemed positive and... (read more)
I think there's a bit of a misunderstanding - I'm not asking people to narrowly conform to some message. For example, if you want to disagree with Andrew's estimate of the number of lives that Carrick has saved, go ahead. I'm saying exhibit a basic level of cultural and political sensitivity. One of the strengths of the effective altruism community is that it's been able to incorporate people to whom that doesn't always come naturally, but this seems like a moment when it's required anyway.
Yeah, my reading of your comment was in some ways the opposite of Habryka's original take, since I was reading it as primarily directed at people who might support Carrick in weird/antisocial ways, rather than people who might dissent from supporting him.
Could you say why you chose the name Probably Good, and to what extent that's locked-in at this stage?
I may be alone in this, but to me it seems like a weird name, perhaps especially if a large part of your target audience will be new EAs and non-EAs.
Firstly, it seems like it doesn't make it at all clear what the focus of the organisation is (i.e., career advice). 80,000 Hours' name also doesn't make its focus clear right away, but the connection can be explained in a single sentence, and from then on the connection seems very clear. Whereas if you say "We want to give career advice that's probably good", I might still think "But couldn't that name work just as well and for just the same reason for donation advice, or AI research, or relationship advice, or advice about what present to buy a friend?"
This is perhaps exacerbated by the fact that "good" can be about either morality or quality, and that the name doesn't provide any clues that in this case it's about morality. (Whereas CEA has "altruism" in the name - not just "effective" - and GiveWell has "give" in the name - not just "well".)
In contrast, most other EA orgs' names seem to more clearly gesture at roughly wh... (read more)
Has anyone talked with/lobbied the Gates Foundation on factory farming? I was concerned to read this in Gates Notes.
"On the way back to Addis, we stopped at a poultry farm established by the Oromia government to help young people enter the poultry industry. They work there for two or three years, earn a salary and some start-up money, and then go off to start their own agriculture businesses. It was a noisy place—the farm has 20,000 chickens! But it was exciting to meet some aspiring farmers and businesspeople with big dreams."
It seems a disaster that the Gates Foundation is funding and promoting the rapid scale-up of factory farming in Africa, and reversing this seems potentially tractable to me. Could individuals, Gates insiders or the big animal rights orgs take this up?
I still think this is hyperbole. Hanania isn't saying he thinks they/them pronouns are worse than genocide; he says he gets more upset about they/them pronouns than about genocide, just as (according to him) people on the left get more upset about racial slurs than about genocide:
... (read more)
Disclosure (copying from a previous comment): I have served in the Israel Defense Forces, I live in Israel, I feel horrible about what Israel has done in the past 75 years to millions of Palestinians and I do not want Israel to end up as a horrible stain on human history. I am probably unusually biased when dealing with this topic. I am not making here a claim that people in EA should or should not get involved and in what way.
The author mentioned they do not want the comments to be "a discussion of the war per se" and yet the post contains multiple contentious pro-Israel propaganda talking points, and includes arguments that a cease-fire is net-negative. Therefore it seems to me legitimate to mention here the following.
In interviews with the foreign press, Israeli officials/politicians often make claims to the effect that Israel is doing everything it can to minimize civilian casualties. Explaining why those claims are untrustworthy in a short comment is a hard task, because whatever I write will leave out so much important stuff. (Imagine you had to explain to an alien, in a short text, why a certain claim by Donald Trump is untrustworthy.) But I'll give it a go anyway:
- The current Mini
... (read more)
Hello!
I’m Minh, Nonlinear intern from September 2022 to April 2023. The last time allegations of bad practices came up, I reiterated that I had a great time working at Nonlinear. Since this post is >10,000 words, I’m not able to address everything, both because:
I’m just sharing my own experience with Nonlinear, and interpreting specific claims made about Kat/Emerson’s character/interaction styles based on my time with Nonlinear. In fact, I’m largely assuming Alice and Chloe are telling the truth, and speaking in good faith.
Disclaimers
In the interest of transparency, I’d like to state:
- I have never been approached in this investigation, nor was I aware of it. I find this odd, because … if you’re gonna interview dozens of people about a company’s unethical treatment of employees, why wouldn’t you ask the recent interns? Nonlinear doesn’t even have that many people to interview, and I was very easy to find/reach. So that’s … odd.
- I was not asked to write this comment. I just felt like it. It’s been a while since I’ve writte
... (read more)
Here's Bostrom's letter about it (along with the email) for context: https://nickbostrom.com/oldemail.pdf
The post in which I speak about EAs being uncomfortable about us publishing the article only talks about interactions with people who did not have any information about the initial drafting with Torres. At that stage, the paper was completely different and was a paper between Kemp and me. None of the critiques about it or the conversations about it involved concerns about Torres, co-authoring with Torres, or arguments by Torres, except insofar as they might have taken Torres as an example of the closing doors that can follow a critique. The paper was in such a totally different state that it would have been misplaced to call it a collaboration with Torres.
There was a very early draft of Torres and Kemp which I was invited to look at (in December 2020) and collaborate on. While the arguments seemed promising to me, I thought it needed major re-writing of both tone and content. No one instructed me (maybe someone instructed Luke?) that one could not co-author with Torres. I also don't recall that we were forced to take Torres off the collaboration (I’m not sure who knew about the conversations about collaborations we had): we decided to part because we wanted to move the content and tone i... (read more)
Do you think it was a mistake to put "FTX" in the "FTX Future Fund" name so prominently? My thinking is that you likely want the goodness of EA and philanthropy to make people feel more positively about FTX, which seems fine to me, but in doing so you also run the risk that, if FTX has any big scandal or other issue, it could cause blowback on EA, whether merited or not.
I understand the Future Fund has tried to distance itself from effective altruism somewhat, though I'm skeptical this has worked in practice.
To be clear, I do like FTX personally, am very grateful for what the FTX Future Fund does, and could see reasons why putting FTX in the name is also a positive.
Since it looks like you're looking for an opinion, here's mine:
To start, while I deeply respect GiveWell's work, in my personal opinion I still find it hard to believe that any GiveWell top charity is worth donating to if you're planning to do the typical EA project of maximizing the value of your donations in a scope sensitive and impartial way. ...Additionally, I don't think other x-risks matter nearly as much as AI risk work (though admittedly a lot of biorisk stuff is now focused on AI-bio intersections).
Instead, I think the main difficult judgement call in EA cause prioritization right now is "neglected animals" (eg invertebrates, wild animals) versus AI risk reduction.
AFAICT this also seems to be somewhat close to the overall view of the EA Forum, as you can see in some of the debate weeks (animals smashed humans) and the Donation Election (where neglected animal orgs were all in the top, followed by PauseAI).
This comparison is made especially difficult because OP funds a lot of AI work but not any of the neglected animal stuff, which subjects the AI work to significantly greater diminishing marginal returns.
To be clear, AI orgs still do need money. I think there's a vibe that ... (read more)
My personal view is that being an EA implies spending some significant portion of your efforts being (or aspiring to be) particularly effective in your altruism, but it doesn't by any means demand you spend all your efforts doing so. I'd seriously worry about the movement if there was some expectation that EAs devote themselves completely to EA projects and neglect things like self-care and personal connections (even if there was an exception for self-care & connections insofar as they help one be more effective in their altruism).
It sounds like you developed a personal connection with this particular dog rather quickly, and while this might be unusual, I wouldn't consider it a fault. At the same time, while I don't see a problem with EAs engaging in that sort of partiality with those they connect with, I would worry a bit if you were making the case that this sort of behavior was in itself an act of effective altruism, as I think prioritization, impartiality, and good epistemics are really important to exhibit when engaged in EA projects. (Incidentally, this is one further reason I'd worry if there was an expectation that EAs devote themselves completely to EA projects – I think this would lead to more backwards rationalizations about why various acts people want to do are actually EA projects when they're not, and this would hurt epistemics and so on.) But you don't really seem to be doing that.
Thanks. Is this person still active in the EA community? Does this person still have a role in "picking out promising students and funneling them towards highly coveted jobs"?
Downvoted. I appreciate you a lot for writing this letter, and am sorry you/Will were slandered in this way! But I would like to see less of this content on the EA Forum. I think Torres has a clear history of writing very bad-faith and outrage-inducing hit pieces, and think that prominently discussing these, or really paying them any attention on the EA Forum, easily sucks in time and emotional energy with little reward. So seeing this post with a lot of comments and at 300+ karma feels sad to me!
My personal take is that the correct policy for the typical EA is to not bother reading their criticisms, given their history of quote mining and misrepresentation, and would have rather never heard about this article.
All that said, I want to reiterate that I'm very glad you wrote this letter, sorry you went through this, and glad that it has conveyed the useful information that the Bulletin's editorial standards should be taken less seriously!
I don't mind sharing a bit about this. SBF desperately wanted to do the Korea arb, and we spent quite a bit of time coming up with any number of outlandish tactics that might enable us to do so, but we were never able to actually figure it out. The capital controls worked. The best we could do was predict which direction the premium would go and trade into KRW and then back out of it accordingly.
Japan was different. We were able to get a Japanese entity set up, and we did successfully trade on the Japan arb. As far as I know we didn't break any laws in doing so, but I wasn't directly involved in the operational side of it. My recollection is that we made something like 10-30 million dollars (~90%CI) off of that arb in total, but I'm not at all confident on the exact amount.
Is that what created his early wealth, though? Not really. Before we all left, pretty much all of that profit had been lost to a series of bad trades and mismanagement of assets. Examples included some number of millions lost to a large directional bet on ETH (that Sam made directly counter to the predictions of our best event trader), a few million more on a large OTC trade in some illiquid shitcoin that crashed... (read more)
I think it's very plausible the reputational damage to EA from this - if it's as bad as it's looking to be - will outweigh the good the Future Fund has done tbh
Agreed, lots of kudos to the Future Fund people though.
These numbers seem pretty all-over-the-place. On nearly every question, the odds given by the 7 forecasters span at least 2 orders of magnitude, and often substantially more. And the majority of forecasters (4/7) gave multiple answers which seem implausible (details below) in ways that suggest that their numbers aren't coming from a coherent picture of the situation.
I have collected the numbers in a spreadsheet and highlighted (in red) the ones that seem implausible to me.
Odds span at least 2 orders of magnitude:
Another commenter noted that the answers to "What is the probability that Russia will use a nuclear weapon in Ukraine in the next MONTH?" range from .001 to .27. In odds that is from 1:999 to 1:2.7, which is an odds ratio of 369. And this was one of the more tightly clustered questions; odds ratios between the largest and smallest answer on the other questions were 144, 42857, 66666, 332168, 65901, 1010101, and (with n=6) 12.
Other than the final (tactical nuke) question, these cover enough orders of magnitude for my reaction to be "something is going on here; let's take a closer look" rather than "there are some different perspectives which we can combine by aggregating" or... (read more)
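For readers who want to check the odds arithmetic above, here is a minimal Python sketch; the two probabilities are the ones quoted for the one-month question, and the rest is just the standard probability-to-odds conversion:

```python
# Convert the two extreme forecasts for the one-month question into odds
# and compute the odds ratio between them, as described in the comment above.
def to_odds(p: float) -> float:
    return p / (1 - p)

low, high = 0.001, 0.27              # smallest and largest forecast
odds_low = to_odds(low)              # ~0.001, i.e. roughly 1:999
odds_high = to_odds(high)            # ~0.37, i.e. roughly 1:2.7
print(round(odds_high / odds_low))   # ~369, the odds ratio quoted above
```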
Not opinionating on the general point, but:
IIRC, Kelsey was in fact the president of the Stanford EA student group, and I do not think she would've been voted "least likely to succeed" by the other members.
Quite. I was in that Stanford EA group, I thought Kelsey was obviously very promising and I think the rest of us did too, including when she was taking a leave of absence.
How to fix EA "community building"
Today, I mentioned to someone that I tend to disagree with others on some aspects of EA community building, and they asked me to elaborate further. Here's what I sent them, very quickly written and only lightly edited:
... (read more)
GET AMBITIOUS SLOWLY
Most approaches to increasing agency and ambition focus on telling people to dream big and not be intimidated by large projects. I'm sure that works for some people, but it feels really flat for me, and I consider myself one of the lucky ones. The worst-case scenario is that big inspiring speeches get you really pumped up to Solve Big Problems but you lack the tools to meaningfully follow up.
Faced with big dreams but unclear ability to enact them, people have a few options.
The first three are all very costly, especially if you repeat the cycle a few times.
My preferred version is ambition snowball or "get ambitious slowly". Pick something b... (read more)
[Own views]
- I think we can be pretty sure (cf.) the forthcoming strongminds RCT (the one not conducted by Strongminds themselves, which allegedly found an effect size of d = 1.72 [!?]) will give dramatically worse results than HLI's evaluation would predict - i.e. somewhere between 'null' and '2x cash transfers' rather than 'several times better than cash transfers, and credibly better than GW top charities.' [I'll donate 5k USD if the Ozler RCT reports an effect size greater than d = 0.4 - 2x smaller than HLI's estimate of ~ 0.8, and below the bottom 0.1% of their monte carlo runs.]
- This will not, however, surprise those who have criticised the many grave shortcomings in HLI's evaluation - mistakes HLI should not have made in the first place, and definitely should not have maintained once they were made aware of them. See e.g. Snowden on spillovers, me on statistics (1, 2, 3, etc.), and Givewell generally.
- Among other things, this would confirm a) SimonM produced a more accurate and trustworthy assessment of Strongminds in their spare time as a non-subject matter expert than HLI managed as the centrepiece of their activity; b) the ~$250 000 HLI has moved to SM should be counted on th
... (read more)
An update:
This RCT (which should have been the Baird RCT - my apologies for mistakenly substituting Sarah Baird with her colleague Berk Ozler as first author previously) is now out.
I was not specific on which effect size would count, but all relevant[1] effect sizes reported by this study are much lower than d = 0.4 - around d = 0.1. I roughly[2] calculate the figures below.
In terms of "SD-years of depression averted" or similar, there are a few different ways you could slice it (e.g. which outcome you use, whether you linearly interpolate, do you extend the effects out to 5 years, etc). But when I play with the numbers I get results around 0.1-0.25 SD-years of depression averted per person (as a sense check, this lines up with an initial effect of ~0.1, which seems to last between 1-2 years).
These are indeed "dramatically worse results than HLI's [2021] evaluation would predict". They are also substantially worse than HLI's (much lower) updated 2023 estimates of Strongminds. The immediate effects of... (read more)
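As a rough illustration of the "slicing" choices mentioned above, here is a minimal Python sketch of one way to get an "SD-years of depression averted" figure by linearly interpolating between follow-up effect sizes; the numbers are purely illustrative placeholders, not the study's actual estimates:

```python
import numpy as np

# Illustrative (not actual) effect sizes (Cohen's d) at follow-up times in years.
years = np.array([0.0, 1.0, 2.0])
effects = np.array([0.10, 0.05, 0.00])

# SD-years averted = area under the linearly interpolated effect-size curve.
sd_years = np.trapz(effects, years)
print(round(sd_years, 2))  # 0.1 with these placeholder numbers
```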
Where? You mean in the 26-year-old email that he quoted in the apology? If so, the above claim seems unfair and deceptive.
Hi all -
This post has now been edited, but we would like to address some of the original claims, since many people have read them. In particular, the author claims:
Here is some context:
- The author emailed the Community Health team about 7 months ago, when she shared some information about interpersonal harm; someone else previously forwarded us some anonymous information that she may have compiled. Before about 7 months ago, we hadn’t been in contact with her.
- The information from her included serious concerns about various people in the Bay Area, most of whom had no connection to EA as far as we know. 4 of the accused seemed to be possibly or formerly involved with EA. CEA will not allow those 4 people at our events (though for context most of them haven’t applied). As we’ve said before, we’re grateful to her for this information.
- In addition, she later sent us some information that we had also previously received from other sources and we were already taking action on. We appreciate people sharing information even
... (read more)
I'm confused why people keep insisting this is a "CEA" decision even after Owen Cotton-Barratt's clarification (which I assume everyone commenting has read).
I see the process on deciding to purchase Wytham Abbey as:
To the extent that anyone is responsible for this decision, it's primarily (1) Owen, and (2) his funder(s). I don't think (3) is much to blame here. Also, CEA the organization is distinct from EV, their fiscal sponsor.
I think if you think this is an ineffective use of limited resources, you absolutely should feel entitled to critique it! In many ways this is what our movement is about! But I think you should place the burden of blame on the actual decision-makers, and not vaguely associated institutions.
Hey Maya, I like your post. It has a very EA conversational style to it which will hopefully help it be well received and I'm guessing took some effort.
A problem I can't figure out, which you or someone else might be able to help suggest solutions to -
- If I (or someone else) post about something emotional without suggestions for action, everyone's compassionate but nothing happens, or people suggest actions that I don't think would help
- If I (or someone else) post about something emotional and suggest some actions that could help fix it, people start debating those actions, and that doesn't feel like the emotions are being listened to
- But just accepting actions because they're linked to a bad experience isn't the right answer either, because someone could have really useful experience to share but their suggestions might be totally wrong
If anyone has any suggestions, I'd welcome them!
Here's to not losing faith 🥂
Here's a Q&A which answers some of the questions raised by reviewers of early drafts. (I planned to post it quickly, but your comments came in so soon! Hopefully some of those comments find a reply here.)
"Do you not think we should work on x-risk?"
"Do you think the authors you critique have prevented alternative frameworks from being applied to Xrisk?"
"Do you hate longtermism?"
"You’re being unfair to Nick Bostrom. In the vulnerable world hypothesis, Bostrom merely speculates that such a surveillance system may, in a hypothetical world in which VWH is true, be the only option"
- It doesn't matter whether Nick Bostrom speculates about or wants to implement surveillance globally. With respect to what we talk about (the justification of extreme actions), what matters is how readers perceive his work and who the readers are.
- There’s some hedging i
... (read more)
I think it's great that CEA increased the event size on short notice. It's hard to anticipate everything in advance for complex projects like this one, and I think it's very cool that when CEA realized the potential mistake, it fixed the issue and expanded capacity in time.
I'd much rather have a CEA that gets important things broadly right and acts swiftly to fix any issues in time, than a CEA that overall gets less done due to risk aversion resulting from pushback from posts like this one*, or one that stubbornly sticks to early commitments rather than flexibly adjusting its plans.
I also feel like the decision not to worry too much about Covid seems correct given the most up-to-date risk estimates, similar to how conference organizers usually don't worry too much about the risk of flu/norovirus outbreaks.
(Edit - disclosure: From a legal perspective, I am employed by CEA, but my project (EA Funds) operates independently (meaning I don't report to CEA staff), and I wasn't involved in any decisions related to EA Global.)
* Edit: I don't mean to discourage thoughtful critiques like this post. I just don't want CEA to become more risk-averse because of them.
I think you left out another key reason: you do not agree with lots of EAs about lots of things and think that telling people you are an EA will give them false impressions about your beliefs.
I am a development economist, and I often tell other development economists that I am an EA. That tells them "this is someone who cares a lot about cost-effectiveness and finding impactful interventions, not just answering questions for the sake of answering questions", which is true. But if I said I was an EA to random people in the Bay Area, they would infer "this is someone who thinks AI risk is a big deal", which is not true of me, so I don't want to convey that. This example could apply to lots of people who work on global development or animal welfare and don't feel compelled by AI risk. (ETA: one solution would be to signal the flavor of EA you're most involved in, e.g. "bed nets not light cone" but it s... (read more)
I find myself particularly disappointed in this as I was working for many years on projects that were intended to diversify the funding landscape, but Open Phil declined to fund those projects, and indeed discouraged me from working on them multiple times (most notably SFF and most recently Lightspeed Grants).
I think Open Phil could have done a much better job at using the freedom it had to create a diverse funding landscape, and I think Open Phil is largely responsible for the degree to which the current funding landscape is as centralized as it currently is.
I'm really excited about this! :)
One further thought on pitching Athena: I think there is an additional, simpler, and possibly less contentious argument about why increasing diversity is valuable for AI safety research, which is basically "we need everyone we can get". If a large percentage of relevant people don't feel as welcome/able to work on AI safety because of, e.g., their gender, then that is a big problem. Moreover, it is a big problem even if one doesn't care about diversity intrinsically, or even if one is sceptical of the benefits of more diverse research teams.
To be clear, I think we should care about diversity intrinsically, but the argument above nicely sidesteps replies of the form "yes, diversity is important, but we need to prioritise reducing AI x-risk above that, and you haven't given me a detailed story for how diversity in-and-of-itself helps AI x-risk, e.g., one's gender does not, prima facie, seem very relevant to one's ability to conduct AI safety research". This also isn't to dispute any of your reasons in the post, by the way, merely to add to them :)
I have read the OP. I have skim read the replies. I'm afraid I am only making this one post because involvement with online debates is very draining for me.
My post is roughly structured along the lines of:
Kat is a good friend, who I trust and think highly of. I have known her personally (rather than as a loose acquaintance, as I did for years prior) since summer 2017. I do not know Emerson or his brother from Adam.
I see somebody else was asked about declaring interests when they spoke positively about Kat. I have never been employed by Kat. Back in 2017, Charity Science had some legal and operational infrastructure (e.g. charity status, tax stuff) which was hard to arrange. And during that time, .impact - which later became Rethink Charity - collaborated with Charity Science, sheltering under that charitable status in order to be able to hire people, legally process funds and so forth. So indirectly the entity that employed me was helped out by Charity Science.
However, I never collaborated in a work sense... (read more)
While this is a very valuable post, I don't think the core argument quite holds, for the following reasons:
- Markets work well as information aggregation algorithms when it is possible to profit a lot from being the first to realize something (e.g., as portrayed in "The Big Short" about the Financial Crisis).
- In this case, there is no way for the first movers to profit big. Sure, you can take your capital out of the market and spend it before the world ends (or everyone becomes super-rich post-singularity), but that's not the same as making a billion bucks.
- You can argue that one could take a short position on interest rates (e.g., in the form of a loan) if you believe that they will rise at some point, but that is a different bet from short timelines - what you're betting on then, is when the world will realize that timelines are short, since that's what it will take before many people choose to pull out of the market, and thus drive interest rates up. It is entirely possible to believe both that timelines are short, and that the world won't realize AI is near for a while yet, in which case you wouldn't do this. Furthermore, counterparty risks tend to get in the way of taking up
... (read more)
I want to say that I have tremendous respect for you, I love your writing and your interviews, and I believe that your intentions are pure.
How concerned were you about crypto generally being unethical? Even without knowledge of the possibly illegal, possibly fraudulent behaviour, encouraging people to invest in "mathematically complex garbage" seemed very unethical (due to the harm to the investor and to the economy as a whole).
SBF seemed like a generally dishonest person. He ran ads saying, "don't be like Larry". But in this FT interview, he didn't seem to have a lot of faith that he was helping his customers.
"Does he worry about the clients who lose life-changing sums through speculation, some trading risky derivatives products that are banned in several countries? The subject makes Bankman-Fried visibly uncomfortable. Throughout the meal, he has shifted in his seat, but now he has his crossed arms and legs all crammed into a yogic pose."
It is now clear that he is dishonest, given that he said on Twitter that FTX US was safe when it wasn't (please correct me if I'm wrong here).
I think that even SBF thinks/thought crypto is garbage, yet he spent billions bailing out a scam industry, poss... (read more)
Hey, yeah, for the last few months CEA and Forethought and a few other organizations have been working to try to help accurately explain EA and related ideas in the media. We've been working with experienced communications professionals. CEA recently also hired a Head of Communications to lead these efforts, and they're starting in September. I think that it was a mistake on CEA's part not to do more of this sooner.
I think that there might be a post sharing more about these efforts in the future (but not 100% sure this will happen).
Some notes from CEA:
I didn't downvote (because as you say it's providing relevant information), but I did have a negative reaction to the comment. I think the generator of that negative reaction is roughly: the vibe of the comment seems more like a political attempt to close down the conversation than an attempt to cooperatively engage. I'm reminded of "missing moods"; it seems like there's a legitimate position of "it would be great to have time to hash this out but unfortunately we find it super time consuming so we're not going to", but it would naturally come with a mood of sadness that there wasn't time to get into things, whereas the mood here feels more like "why do we have to put up with you morons posting inaccurate critiques?". And perhaps that's a reasonable position, but it at least leaves a kind of bad taste.
The accusation of sexual misconduct at Brown is one of the things that worried us at CEA. But we approached Jacy primarily out of concern about other more recent reports from members of the animal advocacy and EA communities.
I haven't looked into the evidence here at all, but fwiw the section on 'sharing information on Ben Pace' is deranged. I know you are using this as an example of how unfounded allegations can damage someone's reputation. But in repeating them, you are also repeating unfounded allegations and damaging someone's reputation. You are also obviously doing this in retaliation for him criticising you. You could have used an infinite number of examples of how unfair allegations can damage someone's reputation, including e.g. known false allegations against celebrities or other people reported in the news, or hypotheticals.
Just share your counter-evidence, don't in the process try to smear the person criticising you.
For someone who seems to have made at least 20 comments on this post, why haven't you bothered to at least look into the evidence they provided?
I'm concerned that there's an information cascade going on. That is, some claims were made about people being negatively affected by having posted public criticism; as a result some people made critical posts anonymously; that reinforces the perception that the original claim is true; more people post anonymously; the cycle continues.
But I just roll to disbelieve that people facing bad consequences for posting criticism is a serious problem. I can totally believe that it has happened at some point, but I'd be very surprised if it's widespread. Especially given how mild some of the stuff that's getting anonymously posted is!
So I think there's a risk that we meme ourselves into thinking there's an object level problem when there actually isn't. I would love to know what if any actual examples we have of this happening.
Hi, I think on balance I appreciate this post. This is a hard thing for me to say, as the post has likely caused nontrivial costs to some people rather close to me, and has broken some norms that I view as both subtle and important. But on balance I think our movement will do better with more critical thinkers, and more people with critical pushback when there is apparent divergence between stated memes and revealed goals.
I think this is better both culturally, and also is directly necessary to combat actual harm if there is also actual large-scale wrongdoing that agreeable people have been acculturated to not point out. I think it will be bad for the composition and future of our movement if we push away young people who are idealistic and disagreeable, which I think is the default outcome if posts like this only receive critical pushback.
So thank you for this post. I hope you stay and continue being critical.
These are anonymous quotes from two people I know and vouch for about the TIME piece on gender-based harassment in the EA community:
Anon 1: I think it's unfortunate that the women weren't comfortable with the names of the responsible parties being shared in the article. My understanding is that they were not people strongly associated with EA, some of them had spoken out against EA and had never identified as an EA or had any role in EA, and an article with their names would have given people a very different impression of what happened. I guess I think someone should just spell out who the accused parties are (available from public evidence).
Anon 2: I want EAs to not be fucking stupid 😭
"Oh geez this Times reporter says we're doing really bad things, we must be doing really bad things A LOT, that's so upsetting!"
yet somehow "This New York Times reporter says Scott Alexander is racist and bad, but he's actually not, ugh I hate how the press is awful and lies & spins stuff in this way just to get clicks"
And yes, this included reports of people, but like I've met the first person interviewed in the article and she is hella scary and not someone I would trust to report accurately ... (read more)
When I read this part of your bullet point summary, I thought someone at Open Phil might be related to someone at Helena. But then it became clear that you mean that the Helena founder dropped out of college, supported with money from his rich investor dad, to start a project that you think "(subjectively) seems like" self-aggrandizement.
(The word "inherent" probably makes clear what you mean; I just had a prior that nepotism is a problem when someone receives funding, and I didn't know that you were talking about other funding that Helena also received.)
Hi, I'm pretty new here, so please correct me if I'm wrong. I had, however, one important impression which I think I should share.
EA started as a small movement and right now is expanding like crazy. The thing is, it still has a "small movement" mentality.
One of the key aspects of this is trust. I have an impression that EA is super trust-based. I have a feeling that if somebody calls themselves an EA, everybody assumes that they probably have super altruistic intentions and mostly aligned values. It is lovely. But maybe dangerous?
In a small movement everybody knows everyone, and if somebody does something suspicious, the whole group can very easily spread the warning. In a large group, however, it won't work. So if somebody is a grifter, an amoral person, just an a*hole or anything similar - they can super easily abuse the system, just by, for example, changing the EA crowd they talk to. I have an impression that there was a push towards attracting the maximum number of people possible. I assume that it was thought through and there is a value added in it. It may, however, have a pretty serious cost.
I thought I would like this post based on the title (I also recently decided to hold off for more information before seriously proposing solutions), but I disagree with much of the content.
A few examples:
I think we can safely say, at this point, with >95% confidence that SBF basically committed fraud even if not technically in the legal sense (edit: but also seems likely to be fraud in the legal sense), and it's natural to start thinking about the implications of this and in particular to be very clear about our attitude toward the situation if fraud indeed occurred, as looks very likely. Waiting too long has serious costs.
... (read more)

The Belgian senate votes to add animal welfare to the constitution.
It's been a journey. I work for GAIA, a Belgian animal advocacy group that for years has tried to get animal welfare added to the constitution. Today we were present as a supermajority of the senate came out in favor of our proposed constitutional amendment. The relevant section reads:
It's a very good day for Belgian animals but I do want to note that:
If there's interest I will make a full post about it
if/once it passes the Chamber.

EDIT: Translated the linked article on our site into English.
I'm sorry to be pushing on this when it seems like you are doing the right thing, but could you elaborate more on this sentence from the article?
Why was she being put up in your house and not a hotel, if you weren't affiliated with the group she was interviewing for? I think this is the part a lot of people were sketched out by, so more context would be helpful.
Sorry I'm mostly trying to take a day away from the forum, but someone let me know that it would be helpful to chime in here. Essentially what happened:
(I'm eliding details to reduce risk of leaking information about the person's identity.)
I think your consequentialist analysis is likely wrong and misguided. I think you're overstating the effects of the harms Bostrom perpetrated?
I think a movement where our leading intellectuals felt pressured to distort their views for social acceptability is a movement that does a worse job of making the world a better place.
Bostrom's original email was bad and he disavowed it. The actual apology he presented was fine IMO; he shouldn't have pretended to believe that there are definitely no racial differences in intelligence.
tl;dr:
In the context of interpersonal harm:
1. I think we should be more willing than we currently are to ban or softban people.
2. I think we should not assume that CEA's Community Health team "has everything covered".
3. I think more people should feel empowered to tell CEA CH about their concerns, even (especially?) if other people appear to not pay attention or do not think it's a major concern.
4. I think the community is responsible for helping the CEA CH team with having a stronger mandate to deal with interpersonal harm, including some degree of acceptance of mistakes of overzealous moderation.
(all views my own) I want to publicly register what I've said privately for a while:
For people (usually but not always men) whom we have considerable suspicion of having been responsible for significant direct harm within the community, we should be significantly more willing than we currently are to take action, and accept the associated tradeoffs, to limit their ability to cause more harm in the community.
Some of these actions may look pretty informal/unofficial (gossip, explicitly warning newcomers against specific people, keep an unofficial eye out for some people during par... (read more)
After ~5 minutes of online research on Emerson Spartz's past CEO role at his previous company "Dose", it looks like there were a lot more "disgruntled ex-employee[s]" (even if this is external to EA).
... (read more)
Overall, CEO approval is at 0%. Some examples among many:
The timeline doesn't make sense for this version of events at all. Eliezer was uninformed on this topic in 1999, at a time when Robin Hanson had already written about gambling on scientific theories (1990), prediction markets (1996), and other betting-related topics, as you can see from the bibliography of his Futarchy paper (2000). Before Eliezer wrote his sequences (2006-2009), the Long Now Foundation already had Long Bets (2003), and Tetlock had already written Expert Political Judgment (2005).
If Eliezer had not written his sequences, forecasting content would have filtered through to the EA community from contacts of Hanson. For instance, through blogging by other GMU economists like Caplan (2009). And of course, through Jason Matheny, who worked at FHI, where Hanson was an affiliate. He ran the ACE project (2010), which led to the science behind Superforecasting, a book that the EA community would certainly have discovered.
(Writing from OP’s point of view here.)
We appreciate that Nuño reached out about an earlier draft of this piece and incorporated some of our feedback. Though we disagree with a number of his points, we welcome constructive criticism of our work and hope to see more of it.
We’ve left a few comments below.
*****
The importance of managed exits
We deliberately chose to spin off our CJR grantmaking in a careful, managed way. As a funder, we want to commit to the areas we enter and avoid sudden exits. This approach:
Exiting a program requires balancing:
- the cost of additional below-the-bar spending during a slow exit;
- the risks from a faster
... (read more)

I am ridiculously late to the party, and I must confess that I have not read the entire article.
My comment is about what I would expect to happen if EA decided to shift towards encouraging pro-growth policies. What I have to say is perhaps a refining of objection 5.4, politicization. It is how I perceive this would be instantiated. My perceptions are informed by being from a middle-income country (Brazil) and living in another (Chile), while having lived in the developed world (America) to know what it's like.
The authors correctly acknowledge that this has a "politicized nature". For the time being, the only way to enact pro-growth policies would be to influence those who hold political power in the target countries.
My concern about this is: people in such countries do not want these policies. They show that by how they think, how they act, how they vote, how they protest. Here in Chile, for example, people have been fighting tooth and nail against the policies that made the country the wealthiest, most educated one in South America, the only OECD member in the subcontinent. The content of the protests is explicitly against the pro-market policies that have prevailed... (read more)
I agree with many of Leopold's empirical claims, his timelines, and much of his analysis. I'm acting on it myself in my planning as something like a mainline scenario.
Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:
* A small circle of the smartest people believe this
* I will give you a view into this small elite group, who are the only ones who are situationally aware
* The inner circle longed TSMC way before you
* If you believe me, you can get 100x richer -- there's still alpha, you can still be early
* This geopolitical outcome is "inevitable" (sic!)
* In the future the coolest and most elite group will work on The Project. "see you in the desert" (sic)
* Etc.
Combined with a lot of praising retweets on launch day that were clearly coordinated behind the scenes, it gives me the feeling that the piece was deliberately written to meme a narrative into existence via self-fulfilling prophecy, rather than to infer a forecast via analysis.
As a sidenote, this felt to me like an indication of how different the AI safety adjacent community is now to when I joined it about a decade ago. In the early days of this space, I expect a piece ... (read more)
I’d like for Marisa to be remembered for all the many ways she contributed to the EA Community, and the causes we all care about.
She contributed in so many ways, that I know I’m going to miss a bunch of things. I guess that was one striking feature of Marisa – she saw things that needed to be done, or heard people’s requests for help or advice, and she didn’t hesitate to leap in to help. In particular, she did an impressive amount to help us be a welcoming, inclusive and supportive community for newcomers to EA and to people of backgrounds underrepresented in EA.
I think she really embodied the EA principles. Many of the things she did were unpaid, unglamorous, and sometimes tedious. But she took on all these tasks with eagerness because they were important and needed doing.
This is certainly not a full list - please feel free to add more information if you know it.
- In her early EA days (2017) Marisa volunteered at Rethink Charity, helping out across a number of Rethink Charity projects, and was hired part time to work on operations. We could throw her a wide variety of problems and could deeply trust that she’d somehow manage to work them all out.
- During
... (read more)

Super sorry to see you go, Max. It's honestly kind of hard to believe how different CEA is today from when I joined, and a lot of that is due to your leadership. CEA has a bunch of projects going on, and the fact that you can step down without these projects being jeopardized is a strong endorsement of the team you've built here.
I look forward to continuing to work with you in an advisory role!
I'm glad that FLI put this FAQ out, but I'm nervous that several commenters are swinging from one opinion (boo, FLI) to the opposite (FLI is fine! Folks who condemned FLI were too hasty!) too quickly.
This FAQ only slightly changed my opinion on FLI's grantmaking process. My best guess is that something went very wrong with this particular grant process. My reasoning:
I'd be surprised if FLI's due diligence step is intended to be a substantial part of the assessment process. My guess is that due diligence might usually be more about formalities like answering: can we legally pay this person? Is the person who they say they are? And not: is this a good grant to make?
It seems like FLI would be creating a huge hassle if they regularly sent out "intention to issue a grant" to prospective grantees (with the $ amount especially), only to withdraw support later. It would be harmful for the prospective grantees by giving them false hopes (could cause them to change their plans thinking the money is coming), and annoying for the grant maker because I suspect they'd be asked to explain why they changed their mind.
If indeed FLI does regularly reject grants at due diligence stage, that would update me towards thinking nothing went too badly with this particular grant (and I'd like to know their reasons for doing that as I'm probably missing something).
Note - I'm speaking for myself not CEA (where I work).
Most EAs I've met over the years don't seem to value their time enough, so I worry that the frugal option would often cost people more impact in terms of time spent (e.g. cooking), and it would implicitly encourage frugality norms beyond what actually maximizes altruistic impact.
That said, I like options and norms that discourage fancy options that don't come with clear productivity benefits. E.g. it could make sense to pay more for a fancier hotel if it has substantially better Wi-Fi and the person might do some work in the room, but it typically doesn't make sense to pay extra for a nice room.
I’ve heard multiple reports of people being denied jobs around AI policy because of their history in EA. I’ve also seen a lot of animosity against EA from top organizations I think are important - like A16Z, Founders Fund (Thiel), OpenAI, etc. I’d expect that it would be uncomfortable for EAs to apply to or work at these latter places at this point.
This is very frustrating to me.
First, it makes it much more difficult for EAs to collaborate with many organizations where these perspectives could be the most useful. I want to see more collaborations and cooperation - EAs not being welcome at many orgs makes this very difficult.
Second, it creates a massive incentive for people not to work in EA or on EA topics. If you know it will hurt your career, then you’re much less likely to do work here.
And a lighter third - it’s just really not fun to have a significant stigma associated with you. This means that many of the people I respect the most, and think are doing some of the most valuable work out there, will just have a much tougher time in life.
Who’s at fault here? I think the first big issue is that resistance gets created against all interesting and powerful groups. There are sim... (read more)
It's not clear to me how far this is the case.
I agree that:
- Selection bias (from EAs with more negative reactions dropping out) could mean that the true effects are more negative.
- If we knew large numbers of people were leaving EA, this would be another useful datapoint, though I've not seen much evidence of this myself. Formally surveying the community to see how many people they know of who have left could be useful to adjudicate this.
- We could also conduct a 'non-EA Survey' which tries to reach people who have dropped out of EA, or who would be in EA's target audience but who declined to join EA (most likely via referrals), which would be more systematic than anecdotal evidence. RP discussed doing with
... (read more)

Here’s a followup with some reflections.
Note that I discuss some takeaways and potential lessons learned in this interview.
Here are some (somewhat redundant with the interview) things I feel like I’ve updated on in light of the FTX collapse and aftermath:
- The most obvious thing that’s changed is a tighter funding situation, which I addressed here.
- I’m generally more concerned about the dynamics I wrote about in EA is about maximization, and maximization is perilous. If I wrote that piece today, most of it would be the same, but the “Avoiding the pitfalls” section would be quite different (less reassuring/reassured). I’m not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:
- It looks to me like the community needs to beef up and improve investments in activities like “identifying and warning about bad actors in the community,” and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.)
- I’ve long wanted to try to write up a detailed intellectual case against what one might call “hard-core utilita
... (read more)

Thanks for the question, Denise. Probably x-risk reduction, although on some days I’d say farmed animal welfare. Farmed animal welfare charities seem to significantly help multiple animals per dollar spent. It is my understanding that global health charities (and other charities that help currently living humans) spend hundreds or even thousands of dollars to help one human in the same way. I think that human individuals are more important than other animals, but not thousands of times more important.
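To make the arithmetic behind that comparison explicit, here is a minimal sketch. The symbols c_h, c_a, and w are introduced purely for illustration, and the example figures are hypothetical rather than taken from the comment.

```latex
% Let c_h = dollars a global health charity spends to help one person,
%     c_a = dollars an animal charity spends to comparably help one animal,
%     w   = how many times more a human's interests are weighted than an animal's.
% The animal charity does more good per dollar exactly when
\[
  w \;<\; \frac{c_h}{c_a}.
\]
% Hypothetical figures: with c_h = $3,000 per person helped and c_a = $1 per
% animal helped, the global health charity only wins if one human counts for
% more than about 3,000 animals -- the threshold the commenter is doubting.
```

The sketch only makes the point that the comparison turns on a cost ratio versus a moral weight, which is the trade-off the comment describes.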
Sometimes when I lift my eyes from spreadsheets and see the beauty and richness of life, I really don’t want all of this to end. And then I also think about how digital minds could have even richer and better experiences: they could be designed for extreme happiness in the widest sense of the word. And if only a tiny fraction of world’s resources could be devoted to the creation of such digital minds, there could be bazillions of them thriving for billions of years. I’m not sure if we can do much to increase this possibility, maybe just spread this idea a little bit (it’s sometimes called hedonium or utilitronium). So I was thinking of switching my career to x-risk reduction if I could manage to find a way to be... (read more)
I agree that we ignore experts in favor of people who are more value-aligned. Seems like a mistake.
Maya - thanks for a thoughtful, considered, balanced, and constructive post.
Regarding the issue that 'Effective Altruism Has an Emotions Problem': this is very tricky, insofar as it raises the issue of neurodiversity.
I've got Aspergers, and I'm 'out' about it (e.g. in this and many other interviews and writings). That means I'm highly systematizing, overly rational (by neurotypical standards), more interested in ideas than in most people, and not always able to understand other people's emotions, values, or social norms. I'm much stronger on 'affective empathy' (feeling distressed by the suffering of others) than on 'cognitive empathy' (understanding their beliefs & desires using Theory of Mind.)
Let's be honest. A lot of us in EA have Aspergers, or are 'on the autism spectrum'. EA is, to a substantial degree, an attempt by neurodivergent people to combine our rational systematizing with our affective empathy -- to integrate our heads and our hearts, as they actually work, not as neurotypical people think they should work.
This has led to an EA culture that is incredibly welcoming, supportive, and appreciative of neurodivergent people, and that capitalizes on our distincti... (read more)
Thanks so much for your post here! I spent 5ish years as a litigator and couldn't agree more with this. As an additional bit of context for non-lawyers, here is how discovery works in a large civil trial, from someone who used to do it:
Like a bird building a nest at a landfill, it's hard to know what throwaway comment a lawyer might make something out of.
I really don't understand how you could have read that whole interview and see SBF as incompetent rather than a malicious sociopath. I know this is a very un-EA-forum-like comment, but I think it's necessary to say.
I was really looking forward to maybe implementing impact markets in collaboration with Future Fund plus FTX proper if you and they wanted, and feel numb with regard to this shocking turn. I really believed FTX had some shot at 'being the best financial hub in the world', SBF 'becoming a trillionaire', and this longshot notion I had of impact certificates being integrated into the exchange, funding billions of dollars of EA causes through it in the best world. This felt so cool and far out to imagine. I woke up two days ago and this dream is now ash. I have spiritually entangled myself with this disaster.
I don't want to be the first commenter to be that guy, and forgive me if I'm poking a wound, but when you have the time and slack can you please explain to us to what extent you guys grilled FTX leadership about the integrity of the sources of money they were giving you? Surely you had an inside-view model of how risky this was if it blew up? If it's true SBF had a history of acting unethically before (rumors, I don't know), isn't that something to have thoroughly questioned and spoken against? If there was anyone non-FTX who could have pressured them to act ethically, it would have been you. As an outsider it felt like y'all were in a high-trust, closely connected relationship with each other going back a decade.
In any case, thank you for what you've done.
Sven Rone should've won a prize in the Red Teaming contest[1]:
The Effective Altruism movement is not above conflicts of interest
[published Sep 1st 2022]
... (read more)

I wrote that comment over a month ago. And I actually followed it up with a more scathing comment that got downvoted a lot, and that I deleted out of a bit of cowardice, I suppose. But here's the text:
Consider this bit from the origin story of FTX:
Binance, you say? This Binance?
... (read more)

Wow, I didn't see it at the time but this was really well written and documented. I'm sorry it got downvoted so much and think that reflects quite poorly on Forum voting norms and epistemics.
Maybe hold off on this sentiment until we know exactly what they were doing with customer funds? It could age quite badly.
The linked article seems to overstate the extent to which EAs support totalitarian policies. While it is true that EAs are generally left-wing and have more frequently proposed increases in the size & scope of government than reductions, Bostrom did commission an entire chapter of his book on the dangers of a global totalitarian government from one of the world's leading libertarian/anarchists, and longtermists have also often been supportive of things that tend to reduce central control, like charter cities, cryptocurrency and decentralised pandemic control.
Indeed, I find it hard to square the article's support for ending technological development with its opposition to global governance. Given the social, economic and military advantages that technological advancement brings, it seems hard to believe that the US, China, Russia etc. would all forgo scientific development, absent global coordination/governance. It is precisely people's skepticism about global government that makes them treat AI progress as inevitable, and hence seek other solutions.
This is what ACE's "overview" lists as Anima's weaknesses:
Their "comprehensive review" doesn't mention the firing of the CEO as a consideration behind their low rating. The primary reason for their negative evaluation seems to be captured in the following excerpt:
... (read more)

Is it possible to elaborate on how certain grants but not others would unusually draw on GV's bandwidth? For example, what is it about digital minds work that draws so much more bandwidth than technical AI safety grants? Personally I find that this explanation doesn't actually make any sense as offered without more detail.
I think the key quote from the original article is "In the near term, we want to concentrate our giving on a more manageable number of strategies on which our leaders feel more bought in and with which they have sufficient time and energy to engage." Why doesn't Good Ventures just want to own the fact that they're just not bought in on some of these grant areas? "Using up limited capacity" feels like a euphemism.
This is a very unfortunate situation, but as a general piece of life advice for anyone reading this: expressions of interest are not commitments and should not be "interpreted" -- let alone acted upon! -- as such.
For example, within academia, a department might express interest in having Prof X join their department. But there's no guarantee it will work out. And if Prof. X prematurely quit their existing job, before having a new contract in hand, they would be taking a massive career risk!
(I'm not making any comment on the broader issues raised here; I sympathize with all involved over the unfortunate miscommunication. Just thought it was important to emphasize this particular point. Disclosure: I've recently had positive experiences with EAIF.)
This is entirely consistent with two other applications I know of from 2023, both of which were funded but experienced severe delays and poor/absent/straightforwardly unprofessional communication.
Thank you! This post says very well a lot of things I had been thinking and feeling in the last year but not able to articulate properly.
I think it's very right to say that EA is a "do-ocracy", and I want to focus in on that a bit. You talked about whether EA should become more or less centralized, but I think it's also interesting to ask "Should EA be a do-ocracy?"
My response is a resounding yes: this aspect of EA feels (to me) deeply linked to an underrated part of the EA spirit. Namely, that the EA community is a community of people who not only identify problems in the world, but take personal action to remedy them.
- I love that we have a community where random community members who feel like an idea is neglected feel empowered to just do the research and write it up.
- I love that we have a community where even those who do not devote much of their time to action take the very powerful action of giving effectively and significantly.
- I love that we have a community where we fund lots of small experimental projects that people just thought should exist.
- I love that most of our "big" orgs started with a couple of people in a basement because they thought it was a
... (read more)

In your recent Cold Takes post you disclosed that your wife owns equity in both OpenAI and Anthropic. (She was appointed to a VP position at OpenAI, as was her sibling, after you joined OpenAI's board of directors[1]). In 2017, under your leadership, OpenPhil decided to generally stop publishing "relationship disclosures". How do you intend to handle conflicts of interest, and transparency about them, going forward?
You wrote here that the first intervention that you'll explore is AI safety standards that will be "enforced via self-regulation at first, and potentially government regulation later". AI companies can easily end up with "self-regulation" that is mostly optimized to appear helpful, in order to avoid regulation by governments. Conflicts of interest can easily influence decisions w.r.t. regulating AI companies (mostly via biases and self-deception, rather than via conscious reasoning).
EDIT: you joined OpenAI's board of directors as part of a deal between OpenPhil and OpenAI that involved recommending a $30M grant to OpenAI. ↩︎
Would be interested in your (eventual) take on the following parallels between FTX and OpenAI:
I think this model is kind of misleading, and that the original astronomical waste argument is still strong. It seems to me that a ton of the work in this model is being done by the assumption of constant risk, even in post-peril worlds. I think this is pretty strange. Here are some brief comments:
- If you're talking about the probability of a universal quantifier, such as "for all humans x, x will die", then it seems really weird to say that this remains constant, even when the thing you're quantifying over grows larger.
- For instance, it seems clear that if there were only 100 humans, the probability of x-risk would be much higher than if there were 10^6 humans. So it seems like if there are 10^20 humans, it should be harder to cause extinction than if there are 10^10 humans.
- Assuming constant risk has the implication that human extinction is guaranteed to happen at some point in the future, which puts sharp bounds on the goodness of existential risk reduction.
- It's not that hard to get exponentially decreasing probability on universal quantifiers if you assume independence in survival amongst some "unit" of humanity. In computing applications, it's not that hard to drive down the probability of er... (read more)
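To illustrate the independence point in the final bullet above, here is a minimal sketch; the per-unit survival probability p and the number of units n are illustrative symbols, not taken from the comment.

```latex
% Suppose humanity is divided into n "units" (e.g. self-sustaining settlements),
% and in a given catastrophe each unit independently survives with probability
% p > 0. Extinction requires every unit to be wiped out, so
\[
  \Pr[\text{extinction}] \;=\; (1 - p)^{n},
\]
% which shrinks exponentially as n grows. If the number of roughly independent
% units scales with population, extinction is much harder to cause at 10^20
% people than at 10^10, which is the comment's objection to assuming a
% constant risk rate.
```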
Good to see a post that loosely captures my own experience of EAG London and comes up with a concrete idea for something to do about the problem (if a little emotionally presented).
I don't have a strong view on the ideal level of transparency/communication here, but something I want to highlight is: Moving too slowly and cautiously is also a failure mode.
In other words, I want to emphasise how important "this is time consuming, and this time is better spent making more grants/doing something else" can be. Moving fast and breaking things tends to lead to much more obvious, salient problems and so generally attracts a lot more criticism. On the other hand, "Ideally, they should have deployed faster" is not a headline. But if you're as consequentialist as the typical EA is, you should be ~equally worried about not spending money fast enough. Sometimes to help make this failure mode more salient, I imagine a group of chickens in a factory farm just sitting around in agony waiting for us all to get our act together (not the most relevant example in this case, but the idea is try to counteract the salience bias associated with the problems around moving fast). Maybe the best way fo... (read more)
I know this isn't the only thing to track here, but it's worth noting that funding to GiveWell-recommended charities is also increasing fast, both from Open Philanthropy and from other donors. Enough so that last year GiveWell had more money to direct than room for more funding at the charities that meet their bar (which is "8x better than cash transfers", though of course money could be donated to things less effective than that). They're aiming to move $1 billion annually by 2025.
Fwiw, anecdotally my impression is that a more common problem is that people engage in motivated reasoning to justify projects that aren't very good, and that they just haven't thought through their projects very carefully. In my experience, that's more common than outright, deliberate fraud - but the latter may get more attention since it's more emotionally salient (see my other comment). But this is just my impression, and it's possible that it's outdated. And I do of course think that EA should be on its guard against fraud.
Tangentially related: I would love to see a book of career decision worked examples. Rather than 80k's cases, which often read like biographies or testimonials, these would go deeper on the problem of choosing jobs and activities. They would present a person (real or hypothetical), along with a snapshot of their career plans and questions. Then, once the reader has formulated some thoughts, the book would outline what it would advise, what that might depend on, and what career outcomes occurred in similar cases.
A lot of fields are often taught in a case-based fashion, including medicine, poker, ethics, and law. Often, a reader can make good decisions in problems they encounter by interpolating between cases, even when they would struggle to analyse these problems analytically. Some of my favourite books have a case-based style, such as An Anthropologist on Mars by Oliver Sacks. It's not always the most efficient way to learn, but it's pretty fun.
To me this post ignores the elephant in the room: OpenPhil still has billions of dollars left and is trying to make funding decisions relative to where they think their last dollar is. I'd be pretty surprised if having the Wytham money liquid rather than illiquid (or even having £15mn out of nowhere!) really made a difference to that estimate.
It seems reasonable to argue that they're being too conservative, and should be funding the various things you mention in this post, but it also seems plausible to me that they're acting correctly? More importantly, I think this is a totally separate question from whether to sell Wytham, and requires different arguments. Eg I gather that CEEALAR has several times been considered and passed over for funding before; I don't have a ton of context for why, but that suggests to me it's not a slam dunk re being a better use of money.
I can view an astonishing amount of publications for free through my university, but they haven't opted to include this one, weird... So should I pay money to see this "Mankind Quarterly" publication?
When I googled it I found that Mankind Quarterly includes among its founders Henry Garrett, an American psychologist who testified in favor of segregated schools during Brown v. Board of Education; Corrado Gini, who was president of the Italian Genetics and Eugenics Society in fascist Italy; and Otmar Freiherr von Verschuer, who was director of the Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics in Nazi Germany. Verschuer was a member of the Nazi Party and the mentor of Josef Mengele, the physician at the Auschwitz concentration camp infamous for performing human experimentation on prisoners during World War II. Mengele provided Verschuer with human remains from Auschwitz to use in his research into eugenics.
It's funded by the Pioneer Fund which according to wikipedia:
... (read more)

Massive thanks to Ben for writing this report and to Alice and Chloe for sharing their stories. Both took immense bravery.
There's a lot of discussion on the meta-level on this post. I want to say that I believe Alice and Chloe. I currently want to keep my distance from Nonlinear, Kat and Emerson, and would caution others against funding or working with them. I don't want to be part of a community that condones this sort of thing.
I’m not and never have been super-involved in this affair, but I reached out to the former employees following the earlier vague allegations against Nonlinear on the Forum, and after someone I know mentioned they’d heard bad things. It seemed important to know about this, because I had been a remote writing intern at Nonlinear, and Kat was still an occasional mentor to me (she’d message me with advice), and I didn’t want to support NL or promote them if it turned out that they had behaved badly.
Chloe and Alice’s stories had the ring of truth about them to me, and seemed consistent with my experiences with Emerson and Kat — albeit I didn’t know either of them that well and I didn’t have any strongly negative experiences with them.
It seems relevan... (read more)
I applied to attend the Burner Accounts Anonymous meetup and was rejected.
Initially, I received no feedback. Just a standard auto-generated rejection message.
After reaching out to BurnerMeetupBurner for feedback, I learned that I was rejected because of my IQ. The event is apparently only for high IQ individuals.
I feel very disappointed. Not only because I believe that intelligence is not relevant for making a fruitful contribution to the event, but also because of the lack of transparency in the application process.
This makes me consider leaving the EA burner movement and posting under my real name in the future.
I think this is worth talking about, but I think it's probably a bad idea. I should say up front that I have a pretty strong pro-transparency disposition, and the idea of hiding public things from search engines feels intuitively wrong to me.
I think this has similar problems to the proposal that some posts should be limited to logged-in users, and I see two main downsides:
-
-
... (read more)

Discussion of community problems on the Forum is generally more informed and even-handed than I see elsewhere. To take the example of FTX, if you look on the broader internet there was lots of uninformed EA bashing. The discussion on the forum was in many places quite negative, but usually those were places where the negativity was deserved. On most EA community issues the discussion on the Forum is something I would generally want to point interested people at, instead of them developing their perspective with only information available elsewhere.
I expect people would respond to their words being somewhat less publicly visible by starting to talk more as if they are chatting off the record among friends, and that seems very likely to backfire. The Forum has search functionality, RSS feeds, posts with public
Many people are tired of being constantly exposed to posts that trigger strong emotional reactions but do not help us make intellectual progress on how to solve the world's most pressing problems. I have personally decided to visit the Forum increasingly less frequently to avoid exposing myself to such posts, and know several other EAs for whom this is also the case. I think you should consider the hypothesis that the phenomenon I'm describing, or something like it, motivated the Forum team's decision, rather than the sinister motive of "attemp[ting] to sweep a serious issue under the rug".
EA Forum discourse tracks actual stakes very poorly
Examples:
Basically I think EA Forum discourse, Karma voting, and the inflation-adjusted overview of top posts completely fails to correctly track the importance of the ideas presented there. Karma seems to be useful to decide which comments to read, but otherwise its use seems fairly limited.
(Here's a related post.)
Thanks for your thoughtful response.
I'm trying to figure out how much of a response to give, and how to balance saying what I believe vs. avoiding any chance to make people feel unwelcome, or inflicting an unpleasant politicized debate on people who don't want to read it. This comment is a bad compromise between all these things and I apologize for it, but:
I think the Kathy situation is typical of how effective altruists respond to these issues and what their failure modes are. I think "everyone knows" (in Zvi's sense of the term, where it's such strong conventional wisdom that nobody ever checks if it's true) that the typical response to rape accusations is to challenge and victim-blame survivors. And that although this may be true in some times and places, the typical response in this community is the one which, in fact, actually happened - immediate belief by anyone who didn't know the situation, and a culture of fear preventing those who did know the situation from speaking out. I think it's useful to acknowledge and push back against that culture of fear.
(this is also why I stressed the existence of the amazing Community Safety team - I think "everyone knows" that EA doesn't ... (read more)
Yes - I almost can't believe I am reading a senior EA figure suggesting that every major financial institution has an unreasonably prurient interest in the sex lives of their risk-holding employees. EA has just taken a bath because it was worse at financial risk assessment than it thought it was. The response here seems to be to double-down on the view that a sufficiently intelligent rationalist can derive - from first principles - better risk management than the lessons embedded in professional organisations. We have ample evidence that this approach did not work in the case of FTX funding, and that real people are really suffering because EA leaders made the wrong call here.
Now is the time to eat a big plate of epistemically humble crow, and accept that this approach failed horribly. Conspiracy theorising about 'voting rings' is a pretty terrible look.
One of the biggest lessons I learned from all of this is that while humans are quite good judges of character in general, we do a lot worse in the presence of sufficient charisma, and in those cases we can't trust our guts, even when they're usually right. When I first met SBF, I liked him quite a bit, and I didn't notice any red flags. Even during the first month or two of working with him, I kind of had blinders on and made excuses for things that in retrospect I shouldn't have.
It's hard for me to say about what people should have been able to detect from his public presence, because I haven't watched any of his public interviews. I put a fair amount of effort into making sure that news about him (or FTX) didn't show up in any of my feeds, because when it did I found it pretty triggering.
Personally, I don't think his character flaws are at all a function of EA. To me, his character seems a lot more like what I hear from friends who work in politics about what some people are like in that domain. Given his family is very involved in politics, that connection seems plausible to me. This is very uncharitable, but: from my discussions with him he always seemed a lot more interested in power than in doing good, and I always worried that he just saw doing good as an opportunity to gain power. There's obviously no way for me to have any kind of confidence in that assessment, though, and I think people should put hardly any weight on it.
I agree! As a founder, I promise to never engage in fraud, either personally or with my business, even if it seems like doing so would result in large amounts of money (or other benefits) to good things in the world. I also intend to discourage other people who ask my advice from making similar trade-offs.
This should obviously go without saying, and I already was operating this way, but it is worth writing down publicly that I think fraud is of course wrong, and is not in line with how I operate or with the philosophy of EA.
What would have been really interesting is if someone wrote a piece critiquing the EA movement for showing little to no interest in scrutinizing the ethics and morality of Sam Bankman-Fried's wealth.
To put a fine point on it, has any of his wealth come from taking fees from the many scams, Ponzi schemes, securities fraud, money laundering, drug trafficking, etc. in the crypto markets? FTX has been affiliated with some shady actors (such as Binance), and seems to be buying up more of them (such as BlockFi, known for securities fraud). Why isn't there more curiosity on the part of EA, and more transparency on the part of FTX? Maybe there's a perfectly good explanation (and if so, I'll certainly retract and apologize), but it seems like that explanation ought to be more widely known.
I disagree; I know several people who fit this description (5 off the top of my head) who would find this very hard. I think it very much depends on factors like how well networked you are, where you live, how much funding you've received and for how long, and whether you think you could work for an org in the future.
Here's an anonymous form where people can criticize us, in case that helps.
This was a very interesting post. Thank you for writing it.
I think it's worth emphasizing that Rotblat's decision to leave the Manhattan Project was based on information available to all other scientists in Los Alamos. As he recounts in 1985:
That so many scientists who agreed to become involved in the development of the atomic bomb cited the need to do so before the Germans did, and yet so few chose to terminate their involvement when it had become reasonably clear that the Germans would not develop the bomb, provides an additional, separate cautionary tale besides the one your post focuses on. Misperceiving a technological race can, as you note, make people more likely to embark on ambitious projects aimed at accelerating the development of ... (read more)
Thank you Will! This is very much the kind of reflection and updates that I was hoping to see from you and other leaders in EA for a while.
I do hope that the momentum for translating these reflections into changes within the EA community is not completely gone given the ~1.5 years that have passed since the FTX collapse, but something like this feels like a solid component of a post-FTX response.
I disagree with a bunch of object-level takes you express here, but your reflections seem genuine and productive and I feel like me and others can engage with them in good faith. I am grateful for that.
context: I'm relatively new to EA, mid 20s, and a polyamorous woman. Commenting anonymously because I am not yet totally "out" as polyamorous to everyone in my life.
I feel that this post risks conflating and/or unfairly associating polyamory with poor handling of power dynamics and personal/professional boundaries. Such issues can overlap with any relationship structure. Sexual misconduct exists throughout our society, and throughout both monogamous and non-monogamous spaces.
I've experienced a range of sexual misconduct prior to my involvement in EA, and so far have found my dating and professional interactions with men in EA to be high quality, relative to high personal standards. In particular, the openness to and active solicitation of feedback I've experienced is something I've never really experienced outside of polyamory within EA. Since I learned about EA thanks to polyamory (not the other way around), I think I have a pretty different experience than that shared by women in the Time article. Their experience is not a representation of what polyamory done well actually looks like.
Additionally, the Time article fosters skepticism about restorative justice approaches to ... (read more)
I wanted to thank you for sharing. I think it can be hard or scary to raise concerns or feedback to a board like this, and I appreciate it.
(I can only speak for EVF US:)
Since the beginning of all this, we’ve been thinking through board composition questions. In particular, we’ve been discussing what’s needed on the US board and what changes should be made. We’ve also explicitly discussed conflicts of interest and how we should think about that for board composition.
There are a variety of different issues raised in the post and comments, but I want to say something specifically about FTX-related conflicts. In the aftermath of the FTX collapse, EVF UK and EVF US commissioned an outside independent investigation by the law firm Mintz to examine the organizations’ relationship to FTX, Alameda Research, Sam Bankman-Fried, and related individuals. We’re waiting for the results of the investigation to make a determination about whether any board members should be removed for FTX-related reasons. We’re doing this to avoid making rushed decisions with incomplete information. Nick has been recused from all FTX-related decision-making at EVF US. (Nick and Will have also been... (read more)
Hi Ludwig, thanks for raising some of these issues around governance. I work on the research team at Giving What We Can, and I’m responding here specifically to the claims relating to our work. There are a few factual errors in your post, and other areas I’d like to add additional context on. I’ll touch on:
#1 Recommendations
With respect to our recommendations: They are determined by our inclusion criteria, which we regularly link to (for example, on our recommended charities page and on every charity page). As outlined in our inclusion criteria, we rely on our trusted evaluators to determine our giving recommendations. Longview Philanthropy and EA Funds are two of the five trusted evaluators we relied on this giving season. We explicitly outline our conflicts of interest with both organisations on our trusted evaluators page.
We want to provide the best possible giving recommendations ... (read more)
I’m so sorry to hear about your negative experiences in EA community meetups. It is totally not okay for people to feel pressured or manipulated into sexual relationships. The community health team at CEA is available to talk, and will try to help resolve the situation. You can use this form to contact the team (you can be anonymous) or contact Julia Wise julia.wise@centreforeffectivealtruism.org or Catherine Low catherine@centreforeffectivealtruism.org directly.
If a crime has been committed (or you have reason to suspect a crime has been committed), we encourage people to report the crime to the police.
In the future I’d also be happy to talk with community members about the codes of conducts and other processes that CEA and the wider EA community has in place, and listen to their suggestions.
This post is mostly making claims about what a very, very small group of people in a very, very small community in Berkeley think. When throwing around words like "influential leaders" or saying that the claims "often guide EA decision-making" it is easy to forget that.
The term "background claims" might imply that these are simply facts. But many are not: they are facts about opinions, specifically the opinions of "influential leaders"
Do not take these opinions as fact. Take none for granted. Interrogate them all.
"Influential leaders" are just people. Like you and I, they are biased. Like you and I, they are wrong (in correlated ways!). If we take these ideas as background, and any are wrong, we are destined to all be wrong in the same way.
If you can, don't take ideas on background. Ask that they be on the record, with reasoning and attribution given, and evaluate them for yourself.
I'm really sorry that you and so many others have this experience in the EA community. I don't have anything particularly helpful or insightful to say -- the way you're feeling is understandable, and it really sucks :(
I just wanted to say I'm flattered and grateful that you found some inspiration in that intro talk I gave. These days I'm working on pretty esoteric things, and can feel unmoored from the simple and powerful motivations which brought me here in the first place -- it's touching and encouraging to get some evidence that I've had a tangible impact on people.
Reading between the lines, you're a funny writer, are self-aware, were successful enough at work to be promoted multiple times, and have a partner and a supportive family. This is more than what most people can hope for. At some level I think you should be proud of what you've accomplished, not just what you tried and failed to do.
Depression really sucks, and it's unfortunate that this is entangled with trying hard to achieve ambitious EA goals and not succeeding. At the same time, I think the EA community would've done right by its members if most of our "failure stories" looked like yours, albeit I'd prefer perhaps more community support in the longer term.
I think it's important to frame longtermism as particular subset of EA. We should be EAs first and longtermists second. EA says to follow the importance-tractability-crowdedness framework, and allocate funding to the most effective causes. This can mean funding longtermist interventions, if they are the most cost-effective. If longtermist interventions get a lot of funding and hit diminishing returns, then they won't be the most cost-effective anymore. The ITC framework is more general than the longtermist framing of "focus on the long-term future", and allows us to pivot as funding and tractability changes.
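For reference, the framework the comment invokes is usually written as a product of three ratios; this is the standard 80,000 Hours-style decomposition, sketched here rather than quoted from the comment.

```latex
% Standard decomposition of marginal cost-effectiveness into three factors:
\[
  \underbrace{\frac{\text{good done}}{\text{extra dollar}}}_{\text{cost-effectiveness}}
  =
  \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
  \times
  \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
  \times
  \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness (inverse of crowdedness)}}
\]
% As an area absorbs more funding, the marginal tractability and neglectedness
% factors typically fall, so the product -- and hence the ranking of causes --
% can change over time.
```

Written this way, the comment's point is visible in the formula: heavy funding of longtermist interventions shrinks the marginal factors, so the ranking can shift.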
Just to note, many unrelated communities underrepresent black people; e.g., to quote Scott Alexander,
and Manifest likely heavily overrepresented queer and neurodivergent people. It's unclear to me that every single minority group should be represented perfectly in every single community (do we hold EA to this standard? what % of EA talks are given by black people?).
I think it's pretty hard to have your community be about even 1 thing, let alone 1 thing + perfect representation of every group. The sectors that Manifest draws from (forecasting, crypto, heterodoxy, rationalism, EA, tech) probably all have low black representation, so it seems a lot to ask Manifest alone to improve this.
To state the obvious, I don't expect manifest ever to have 50/50 gender representation (though I think it would be better if it were, say, 80/20[1] than like 95/5). To give another example in the forecasting spac... (read more)
I want to flag for Forum readers that I am aware of this post and the associated issues about FTX, EV/CEA, and EA. I have also reached out to Becca directly.
I started in my new role as CEA’s CEO about six weeks ago, and as of the start of this week I’m taking a pre-planned six-week break after a year sprinting in my role as EV US’s CEO[1]. These unusual circumstances mean our plans and timelines are a work in progress (although CEA’s work continues and I continue to be involved in a reduced capacity).
Serious engagement with and communication about questions and concerns related to these issues is (and was already) something I want to prioritize, but I want to wait to publicly discuss my thoughts on these issues until I have the capacity to do so thoroughly and thoughtfully, rather than attempt to respond on the fly. I appreciate people may want more specific details, but I felt that I’d at least respond to let people know I’ve acknowledged the concerns rather than not responding at all in the short-term.
- ^ It’s unusual to take significant time off like this immediately after starting a new role, but this is functionally a substitute for me not taking an extended break bet... (read more)
Many people find the Forum anxiety inducing because of the high amount of criticism. So, in the spirit of Giving Season, I'm going to give some positive feedback and shout-outs for the Forum in 2023 (from my PoV). So, without further ado, I present the 🎄✨🏆 totally-not-serious-worth-no-internet-points-JWS-2023-Forum-Awards: 🏆✨🎄[1]
Best Forum Post I read this year:
10 years of Earning to Give by @AGB: A clear, grounded, and moving look at what it actually means to 'Earn to Give'. In particular, the 'Why engage?' section really resonated with me.
Honourable Mentions:
Best ... (read more)
Hello Jack, I'm honoured you've written a review of my review! Thanks also for giving me sight of this before you posted. I don't think I can give a quick satisfactory reply to this, and I don't plan to get into a long back and forth. So, I'll make a few points to provide some more context on what I wrote. [I wrote the remarks below based on the original draft I was sent. I haven't carefully reread the post above to check for differences, so there may be a mismatch if the post has been updated]
First, the piece you're referring to is a book review in an academic philosophy journal. I'm writing primarily for other philosophers who I can expect to have lots of background knowledge (which means I don't need to provide it myself).
Second, book reviews are, by design, very short. You're even discouraged from referencing things outside the text you're reviewing. The word limit was 1,500 words - I think my review may even be shorter than your review of my review! - so the aim is just to give a brief overview and make a few comments.
Third, the thrust of my article is that MacAskill makes a disquietingly polemical, one-sided case for longtermism. My objective was to point this out and deliber... (read more)
I'm worried and skeptical about negative views toward the community health team and Julia Wise.
My view is informed by the absence of clear objective mistakes described by anyone. It also seems very easy and rewarding to criticize them[1].
I'm increasingly concerned about the dynamic over the last few months where CEA and the Community Health team constantly acts as a lightning rod for problems they have little control over. This dynamic has always existed, but it has become more severe post-SBF.
This seems dysfunctional and costly to good talent at CEA. It is an even deeper issue because they seem to be among the few people trying to take ownership and help EA publicly right now.
I'm not sure what happens if Julia Wise and co. stop.
- ^ The Guzey incident is one example where a detractor seems excessive toward Wise. I share Will Bradshaw's view that this is both minor and harmless, although I respect and would be interested in Nuno's dissenting view.
(Alexey Guzey sent Julia Wise a book chapter, which he would be releasing publicly, that was critical of MacAskill's content in DGB. Wise sent the chapter to MacAskill, which Guzey had asked her not to do. It's unclea
Several nitpicks:
I realise there are legal and other constraints, so maybe I am being harsh, but overall, several components of this post seemed not very "real" or straightforward relative to what I would usually expect from this sort of EA org update.
I think some of this post's criticisms have bite: for example, I agree that EVF suborgs are at significant risk of falling prey to conflicts of interest, especially given the relatively low level of transparency at many of these suborgs, and that EVF should have explicit mechanisms for avoiding this.
However, I think this post largely fails to engage with the reasons so many suborgs have federated with EVF. Based on my experience[1], members of many of these suborgs genuinely consider themselves separate orgs, and form part of EVF mainly because this allows them to be supported by EVF's very well-oiled ops machine. This makes it significantly easier for new EA projects to spin up quickly, while offering high-quality HR and other support to their employees. This is a pretty exciting proposal for any new EA project that doesn't place a high value on being legally independent.
"Breaking up" EVF could thus be very costly from an impact perspective, insofar as it makes the component orgs less effective (which seems likely to me) and necessitates lots of duplication of ops effort. You might argue that it's worth it for the transparency benefits, but I'd want to see serious engagement with ... (read more)
Given the uncertainty in the chronology of events and the nature of how authorship and review occurred, would it not have made sense to reach out to Cremer and Kemp before posting this? It would make any commentary much less speculative and heated. If the OP has done this and not received a reply, they should make that clear (but my understanding is that this was not done, which imo is a significant oversight).
It's quite likely the extinction/existential catastrophe rate approaches zero within a few centuries if civilization survives, because:
If we're more than 50% likely to get to that kind of robust state, which I think is true, and I believe Toby does as well, then the life expectancy of civilization is very long, almost as long on a log scale as with 100%.
Your argument depends on 99%+++ credence that such safe stable states won't be attained, wh... (read more)
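A minimal sketch of the arithmetic behind the log-scale point above, using a hypothetical two-outcome model (the symbols p, L, and l are illustrative assumptions, not figures from the comment):

```latex
% Two-outcome model: with probability p civilization reaches a robust,
% near-zero-risk state and survives ~L years; otherwise it survives ~l years, with L >> l.
\mathbb{E}[\text{lifespan}] = p\,L + (1-p)\,l \;\approx\; p\,L
% On a log scale, p = 0.5 costs only \log_{10} 2 relative to p = 1:
\log_{10}(0.5\,L) = \log_{10} L - \log_{10} 2 \approx \log_{10} L - 0.30
```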
I had a pretty painful experience where I was in a pretty promising position in my career, already pretty involved in EA, and seeking direct work opportunities as a software developer and entrepreneur. I was rejected from EAG twice in a row while my partner, a newbie who just wanted to attend for fun (which I support!!!) was admitted both times. I definitely felt resentful and jealous in ways that I would say I coped with successfully but wow did it feel like the whole thing was lame and unnecessary.
I felt rejected from EA at large and yeah I do think my life plans have adjusted in response. I know there were many such cases! At the height of my involvement I was a very devoted EA, really believed in giving as much as I could bear (time etc. included).
This level of devotion juxtaposed with being turned away from even hanging out with people, it's quite a shock. I think the high devotion version of my life would be quite fulfilling and beautiful, and I got into EA seeking a community for that, but never found it. EAG admissions is a pretty central example of this mismatch to me.
(Writing quickly, sorry if I'm unclear)
Since you asked, here are my agreements and disagreements, mostly presented without argument:
- As someone who is roughly in the target audience (I am involved in hiring for senior ops roles, though it's someone else's core responsibility), I think I disagree with much of this post (eg I think this isn't as big a problem as you think, and the arguments around hiring from outside EA are weak), but in my experience it's somewhat costly and quite low value to publicly disagree with posts like this, so I didn't write anything.
- It's costly because people get annoyed at me.
- It's low value because inasmuch as I think your advice is bad, I don't really need to persuade you you're wrong, I just need to persuade the people who this article is aimed at that you're wrong. It's generally much easier to persuade third parties than people who already have a strong opinion. And I don't think that it's that useful for the counterarguments to be provided publicly.
- And if someone was running an org and strongly agreed with you, I'd probably shrug and say "to each their own" rather than trying that hard to talk them out of it: if a leader really feels passionate about sh
... (read more)
Retrospective grant evaluations
Research That Can Help Us Improve
Some quick responses to Nuño’s article about EA Forum stewardship
I work on the CEA Online Team, which runs the Forum, but I am only speaking for myself. Others on my team may disagree with me. I wrote this relatively quickly so I wouldn’t be surprised if I changed my mind on things upon reflection.
Overall, I really appreciated Nuño’s article, and did not find it to be harsh, overly confrontational, or unpleasant to read. I appreciated the nice messages that he included to me, a person working on the Forum, at the start and end of the piece.
On the design change and addition of features:
People have different aesthetic preferences, and I personally think the current Forum design looks nicer than the 2018 version, plus I think it has better usability in various ways. I like minimalism in some contexts, but I care more about doing good than about making the Forum visually pleasing to me. To that end, I think it is correct for the Forum to have more than just a simple frontpage list of posts plus a “recent discussion” feed (which seems to be the entirety of the 2018 version).
For example, I think adding the “quick takes” and “popular comments” sections to the home page have been real... (read more)
Some other potentially useful references for this debate:
- Emily Oehlsen's/Open Phil's response to Open Phil Should Allocate Most Neartermist Funding to Animal Welfare, and the thread that follows (edit: and other comments there).
- How good is The Humane League compared to the Against Malaria Foundation? by Stephen Clare and AidanGoth for Founders Pledge (using old cost-effectiveness estimates).
- Discussion of the two envelopes problem for moral weights (can get pretty technical):
- Tomasik, 2013-2018
- Karnofsky, 2018, section 1.1
- St. Jules, 2024 (that's me!)
- GiveWell's marginal cost-effectiveness estimates for their top charities, of course
- Some recent-ish (mostly) animal welfare intervention cost-effectiveness estimates:
- Track records of Charity Entrepreneurship-incubated charities (animal and global health)
- Charity Entrepreneurship prospective animal welfare reports and global health reports
- Charity Entrepreneurship Research Training Program (2023) prospective reports
- on animal welfare with cost-effectiveness estimates: Intervention Report: Ballot initiatives to improve broiler welfare in the US by Aashish K and Exploring Corporate Campaigns Against Silk Retailers by Zuzana Sperlova and Mor
... (read more)
If you’re seeing things on the forum right now that boggle your mind, you’re not alone.
Forum users are only a subset of the EA community. As a professional community builder, I’m fortunate enough to know many people in the EA community IRL, and I suspect most of them would think it’d be ridiculous to give a platform to someone like Hanania.
If you’re like most EAs I know, please don’t be dissuaded from contributing to the forum.
I’m very glad CEA handles its events differently.
I was so sorry to learn this.
Some other resources:
5 steps to help someone who may be suicidal
Crisis resources around the world
Years ago Marisa was the first person to put in an application for several EA Globals, to the point where I was curious if she had some kind of notification set up. I asked her about it once, and she was surprised to hear that she’d been first; she was just very keen.
I’m sorry I didn’t handle this better in the first place. My original comments are here, but to reiterate some of the mistakes I think I made in handling the concerns about Owen:
Some things that are different now, related to the changes that Chana describes:
- The community health team has spent months going through lessons learned both from this situation and from other cases we’ve handled. B
... (read more)
Personal feelings (which I don't imply are true or actionable)
I am annoyed and sad.
I want to feel like I can trust that the leaders of this community are playing by a set of agreed rules. E.g. I want to hear from them. And half of me trusts them, and half feels I should take an outside view that leaders often seek to protect their own power. The disagreement between these parts causes hurt and frustration.
I also variously feel hurt, sad, afraid, compromised, betrayed.
I feel ugly that I talk so much about my feelings too. It feels kind of obscene.
I feel sad about saying negative things, especially about Will. I sense he's worked really hard. I feel ungrateful and snide. Yuck.
Object level
~~I don't think this article moves me much~~ This article moves me a bit on a number of important things:
- We have some more colour around the specific warnings that were given
- It becomes much more likely that MacAskill backed Bankman-Fried in the aftermath of the early Alameda disagreements, which was ex-ante dubious and ex-post disastrous. The comment about threatening Mac Aulay is very concerning.
- I update a bit that Sam used this support as cover
- I sense that people ought to take the accusations of inappropri
... (read more)
It was, and we explicitly said that it was at the time. Many of those of us who left have a ton of experience in startups, and the persistent idea that this was a typical “founder squabble” is wrong, and to be honest, getting really tiresome to hear. This was not a normal startup, and these were not normal startup problems.
(Appreciate the words of support for my honesty, thank you!)
fwiw I will probably post something in the next ~week (though I'm not sure if I'm one of the people you are waiting to hear from).
Here's my tentative take:
So, while Will should be removed, Nick has demonstrated competence and should stay on.
(Meta note: I feel frustrated about the lack of distinction between Nick and Will on this questi... (read more)
Thanks for making the case. I'm not qualified to say how good a Board member Nick is, but want to pick up on something you said which is widely believed and which I'm highly confident is false.
Namely - it isn't hard to find competent Board members. There are literally thousands of them out there, and charities outside EA appoint thousands of qualified, diligent Board members every year. I've recruited ~20 very good Board members in my career and have never run an open process that didn't find at least some qualified, diligent people, who did a good job.
EA makes it hard because it's weirdly resistant to looking outside a very small group of people, usually high status core EAs. This seems to me like one of those unfortunate examples of EA exceptionalism, where EA thinks its process for finding Board members needs to be sui generis. EA makes Board recruitment hard for itself by prioritising 'alignment' (which usually means high status core EAs) over competence, sometimes with very bad results (e.g. ending up with a Board that has a lot of philosophers and no lawyers/accountants/governance experts).
It also sometimes sounds like EA orgs think their Boards have higher entry requirements... (read more)
Thank you for sharing. In particular, I find your mention of shame vs edginess interesting. But I expect that at least one person reading your story will think "Uh, sounds like you need more shame, dude, not less," so I'd like to share a perspective for any such readers:
If I understand Owen correctly, I'll say that I relate in that I also have had some brazen periods of life, prompted by a sort of cultural rebirth and sex-positive idealism. An outsider might have labelled these brazen periods as a swinging of the pendulum in response to my strict religious upbringing, but that isn't quite right. It's hard to notice how it is related to shame, but in my case:
For a very shame-prone or shame-trained person, it can be very difficult to parse out "What is the actual harm here? What are the actual bad acts and why, when I know that most of these things I'm programmed to feel shame about simply are not wrong or shame-worthy?" This can lead to a sort of idealistically-motivated throwing out of all feelings that look like shame. Anxiety, hesitance, guilt, and self-criticality are examples of possibly-adaptive-feelings that can be mistakenly thrown out here. This, I think, can lead to soci... (read more)
Julia - thanks for a helpful update.
As someone who's dealt with journalists & interviews for over 25 years, I would just add: if you do talk to any journalists for any reason, (1) be very clear up front about whether the interview is 'on the record', 'off the record', 'background', or 'deep background'; (2) ask for 'quote approval', i.e. you as the interviewee having final approval over any quotes attributed to you; (3) possibly ask for overall pre-publication approval of the whole piece, so its contents, tone, and approach are aligned with yours. (Most journalists will refuse 2 and 3, which reminds you they are not your friends or allies; they are seeking to produce content that will attract clicks, eyeballs, and advertisers.)
Also, record the interview on your end, using recording software, so you can later prove (if necessary, in court), that you were quoted accurately or inaccurately.
If you're not willing to take all these steps to protect yourself, your organization, and your movement, DO NOT DO THE INTERVIEW.
This piece is a useful resource about these terms and concepts.
I generally directionally agree with Eli Nathan and Habryka's responses. I also weak-downvoted this post (though felt borderline about that), for two reasons.
(1) I would have preferred a post that tried harder to even-handedly discuss and weigh up upsides and downsides, whereas this mostly highlighted upsides of expansion, and (2) I think it's generally easier to publicly call for increased inclusivity than to publicly defend greater selectivity (the former will generally structurally have more advocates and defenders). In that context I feel worse about (1) and wish Scott had handled that asymmetry better.
But I wouldn't have downvoted if this had been written by someone new to the community; I hold Scott to a higher standard, and I'm pretty uncertain about the right policy with respect to voting differently in response to the same content on that basis.
I agree with Scott Alexander that when talking with most non-EA people, an X risk framework is more attention-grabbing, emotionally vivid, and urgency-inducing, partly due to negativity bias, and partly due to the familiarity of major anthropogenic X risks as portrayed in popular science fiction movies & TV series.
However, for people who already understand the huge importance of minimizing X risk, there's a risk of burnout, pessimism, fatalism, and paralysis, which can be alleviated by longtermism and more positive visions of desirable futures. This is especially important when current events seem all doom'n'gloom, when we might ask ourselves 'what about humanity is really worth saving?' or 'why should we really care about the long-term future, if it'll just be a bunch of self-replicating galaxy-colonizing AI drones that are no more similar to us than we are to late Permian proto-mammal cynodonts?'
In other words, we in EA need longtermism to stay cheerful, hopeful, and inspired about why we're so keen to minimize X risks and global catastrophic risks.
But we also need longtermism to broaden our appeal to the full range of personality types, political views, and religious views ... (read more)
Here's a crazy idea. I haven't run it by any EAIF people yet.
I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can’t remember exactly whose idea it was.)
Basic structure:
What books are on topic: Anything of interest to people who want to have a massive altruistic impact on the world. More specifically:
- Things directly related to traditional EA topics
- Things about the world more generally. Eg macrohistory, how do governments work, The Doomsday Machine, history of science (eg Asimov’s “A Short History of Chemistry”)
- I think that b
... (read more)
I don't have any arguments over cancel culture or anything general like that, but I am a bit bothered by a view that you and others seem to have. I don't consider Robin Hanson an "intellectual ally" of the EA movement; I've never seen him publicly praise it or make public donation decisions, but he has claimed that do-gooding is controlling and dangerous, that altruism is all signaling with selfish motivations, that we should just save our money and wait for some unspecified future date to give it away, and that poor faraway people are less likely to exist according to simulation theory so we should be less inclined to help them. On top of that he made some pretty uncharitable statements about EA Munich and CEA after this affair. And some of his pursuits suggest that he doesn't care if he turns himself into a super controversial figure who brings negative attention towards EA by association. These things can be understandable on their own, you can rationalize each one, but when you put it all together it paints a picture of someone who basically doesn't care about EA at all. It just happens to be the case that he was big in the rationalist blogosphere and lots of EAs (includi... (read more)
I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I’m yet to see a robust case for it being an effective use of charitable funds (if there is, please point me to it). I’m worried we are not being objective enough and trying to find the facts that support the conclusion rather than the other way round.
@EV US Board @EV UK Board could you include Owen's response document somewhere in the post? It contains a lot of important information and it's getting lost in the comments.
Hey Aella, I appreciate you telling your story. I’m really sorry that you’ve experienced people lying about you, and making harmful assumptions about your intent. That really really sucks.
I’ve put more information about most (not all) of the Community Health team’s understanding of the TIME cases in this comment:
https://forum.effectivealtruism.org/posts/JCyX29F77Jak5gbwq/ea-sexual-harassment-and-abuse?commentId=jKJ4kLq8e6RZtTe2P
It might clarify some of your questions about individual cases.
Here are some excerpts from Sequoia Capital's profile on SBF (published September 2022, now pulled).
On career choice:
... (read more)
As a moderator, I think the phrase "seems otherwise unintelligent" is clearly not generous or collaborative and breaks Forum norms. This is a warning; please don't insult other users.
As a somewhat separate point: fwiw, I'm a woman and I've not experienced this general toxicity in EA myself. Obviously I am not challenging your experience - there are lots of EA sub-communities and it makes sense that some could be awful, others fine. But it's worth adding this nuance, I think (e.g., from what I've heard, Bay Area EA circles are particularly incestuous wrt work/life overlap stuff).
The discussion on this post is getting heated, so we'd like to remind everyone of the Forum norms. Chiefly:
If you don’t think you can respect these norms consistently in the comments of this post, consider not contributing, and moving on to another post.
We’ll investigate the issues that are brought up to the best of our ability. We’d like to remind readers that a lot of this is speculation.
I'm going to be boring/annoying here and say some things that I think are fairly likely to be correct but may be undersaid in the other comments:
Most people on average are reasonably well-calibrated about how smart they are. (To be clear, exceptions certainly exist.) EDIT: This is false, see Max Daniel's comment.
The Michael Nielsen critique seems thoughtful, constructive, and well-balanced on first read, but I have some serious reservations about the underlying ethos and its implications.
Look, any compelling new world-view that is outside the mainstream cultures' Overton window can be pathologized as an information hazard that makes its believers feel unhappy, inadequate, and even mentally ill by mainstream standards. Nielsen seems to view 'strong EA' as that kind of information hazard, and critiques it as such.
Trouble is, if you understand that most normies are delusional about some important issue, and you develop some genuinely deeper insights into that issue, the psychologically predictable result is some degree of alienation and frustration. This is true for everyone who has a religious conversion experience. It's true for everyone who really takes onboard the implications of any intellectually compelling science -- whether cosmology, evolutionary biology, neuroscience, signaling theory, game theory, behavior genetics, etc. It's true for everyone who learns about any branch of moral philosophy and takes it seriously as a guide to action.
I've seen this over, and over, an... (read more)
This is an excellent post; one slightly subtle point about the political dynamics that I think it misses is the circumstances around BoldPAC's investment in Salinas.
BoldPAC is the superpac for Hispanic House Democrats. It happens to be the case that in the 2022 election cycle there is a Hispanic state legislator (Andrea Salinas) living in a blue-leaning open US House of Representatives seat. It also happens to be the case that given the ups and downs of the political cycle, this is the only viable opportunity to add a Hispanic Democrat to the caucus this year. So just as it's basically happenstance that the EA community got involved in the Oregon 6th as opposed to some other district, it's also happenstance that BoldPAC was deeply invested in this race. It's not a heavily Hispanic area or anything; Salinas just happens to be Latina.
If it was an Anglo state legislator holding down the seat, the "flood the zone with unanswered money" strategy might have worked. And if there were four other promising Hispanic prospects in the 2022 cycle, it also might have worked because BoldPAC might have been persuaded that it wasn't worth going toe-to-toe with Protect Our Future. N... (read more)
Thanks for writing this! One small comment:
I believed this and wanted EV to hire more outside experts. To support my case, I made a spreadsheet of all the major issues EV had run into that I was aware of and whether having non-EA experts helped. To my dismay, the result was pretty equivocal: there were certainly instances where non-EA experts outperformed EAs, but ~as many instances to the contrary.
I don't think EA is unique here; I have half a foot in the startup world and pg's recent Founder Mode post has ignited a bunch of discussion about how startup founders with ~0 experience often outperform the seasoned experts that they hire.
Unfortunately I don't have a good solution here - hiring/contracting good people is just actually a very hard problem. But at least from the issues I am aware of at EV I don't think the correct update was in favor of experience and away from value alignment.[1]
- ^
... (read more)
If I had to come up with advice, I think it would be to note the scare quotes around "value alignment". Someone sincerely trying to do well at the thing they are hired for is very valuable; someone
Ok, I donated 10k
Lessons and updates
The scale of the harm from the fraud committed by Sam Bankman-Fried and the others at FTX and Alameda is difficult to comprehend. Over a million people lost money; dozens of projects’ plans were thrown into disarray because they could not use funding they had received or were promised; the reputational damage to EA has made the good that thousands of honest, morally motivated people are trying to do that much harder. On any reasonable understanding of what happened, what they did was deplorable. I’m horrified by the fact that I was Sam’s entry point into EA.
In these comments, I offer my thoughts, but I don’t claim to be the expert on the lessons we should take from this disaster. Sam and the others harmed me and people and projects I love, more than anyone else has done in my life. I was lied to, extensively, by people I thought were my friends and allies, in a way I’ve found hard to come to terms with. Even though a year and a half has passed, it’s still emotionally raw for me: I’m trying to be objective and dispassionate, but I’m aware that this might hinder me.
There are four categories of lessons and updates:
- Undoing updates made because of FTX
- Appreciating the
... (read more)
Thank you so much for writing so clearly and compellingly about what happened to you and the subculture which encourages treating women like this.
There is no place for such a subculture in EA (or anywhere else).
Great post. I strongly agree with the core point.
Regarding the last section: it'd be an interesting experiment to add a "democratic" community-controlled fund to supplement the existing options. But I wouldn't want to lose the existing EA funds, with their vetted expert grantmakers. I personally trust (and agree with) the "core EAs" more than the "concerned EAs", and would be less inclined to donate to a fund where the latter group had more influence. But by all means, let a thousand flowers bloom -- folks could then direct their donations to the fund that's managed as they think best.
[ETA: Just saw that Jason has already made a similar point.]
What's stunning to me is the following:
Leaking private slack conversations to journalists is a 101 on how to destroy trust. The response to SBF and FTX betrayal shouldn't be to further erode trust within the community.
EA should not have to learn every single group dynamic from first principles - the community might not survive such a thorough testing and re-learning of all the social rules around discretion, trust, and why it's important to have private channels of communication that you can assume will not be leaked to journalists.
If the community ignores trust, networks, and support for one another, then the community will not form, ideas will not be exchanged in earnest, and everyone will be looking over their shoulder wondering who may leak or betray their confidence.
Destroying trust decimates communities - we've all found that with SBF. The response to that shouldn't be fur... (read more)
A lot of liar’s paradox issues with this interview.
The earning to give company I started got acquired.
I love this, haha.
... (read more)
But, as with many things, J.S. Mill did this meme first!!!
In the Houses of Parliament on April 17th, 1866, he gave a speech arguing that we should keep coal in the ground (!!). As part of that speech, he said:
This sounds very right to me.
Another way of putting this argument is that a "global priorities (GP)" community is both more likable and more appropriate than an "effective altruism (EA)" community. More likable because it's less self-congratulatory, arrogant, identity-oriented, and ideologically intense.
More appropriate (or descriptive) because it better focuses on large-scale change, rather than individual action, and ideas rather than individual people or their virtues. I'd also say, more controversially, that when introducing EA ideas, I would be more likely to ask the question: "how ought one to decide what to work on?", or "what are the big probl... (read more)
This seems like a bizarre position to me. Sure, maybe you disagree with them (I personally have a fair amount of respect for the OpenPhil team and their judgement, but whatever, I can see valid reasons to criticise), but to consider their judgement not just irrelevant, but actively such strong negative evidence as to make an org not worth donating to, seems kinda wild. Why do you believe this? Reversed stupidity is not intelligence. Is the implicit model that all of x risk focused AI policy is pushing on some 1D spectrum such that EVERY org in the two camps is actively working against the other camp? That doesn't seem true to me.
I would have a lot more sympathy with an argument that eg other kinds of policy work is comparatively neglected, so OpenPhil funding it is a sign that it's less neglected.
Meta note: I wouldn’t normally write a comment like this. I don’t seriously consider 99.99% of charities when making my donations; why single out one? I’m writing anyway because comments so far are not engaging with my perspective, and I hope more detail can help 80,000 hours themselves and others engage better if they wish to do so. As I note at the end, they may quite reasonably not wish to do so.
For background, I was one of the people interviewed for this report, and in 2014-2018 my wife and I were one of 80,000 hours’ largest donors. In recent years it has not made my shortlist of donation options. The report’s characterisation of them - spending a huge amount while not clearly being >0 on the margin - is fairly close to my own view, though clearly I was not the only person to express it. All views expressed below are my own.
I think it is very clear that 80,000 hours have had a tremendous influence on the EA community. I cannot recall anyone stating otherwise, so references to things like the EA survey are not very relevant. But influence is not impact. I commonly hear two views for why this influence may not translate into positive impact:
- 80,000 hours prioritises AI well a... (read more)
Consider hiring an outside firm to do an independent review.
I don't think this is a healthy way of framing disagreements about cause prioritization. Imagine if a fan of GiveDirectly started complaining about GiveWell's top charities for "redirecting money from the wallets of world's poorest villagers..." Sounds almost like theft! Except, of course, that the "default" implicitly attributed here is purely rhetorical. No cause has any prior claim to the funds. The only question is where best to send them, and this should be determined in a cause neutral way, not picking out any one cause as the privileged "default" that is somehow robbed of its due by any or all competing candidates that receive funding.
Of course, you're free to feel frustrated when others disagree with your priorities. I just think that the rhetorical framing of "redirected" funds is (i) not an accurate way to think about the situation, and (ii) potentially harmful, insofar as it seems apt to feed unwarranted grievances. So I'd encourage folks to try to avoid it.
Thanks for the detailed response.
I agree that we don't want EA to be distinctive just for the sake of it. My view is that many of the elements of EA that make it distinctive have good reasons behind them. I agree that some changes in governance of EA orgs, moving more in the direction of standard organisational governance, would be good, though probably I think they would be quite different to what you propose and certainly wouldn't be 'democratic' in any meaningful sense.
- I don't have much to add to my first point and to the discussion below my comment by Michael PJ. Boiled down, I think the point that Cowen makes stripped of the rhetoric is just that EAs did a bad job on the governance and management of risks involved in working with SBF and FTX, which is very obvious and everyone already agrees with. It simply has no bearing on whether EAs are assessing existential risk correctly, and enormous equivocation on the word 'existential risk' doesn't change that fact.
- Since you don't want diversity essentially along all dimensions, what sort of diversity would you like? You don't want Trump supporters; do you want more Marxists? You apparently don't want more right win
... (read more)
Thanks for writing this, Will. I appreciate the honesty and ambition. Thank you for all you do and I hope you have people around you who love and support you.
I like the framing of judicious ambition. My key question around this and the related longtermism discussion is something like, What is the EA community for?
Are we the democratic body that makes funding decisions? No and I don't want us to be. Doing the most good likely involves decisions that the median EA will disagree with. I would like to trial forecasting funding outcomes and voting systems, but I don't assume that EA should be democratic. The question is what actually does the most good.
Are we a body of talented professionals who work on lower wages than they otherwise would? Yes, but I think we are more than that. Fundamentally it's our work that is undervalued, rather than us. Animals, the global poor and future generations cannot pay to save their own lives, so we won't be properly remunerated, except by the joy we take from doing it.
Are we community support for one another? Yes, and I think in regard to this dramatic shift in EA's fortunes that... (read more)
Hi, I run the 80,000 Hours job board, thanks for writing this out!
I agree that OpenAI has demonstrated a significant level of manipulativeness and have lost confidence in them prioritizing existential safety work. However, we don’t conceptualize the board as endorsing organisations. The point of the board is to give job-seekers access to opportunities where they can contribute to solving our top problems or build career capital to do so (as we write in our FAQ). Sometimes these roles are at organisations whose mission I disagree with, because the role nonetheless seems like an opportunity to do good work on a key problem.
For OpenAI in particular, we’ve tightened up our listings since the news stories a month ago, and are now only posting infosec roles and direct safety work – a small percentage of jobs they advertise. See here for the OAI roles we currently list. We used to list roles that seemed more tangentially safety-related, but because of our reduced confidence in OpenAI, we limited the listings further to only roles that are very directly on safety or security work. I still expect these roles to be good opportunities to do important work. Two live examples:
- In
... (read more)
Nod, thanks for the reply.
I won't argue more for removing infosec roles at the moment. As noted in the post, I think this is at least a reasonable position to hold. I (weakly) disagree, but for reasons that don't seem worth getting into here.
The things I'd argue here:
- Safetywashing is actually pretty bad, for the world's epistemics and for EA and AI safety's collective epistemics. I think it also warps the epistemics of the people taking the job, so while they might be getting some career experience... they're also likely getting a distorted view of what AI safety is, and becoming worse researchers than they would otherwise.
- As previously stated – it's not that I don't think anyone should take these jobs, but I think the sort of person who should take them is someone who has a higher degree of context and skill than I expect the 80k job board to filter for.
- Even if you disagree with those points, you should have some kind of crux for "what would distinguish an 'impactful AI safety job?'" vs a fake safety-washed role. It should be at least possible for OpenAI to make a role so clearly fake that you notice and stop listing it.
- If you're set on continuing to list Ope
... (read more)
This post feels structurally misleading to me. You spend most of it diving into reasonably common but useful technical critiques before, in the final paragraphs, shifting abruptly to what appears to be the substance of your dispute: that you would prefer to exclude some people from events connected to it. In contrast to your data-driven, numbers-heavy analysis of its predictive power, you assert in brief and without evidence that "neoliberal ideology" and the participation of people you consider bigots has meaningfully reduced its accuracy as a market.
I think both topics in isolation are worth discussing, and perhaps there would be a productive way to combine the technical and cultural critique, but a response to your first fifteen paragraphs looks dramatically different to a response to your last two paragraphs, such that combining the two clouds more than it elucidates.
EV US has made a court motion to settle with the FTX estate for 100% of the funds received in 2022 for a total of $22.5M. See this public docket for the details: https://restructuring.ra.kroll.com/FTX/Home-DocketInfo (Memo number 3745).
My guess is Open Phil is covering this amount. Seems very relevant to anyone who is exposed to FTX clawback risk, or wants to understand what is going on with FTX things.
Thanks for all of the hard work on this, Howie (and presumably many others), over the last few months and (presumably) in the coming months
I felt a lot of this when I was first getting involved in effective altruism. Two of the things that I think are most important and valuable in the EA mindset -- being aware of tradeoffs, and having an acute sense of how much needs to get done in the world and how much is being lost for a lack of resources to fix it -- can also make for a pretty intense flavor of guilt and obligation. These days I think of these core elements of an EA mindset as being pieces of mental technology that really would ideally be installed gradually alongside other pieces of mental technology which can support them and mitigate their worst effects and make them part of a full and flourishing life.
Those other pieces of technology, at least for me, are something like:
- a conviction that I should, in fact, be aspiring to a full and flourishing life; that any plan which doesn't viscerally feel like it'll be a good, satisfying, aspirational life to lead is not ultimately a viable plan; that I may find sources of strength and flourishing outside where I imagined, and that it'd be fine if I have to be creative or look harder to find them, but that I cannot and will not make life plans that don't entail having a good
... (read more)
In addition to having a lot more on the line, other reasons to expect better of ourselves:
Because of the second point, many professional investors do surprisingly little vetting. For example, SoftBank is pretty widely reputed to be "dumb money;" IIRC they shook hands on huge investments in Uber and WeWork on the basis of a single meeting, and their flagship Vision Fund lost 8% (~$8b) this past quarter alone. I don't know about OTPP but I imagine they could be similarly diligence-light given their relatively short history as a venture investor. Sequoia is less famously dumb than those two, but still may not have done much vetting if FTX was perceived to be a "hot" deal with lots of time pressure.
This is from a couple months ago: in large part due to the advocacy of New York kidney donors in the EA community, this bill, which will reimburse kidney donors and may save around 100 lives a year, passed the NY state assembly. It still needs to be signed into law by the governor, but it's very likely to be, and EAs are already on the ball lobbying for its passage!
I was at Manifest as a volunteer, and I also saw much of the same behaviour as you. If I had known scientific racism or eugenics were acceptable topics of conversation there, I wouldn’t have gone. I’m increasingly glad I decided not to organise a talk.
EA needs to recognise that even associating with scientific racists and eugenicists turns away many of the kinds of bright, kind, ambitious people the movement needs. I am exhausted at having to tell people I am an EA ‘but not one of those ones’. If the movement truly values diversity of views, we should value the people we’re turning away just as much.
Edit: David Thorstad levelled a very good criticism of this comment, which I fully endorse & agree with. I did write this strategically to be persuasive in the forum context, at the cost of expressing my stronger beliefs that scientific racism & eugenics are factually & morally wrong over and above just being reputational or strategic concerns for EA.
You can now import posts directly from Google docs
Plus, internal links to headers[1] will now be mapped over correctly. To import a doc, make sure it is public or shared with "eaforum.posts@gmail.com"[2], then use the widget on the new/edit post page:
Importing a doc will create a new (permanently saved) version of the post, but will not publish it, so it's safe to import updates into posts that are already published. You will need to click the "Publish Changes" button to update the live post.
Everything that previously worked on copy-paste[3] will also work when importing, with the addition of internal links to headers (which only work when importing).
There are still a few things that are known not to work:
- Nested bullet points (these are working now)
- Cropped images get uncropped (also working now)
There might be other issues that we don't know about. Please report any bugs or give any other feedback by replying to this quick take; you can also contact us in the usual ways.
Appendix: Version history
There are some minor impr... (read more)
I’m very sorry that you had such a bad experience here. Whilst I would disagree with some of the details here I do think that our communication was worse than I would have liked and I am very sorry for any hardship that you experienced. It sounds like a stressful process which could have been made much better if we had communicated more often and more quickly.
In my last email (March 4th), I said that we were exploring making this grant, but it’s legally challenging. Grants for mental health support are complicated, in general, as we have to show that there is a pure public benefit. We have an open thread with our legal counsel, and I’m cautiously optimistic about getting a decision on this relatively soon.
In general, I don’t think I made promises or hard commitments to get back in a certain time frame; instead, I said that we aim to get back by a certain time. I believe I am at fault for not making this distinction appropriately clear, and I am upset that this mismatch of expectations resulted in hardship.
Edit: As I said above, from my perspective, this account doesn't accurately depict EAIF's interaction with Igor. We did actually reject this application, but I did say that I was ... (read more)
All views are my own rather than those of any organizations/groups that I’m affiliated with. Trying to share my current views relatively bluntly. Note that I am often cynical about things I’m involved in. Thanks to Adam Binks for feedback.
Edit: See also child comment for clarifications/updates.
Edit 2: I think the grantmaking program has different scope than I was expecting; see this comment by Benjamin for more.
Following some of the skeptical comments here, I figured it might be useful to quickly write up some personal takes on forecasting’s promise and what subareas I’m most excited about (where “forecasting” (edit: is defined as things in the vein of "Tetlockian superforecasting" or general prediction markets/platforms, in which questions are often answered by lots of people spending a little time on them, without much incentive to provide deep rationales) ~~is defined as things I would expect to be in the scope of OpenPhil’s program to fund~~).
- Overall, most forecasting grants that OP has made seem much lower EV than the AI safety grants (I’m not counting grants that seem more AI-y than forecasting-y, e.g. Epoch, and I believe these wouldn’t be covered by the new grantmak
... (read more)
Thanks for this update! Two questions…
I can see where Ollie's coming from, frankly. You keep referring to these hundreds of pages of evidence, but it seems very likely you would have been better off just posting a few screenshots of the text messages that contradict some of the most egregious claims months ago. The hypothesising about "what went wrong", the photos, the retaliation section, the guilt-tripping about focusing on this, etc. - these all undermine the discussion about the actual facts by (1) diluting the relevant evidence and (2) making this entire post bizarre and unsettling.
For the most part, an initial reading of this post and the linked documents did have the intended effect on me of making me view many of the original claims as likely false or significantly exaggerated. With that said, my suggestion would have been to remove some sorts of stuff from the post and keep it only in the linked documents or follow-up posts. In particular, I'd say:
- The photos provide a bit of information, but can be viewed as distracting and misleading. I think the value of information they provide is probably sufficient for their inclusion in a linked Google Doc, but including them twice in the post (and once near the top) gives them a lot of salience, and as some of the comments here show, this can cause some readers to switch off or view your post with hostility.
- Some of the alternative hypothesis stuff, and the stuff related to claims about Ben Pace, may also have been better suited to a linked Google Doc -- something that curious readers could dig into, but that was not given a lot of salience for somebody who was just interested in the core claims. I think there's some value to these exercises, but it would muddy the waters less if this were less salient, so that r
... (read more)
I quit trying to have direct impact and took a zero-impact tech job instead.
I expected to have a hard time with this transition, but I found a really good fit position and I'm having a lot of fun.
I'm not sure yet where to donate extra money. Probably MIRI/LTFF/OpenPhil/RethinkPriorities.
I also find myself considering using money to try fixing things in Israel. Or maybe to run away first and take care of things and people that are close to me. I admit, focusing on taking care of myself for a month was (is) nice, and I do feel like I can make a difference with E2G.
(AMA)
My overall impression is that the CEA community health team (CHT from now on) are well intentioned but sometimes understaffed and other times downright incompetent. It's hard for me to be impartial here, and I understand that their failures are more salient to me than their successes. Yet I endorse the need for change, at the very least including 1) removing people from the CHT who serve as advisors to any EA funds or have other conflict-of-interest positions, 2) hiring HR and mental health specialists with credentials, 3) publicly clarifying their role and mandate.
My impression is that the most valuable function that the CHT provides is as support of community building teams across the world, from advising community builders to preventing problematic community builders from receiving support. If this is the case, I think it would be best to rebrand the CHT as a CEA HR department, and for CEA to properly hire the community builders who are now supported as grantees, which one could argue is an employee misclassification.
I would not be comfortable discussing these issues openly out of concern for the people affected, but here are some horror stories:
- A CHT staff member pressured a c
... (read more)
Thank you so much for articulating this in such a thoughtful and considered way! It must have taken a lot of courage to share these difficult experiences, but I'm so glad you did.
Your suggested actions are really helpful, and I would encourage anyone who cares about building a strong community based on altruism to take the time to think on this.
*CW*
As someone who has had a similar experience with a partner I trusted, this paragraph felt incredibly true:
"The realistic tradeoffs as a survivor of sexual harassment or assault often push the survivor to choose an ideal, like justice or safety for others, at the expense of their time, energy, and health. While reeling from the harm of the situation, the person experiencing the harm might engage in a process that hurts them in an effort to ensure their safety, protect other potential victims, educate the perpetrator, or signal that the perpetrator’s actions were harmful."
I spent the weeks following the incident going over the facts in my head, considering his point of view, minimising the experience, wondering if I should have been more direct (anyone who has met me in person will know that's not something I usually have a proble... (read more)
I think this article paints a fairly misleading picture, in a way that's difficult for me to not construe as deliberate.
It doesn't provide dates for most of the incidents it describes, despite the fact that many of them happened many years ago, and thereby seems to imply that all the bad stuff brought up is ongoing. To my knowledge, no MIRI researcher has had a psychotic break in ~a decade. Brent Dill is banned from entering the group house I live in. I was told by a friend that Michael Vassar (the person who followed Sonia Joseph home and slept on her floor despite the fact that it made her uncomfortable, also an alleged perpetrator of sexual assault) is barred from Slate Star Codex meetups.
The article strongly reads to me as if it's saying that these things aren't the case, that the various transgressors didn't face any repercussions and remained esteemed members of the community.
Obviously it's bad that people were assaulted, harassed, and abused at all, regardless of how long ago it happened. It's probably good for people to know that these things happened. But the article seems to assume that all these things are still happening, and it seems to be drawing conclusions on ... (read more)
I am at best 1/1000th as "famous" as the OP, but the first ten paragraphs ring ABSOLUTELY TRUE from my own personal experience, and generic credulousness on the part of people who are willing to entertain ludicrous falsehoods without any sort of skepticism has done me a lot of damage.
I also attest that Aella is, if anything, severely underconveying the extent to which this central thesis is true. It's really, really hard to convey until you've lived that experience yourself. I also don't know how to convey this to people who haven't lived through it. My experience was also of having been warned about it, but not having integrated the warnings or really understood how bad the misrepresentation actually was in practice, until I lived through it.
The annual report suggests there are 45 to 65 statutory inquiries a year; link below (on mobile / lunch break, sorry!). So maybe half, or slightly fewer, seem to end up as public reports.
https://www.gov.uk/government/publications/charity-commission-annual-report-and-accounts-2021-to-2022/charity-commission-annual-report-and-accounts-2021-to-2022
I skimmed the oldest ten very quickly and it looks like four subjects were wound up / dissolved, and four more had trustee-related actions like appointment of new trustees by a Commission-appointed Interim Manager, disqualification from being a trustee, etc. One organization had some poor governance not rising to misconduct/misadministration (but some trustees resigned), one had Official Warnings issued to trustees, one got an action plan.
Pending more careful and complete review, most inquiries that result in public reports do seem to find substantial mismanagement and result in significant regulatory action.
Thanks, I think this post is thoughtfully written. I think that arguments for lower salaries are sometimes quite moralising/moral-purity-based, as opposed to focused on impact. By contrast, you give clear and detached impact-based arguments.
I don't quite agree with the analysis, however.
You seem to equate "value-alignment" with "willingness to work for a lower salary". And you argue that it's important to have value-aligned staff, since they will make better decisions in a range of situations:
... (read more)
It's bugged me for a while that EA has ~13 years of community building efforts but (AFAIK) not much by way of "strong" evidence of the impact of various types of community building / outreach, in particular local/student groups. I'd like to see more by way of baking self-evaluation into the design of community building efforts, and think we'd be in a much better epistemic place if this had been at the forefront of efforts to professionalise community building 5+ years ago.
By "strong" I mean a serious attempt at causal evaluation using experimental or quasi-experimental methods - i.e. not necessarily RCTs where these aren't practical (though it would be great to see some of these where they are!), but some sort of "difference in difference" style analysis, or before-after comparisons. For example, how do groups' key performance stats (e.g. EAs 'produced', donors, money moved, people going on to EA jobs) compare in the year(s) before vs after getting a full/part time salaried group organiser? Possibly some of this already exists either privately or publicly and the relevant people know where to look (I haven't looked hard, sorry!). E.g. I remember GWWC putting together a fu... (read more)
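To make the "before vs after" comparison described above concrete, here is a minimal difference-in-differences sketch with entirely made-up numbers (the groups, years, and pledge counts are hypothetical, not real community data):

```python
# Illustrative difference-in-differences estimate for the effect of hiring a
# salaried group organiser, using made-up numbers. "Treated" groups hired an
# organiser between year 1 and year 2; "control" groups did not.
# Outcome: e.g. number of new pledges per group per year (hypothetical).

treated_before = [4, 6, 5]   # treated groups, year before hiring
treated_after  = [9, 8, 10]  # treated groups, year after hiring
control_before = [5, 4, 6]   # comparison groups, same years
control_after  = [6, 5, 7]

def mean(xs):
    return sum(xs) / len(xs)

# Change over time in each arm
treated_change = mean(treated_after) - mean(treated_before)
control_change = mean(control_after) - mean(control_before)

# Diff-in-diff: the change in treated groups minus the change in controls,
# which nets out trends affecting all groups (e.g. overall EA growth).
did_estimate = treated_change - control_change
print(f"Treated change: {treated_change:.2f}")
print(f"Control change: {control_change:.2f}")
print(f"Difference-in-differences estimate: {did_estimate:.2f} extra pledges/year")
```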
Would really appreciate links to Twitter threads or any other publicly available versions of these conversations. Appreciate you reporting what you’ve seen but I haven’t heard any of these conversations myself.
While I don't follow Hanania or (the social media platform formerly known as) Twitter closely, it seems to me that this kind of ambiguity is strategic. He wants to expand what is acceptable to say publicly, and one way of doing this is to say things which can be read both in a currently-acceptable and a currently-unacceptable way. If challenged on any specific one you just give the acceptable interpretation and apologize for the misunderstanding, but this doesn't do much to diminish the window-pushing effect.
I often think about whether Benjamin Lay would be banned from the EA forum or EA events. It seems to me that the following exchange would have gotten him at least a warning within the context of vegetarianism:
“Benjamin gave no peace” to slave owners, the 19th-century radical Quaker Isaac Hopper recalled hearing as a child. “As sure as any character attempted to speak to the business of the meeting, he would start to his feet and cry out, ‘There’s another negro-master!’”
I can't think of any EAs that take actions similar to the following:
"Benjamin Lay’s neighbors held slaves, despite Lay’s frequent censures and cajoling. One day, he persuaded the neighbors’ 6-year old son to his home and amused him there all day. As evening came, the boy’s parents became extremely concerned. Lay noticed them running around outside in a desperate search, and he innocently inquired about what they were doing. When the parents explained in panic that their son was missing, Lay replied: Your child is safe in my house, and you may now conceive of the sorrow you inflict upon the parents of the negroe girl you hold in slavery, for she was torn from them by avarice. (Swarthmore College Bulletin)"
Thanks for posting this update. I prefer to have it out rather than pending, and I think it’s appropriate that people will get a sense of approximately the scope of what happened. I deeply regret my actions, which were wrong and harmful; I think it’s a fair standard to expect me to have known better; I will of course abide by the restrictions you’re imposing.
I spent a lot of last year working on these issues, and I put up an update in December; that’s still the best place to understand my perspective on things going forwards.
I think that the first-order impression given by these findings is broadly accurate — I did a poor job of navigating feelings of romantic attraction, failed to track others’ experiences, took actions which were misguided and wrong, and hurt people. For most readers that’s probably enough to be going with. Other people might be interested in more granularity, either because they care about the nature of my character flaws and what types of mistakes I might be prone to in the future, or because they care about building detailed pictures of the patterns that cause harm. For this audience I’ve put my takes on the specific findings in this document. My ... (read more)
You touched on something here that I am coming to see as the key issue: whether there should be a justice system within the EA/Rationality community and whether Lightcone can self-appoint into the role of community police. In conversations with people from Lightcone re: the NL posts, I was told that it is wrong to try to guard your reputation because that information belongs to the community to decide. US law on reputation is that you do have a right to protect yourself from lies and misrepresentation. Emerson talking about suing for libel (his right) was seen as defection from the norms which that Lightcone employee thinks should apply to the whole EA/rationality community. When did Emerson opt into following these norms, or into being judged by them? Did any of us? The Lightcone employees also did not like that Kat made a veiled threat to either Chloe or Alice (can't remember) that her reputation in EA could be ruined by NL if she kept saying bad things about them. They saw that as bad not just because it was a threat but because it conspired to hide information from the community. From what I understood, that Lightcone employee thought it would have been okay for Kat to talk shit about... (read more)
We're coming up on two weeks now since this post was published, with no substantive response from Nonlinear (other than this). I think it would be good to get an explicit timeline from Nonlinear on when we can expect to see their promised response. It's reasonable to ask for folks to reserve judgement for a short time, but not indefinitely. @Kat Woods @Emerson Spartz
OP strikes me as hyperbolic in a way that makes me disinclined to trust it.
I can't deny this, in the sense that I don't know that it's false, but OP gives no evidence for this beyond the bare claims. OP doesn't provide any details that people could investigate to verify, and OP writes anonymously on a one-off account, so that people can't check how trustworthy OP has been in the past or on similar topics.
Now, I don't think there's anything wrong with saying things without proof or evidence - and in fact, it wouldn't shock me to hear that there were 30 incidents of rape or prolonged abuse in EA circles in something like a 6-year period (I've had friends tell me of some sexual infractions, and I don't see why I would have heard about all of them) - but I think one should own that they're doing that.
That link shows an anonymous commenter saying that they reported people to CEA community health, and Julia Wise agreeing, thanking that commenter, ... (read more)
Thanks for sharing your experiences here Sam.
Something that I find quite difficult is the fact that all of these things are true, but hard to 'feel' true at the same time:
You're experiencing a bit of #1 and #2 right now. And I think that the huge upsides to that are (a) we have a good shot at doing a lot more good; and (b) EA is less likely to be the pursuit of the already privileged (e.g. those who can afford to fly to a conference in SF or London, or quit their job to pursue something that the world doesn't compensate in line with its value).
I'm glad that access to funding hasn't been a barrier for your pursuit of doing a lot of good.
Regarding #3.
It still stings every time I hear the funding situation talked about as if it's perpetually solved.
I'm glad that a promising AI safety researcher is likely to find the funding they need to switch careers and some of the top projects are able to a... (read more)
I prefer something like "Imagine you're one of the first people to discover that cancer is a problem, or one of the first people to work on climate change seriously and sketch out the important problems for others to work on. There are such problems today, that don't have [millions] of smart people already working on them"
[this allows me to point at the value of being early on a neglected problem without presenting new "strange" such problems. moreover, after this part of the pitch, the other person is more open to hearing a new strange problem, I think]
[disclaimer: acting director of CSER, but writing in personal capacity]. I'd also like to add my strongest endorsement of Carrick - as ASB says, a rare and remarkable combination of intellectual brilliance, drive, and tremendous compassion. It was a privilege to work with him at Oxford for a few years. It would be wonderful to see more people like Carrick succeeding in politics; I believe it would make for a better world.
I like the goal of politically empowering future people. Here's another policy with the same goal:
This seems particularly politically feasible; a philanthropist can unilaterally set this up for a few million dollars of surveys and prediction market subsidies. You could start by running this kind of poll a few times; then opening a prediction market on next year's poll about policy decisions from a few decades ago; then lengthening the time horizon.
(I'd personally expect this to have a larger impact on future-orientation of policy, if we imagine it getting a fraction of the public buy-in that would be required for changing voting weights.)
This post is an impressive feat of honesty and humility
(Just want to say, I really appreciate you sharing your thoughts and being so candid, Dustin. I find it very interesting and insightful to learn more about your perspective.)
>> If they endow a re-granter that funds something weird, they can say "well the whole point of this endowment was to diversify decision-making; it's out of our hands at this point".
I proposed this myself at one point, and the team politely and quite correctly informed me that projecting this response from critics was naive. We are ultimately responsible for the grants downstream of our decisions in the eyes of the world, regardless of who made intermediate decisions.
As an example of how this has played out in practice, we're known (and largely reviled) locally in San Francisco for supporting the controversial DA Chesa Boudin. In fact, I did not even vote for him (and did not vote in the recall), and the only association is the fact that we made a grant in 2019 to a group that supported him in later years, for totally unrelated work. We made a statement to clarify all this, which helped a little on the margin, but did not substantially change the narrative.
How I publicly talked about Sam
Some people have asked questions about how I publicly talked about Sam, on podcasts and elsewhere. Here is a list of all the occasions I could find where I publicly talked about him. Though I had my issues with him, especially his overconfidence, overall I was excited by him. I thought he was set to do a tremendous amount of good for the world, and at the time I felt happy to convey that thought. Of course, knowing what I know now, I hate how badly I misjudged him, and hate that I at all helped improve his reputation.
Some people have claimed that I deliberately misrepresented Sam’s lifestyle. In a number of places, I said that Sam planned to give away 99% of his wealth, and in this post, in the context of discussing why I think honest signalling is good, I said, “I think the fact that Sam Bankman-Fried is a vegan and drives a Corolla is awesome, and totally the right call”. These statements represented what I believed at the time. Sam said, on multiple occasions, that he was planning to give away around 99% of his wealth, and the overall picture I had of him was highly consistent with that, so the Corolla seemed like an honest si... (read more)
I am commenting to encourage everyone to think about the real people at the centre of all of the very ugly accusations being made, which I hope is acceptable to do, even though this comment does not directly address the evidence presented by either Lightcone or Nonlinear.
This is getting a lot of engagement, as did Ben Pace’s previous post, and for the people being discussed, this must be incredibly stressful. No matter how you think events actually played out, the following are true:
a) at least one group of people is having unfair accusations made against them, either of creating abusive working conditions and taking advantage of the naivety of young professionals, or of being delusional and unreliable or malicious. Neither of these are easy to deal with.
b) the situation is ongoing, and there is no clear timeline for when things will be neatly wrapped up and concluded.
Given this, and having read several comments speaking to the overwhelming experience of being dogpiled on the internet, I just want to encourage everyone who is friendly with any of the people at the centre of this, including Alice, Chloe, Kat Woods, Emerson and Drew Spartz, Ben Pace, and Habryka to reach out and make sure they are coping well. The goal here is hopefully to get to the truth and to update community norms, and it is far too easy for individuals to become casualties of this process. A simple ‘how ya doing?’ can make a big difference when people are struggling.
You don't mention the cost of the EA forum, but per this comment, which gives more details, and per your own table, the "online team", of which the EA Forum was a large part, was spending ~$2M per year.
As such I think that your BOTECs are uninformative and might be "hiding the ask":
I look forward to this. In the meantime, readers can see my own take on this here: in short, I think that the value of the forum is high but the ... (read more)
Post on everybody who’s living together and dating each other and giving each other grants when?
Clarification: I’m just kind of surprised to see some of the things in this post portrayed as bad when they are very common in EA orgs, like living together and being open to unconventional and kind of unclear boundaries and pay arrangements and especially conflicts of interest from dating coworkers and bosses. I worry that things we’re all letting slide could be used to discredit anybody if the momentum turns against them.
Whatever its legitimate uses, defamation law is also an extremely useful cudgel that bad actors can, and very frequently do, use to protect their reputations from true accusations. The cost in money, time and risk of going through a defamation trial is such that threats of such can very easily intimidate would-be truth-tellers into silence, especially when the people making the threat have a history of retaliation. Making such threats even when the case for defamation seems highly dubious (as here), should shift us toward believing that we are in the defamation-as-unscrupulous-cudgel world, and update our beliefs about Nonlinear accordingly.
Whether or not we should be shocked epistemically that Nonlinear made such threats here, I claim that we should both condemn and punish them for doing so (within the extent of the law), and create a norm that you don't do that here. I claim this even if Nonlinear's upcoming rebuttal proves to be very convincing.
I don't want a community where we need extremely high burdens of proof to publish true bad things about people. That's bad for everyone (except the offenders), but especially for the vulnerable people who fall prey to the people doing the bad things because they happen not to have access to the relevant rumor mill. It's also toxic to our overall epistemics as a community, as it predictably and dramatically skews the available evidence we have to form opinions about people.
My theory is that while EA/rationalism is not a cult, it contains enough ingredients of a cult that it’s relatively easy for someone to go off and make their own.
Not everyone follows every ingredient, and many of the ingredients are actually correct/good, but here are some examples:
These ingredients do not make EA/rationalism in general a cult, because it lacks enforced conformity and control by a leader. Plenty of people, including myself, have posted on Lesswrong critiquing the sequences and Yudkowsky and been massively upvoted for it. It’s decentralised across the internet, if someo... (read more)
Brief reflections on the Conjecture post and its reception
(Written by the non-technical primary author)
- Reception was a lot more critical than I expected. As last time, many good points were raised that pointed out areas where we weren't clear
- We shared it with reviewers (especially ones who we would expect to disagree with us) hoping to pre-empt these criticisms. They gave useful feedback.
- However, what we didn't realize was that the people engaging with our post in the comments were quite different from our reviewers and didn't share the background knowledge that our reviewers did.
- We included our end-line views (based on previous feedback that we didn't do this enough), and I think it's those views that felt very strong to people.
- It's really, really hard to share the right level of detail and provide adequate context. I think this post managed to be both too short and too long.
- Short: because we didn't make as many explicit comparisons benchmarking research
- Long: we felt we needed to add context on several points that weren't obvious to low-context people.
- When editing a post, it's pretty challenging to figure out what assumptions you can make and what your reader
... (read more)
High Impact Medicine and Probably Good recently produced a report on medical careers that gives more in-depth consideration to clinical careers in low- and middle-income countries. You can check it out here: https://www.highimpactmedicine.org/our-research/medicalcareers
So far I have been running on the policy that I will accept money from people who seem immoral to me, and indeed I preferred getting money from Sam instead of Open Philanthropy or other EA funders because I thought this would leave the other funders with more marginal resources that could be used to better ends (Edit: I also separately thought that FTX Foundation money would come with more freedom for Lightcone to pursue its aims independently, which I do think was a major consideration I don't want to elide).
To be clear, I think there is a reasonable case to be made for the other end of this tradeoff, but I currently still believe that it's OK for EAs to take money from people whose values or virtues they think are bad (and that indeed this is often better than taking money from the people who share your values and virtues, as long as it's openly and willingly given). I think the actual tradeoffs are messy, and indeed I ended up encouraging us to go with a different funder for a loan arrangement for a property purchase we ended up making, since that kind of long-term relationship seemed much worse to me, and I was more worried about that entangling us more with FTX.
To b... (read more)
For what it's worth, as someone saying in another thread that I do think there were concerns about Sam's honesty circulating, I don't know of anyone I have ever talked to who expressed concern about the money being held primarily in FTT, or who would have predicted anything close to the hole in the balance sheet that we now see.
I heard people say that we should assume that Sam's net-wealth has high-variance, given that crypto is a crazy industry, but I think you are overstating the degree to which people were aware of the incredible leverage in FTX's net-position (if I understand the situation correctly there was also no way of knowing that before Alameda's balance sheet leaked a week ago. If you had asked me what Alameda's portfolio consists of, I would have confidently given you a much more diversified answer than "50% FTT with more than 50% liabilities").
Suppose there was no existing nonprofit sector, or perhaps that everyone who worked there was an unpaid volunteer, so the only comparison was to the private sector. Do you think that the optimal level of compensation would differ significantly in this world?
In general I'm skeptical that the existence of a poorly paid fairly dysfunctional group of organisations should inspire us to copy them, rather than the much larger group of very effective orgs who do practice competitive, merit based compensation.
The reason we have a deontological taboo against “let’s commit atrocities for a brighter tomorrow” is not that people have repeatedly done this, it worked exactly like they said it would, and millions of people received better lives in exchange for thousands of people dying unpleasant deaths exactly as promised.
The reason we have this deontological taboo is that the atrocities almost never work to produce the promised benefits. Period. That’s it. That’s why we normatively should have a taboo like that.
(And as always in a case like that, we have historical exceptions that people don’t like to talk about because they worked, eg, Knut Haukelid, or the American Revolution. And these examples are distinguished among other factors by a found mood (the opposite of a missing mood) which doesn’t happily jump on the controversial wagon for controversy points, nor gain power and benefit from the atrocity; but quietly and regretfully kills the innocent night watchman who helped you, to prevent the much much larger issue of Nazis getting nuclear weapons.)
This logic applies without any obvious changes to “let’s commit atrocities in pursuit of a brighter tomorrow a million years away” just li... (read more)
Thanks for making this podcast feed! I have a few comments about what you said here:
I think if you are going to call this feed "Effective Altruism: An Introduction", it doesn't make sense to skew the selection towards longtermism so heavily. Maybe you should have phrased the feed as "An Introduction to Effective Altruism & Longtermism" given the current list of episodes.
In particular, I think it would be better if the Lewis Bollard episode was... (read more)
I’ve recently been thinking about medieval alchemy as a metaphor for longtermist EA.
I think there’s a sense in which it was an extremely reasonable choice to study alchemy. The basic hope of alchemy was that by fiddling around in various ways with substances you had, you’d be able to turn them into other things which had various helpful properties. It would be a really big deal if humans were able to do this.
And it seems a priori pretty reasonable to expect that humanity could get way better at manipulating substances, because there was an established history of people figuring out ways that you could do useful things by fiddling around with substances in weird ways, for example metallurgy or glassmaking, and we have lots of examples of materials having different and useful properties. If you had been particularly forward thinking, you might even have noted that it seems plausible that we’ll eventually be able to do the full range of manipulations of materials that life is able to do.
So I think that alchemists deserve a lot of points for spotting a really big and important consideration about the future. (I actually have no idea if any alchemists were thinking about it this way; th... (read more)
My main issue with the paper is that it treats existential risk policy as the result of a global collective utility-maximizing decision based on people's tradeoffs between consumption and danger. But that is assuming away approximately all of the problem.
If we extend that framework to determine how much society would spend on detonating nuclear bombs in war, the amount would be zero and there would be no nuclear arsenals. The world would have undertaken adequate investments in surveillance, PPE, research, and other capacities in response to data about previous coronaviruses such as SARS to stop COVID-19 in its tracks. Renewable energy research funding would be vastly higher than it is today, as would AI technical safety. As advanced AI developments brought AI catastrophic risks closer, there would be no competitive pressures to take risks with global externalities in development either by firms or nation-states.
Externalities massively reduce the returns to risk reduction, with even the largest nation-states being only a small fraction of the world, individual politicians much more concerned with their term of office and individual careers than national-level outcomes, and ind... (read more)
> My plan was then to invite & highlight folks who could balance this out
I think this is basically a misconception of how the social dynamics at play work. People aren't worried about the relative number of "racists", they're worried about the absolute number. The primary concern is not that they will be exposed to racism at the conference itself, but rather that attending a conference together will be taken as a signal of support for the racists, saying that they are welcome in the community.
To pick Hanania as an example, since he has the most clearly documented history of racist statements, I have peers who would absolutely see me choosing to attend the same conference as him as a sign that I don't think he's too bad. And if I know that expectation and chose to go anyway, there would be additional merit to that reading.
To an extent, the more that Manifest is focused on discussions of prediction, the more leeway there is to invite controversial speakers. You can make a case for ignoring views that are not relevant to the topic at hand. But as Saul says in his other post "although manifest is nominally about prediction markets, it's also about all the ideas that folks who like prediction markets are also into — betting, philosophy, mechanism design, writing, etc". In other words, it's about forming a broader intellectual community. And people are obviously going to be uncomfortable identifying with an intellectual community that includes people that they, and the broader world, consider to be racist.
And even if it were possible to "balance out", the examples given don't exactly fill me with confidence this was given serious consideration. Someone known primarily[1] for being an angry culture warrior like Hanania isn't "balanced out" by the presence of "gracious" longtermists who are unlikely to have written anything racist;[2] he'd be "balanced out" by getting a culture warrior from the other side, whether in open debate or purely speaking about markets but making it clear the organizers definitely weren't endorsing a particular side...
- ^ The Guardian may not always capture the nuance, but there's a difference between inviting someone known primarily for his controversial views who incidentally also favours prediction markets and inviting, say, notable prediction market proponent Robin Hanson, who incidentally also said questionable things in the past.
- ^ Indeed, if I wanted to organize a conference with the explicit purpose of covertly promoting fringe views to a largely unrelated audience (which I don't think was actually the case here, FWIW), this is exactly how I'd stack the speakers for faux balance: a few people on my side to insinuate the fringe views and a bunch of harml... (read more)
You can read more about how the project came together on our blog. Adding a specific section below that might be of interest:
Working with the most watched person on Earth will help us reach more people in need
Beast Philanthropy videos are typically seen by 20-40 million people and dubbed into over a dozen languages to improve accessibility. We expect this will help us reach more families in need. Here’s why:
Partnering with content creators means large, new audiences learn about direct cash
You may support direct cash giving, but most people still do not. GiveDirectly recently ran a survey of potential donors, and found only 13% of respondents had heard of us. Direct cash was their least favored way to help people in extreme poverty.
Clearly more people need to learn about the impact of our work. While we’re good at our main job of delivering cash to the most vulnerable families in the world, we’re not as good at reaching large audiences from our own channels and platforms –– few nonprofits are. Press and content creators are very good at it.
Beast Philanthropy’s video dispels common concerns about direct cash
In that same survey, most respondents said giving $1,... (read more)
(I work for EA Funds, including EAIF, helping out with public communications among other work. I'm not a grantmaker on EAIF and I'm not responsible for any decision on any specific EAIF grant).
Hi. Thanks for writing this. I appreciate you putting the work in this, even though I strongly disagree with the framing of most of the doc that I feel informed enough to opine on, as well as most of the object-level.
Ultimately, I think the parts of your report about EA Funds are mostly incorrect or substantively misleading, given the best information I have available. But I think it’s possible I’m misunderstanding your position or I don’t have enough context. So please read the following as my own best understanding of the situation, which can definitely be wrong. But first, onto the positives:
- I appreciate that the critical points in the doc are made as technical critiques, rather than paradigmatic ones. Technical critiques are ones that people are actually compelled to respond to, and can actually compel action (rather than just making people feel vaguely bad/smug without compelling any change).
- The report has many numerical/quantitative details. In theory, those are easier
... (read more)
As a person with an autism (at the time "Asperger's") diagnosis from childhood, I think this is very tricky territory. I agree that autistics are almost certainly more likely to make innocent-but-harmful mistakes in this context. But I'm a bit worried about overcorrection for that for a few reasons:
Firstly, men in general (and presumably women to some degree also), autistic or otherwise are already incredibly good at self-deception about the actions they take to get sex (source: basic commonsense). So giving a particular subset of us more of an excuse to think "I didn't realize I would upset her", when the actual facts are more "I did know there was a significant risk, but I couldn't resist because I really wanted to have sex with her", seems a bit fraught. I think this is different from the sort of predatory, unrepentant narcissism that Jonas Vollmer says we shouldn't ascribe to Owen: it's a kind of self-deception perfectly compatible with genuine guilt at your own bad behavior and certainly with being a kind and nice person overall. I actually think the feminism-associated* meme about sexual bad behavior being always really about misogyny or dominance can sometimes obscure ... (read more)
My basic takeaway from all of this is not who is right/wrong so much as that EA professional organisations should act more like professional organisations. While it may be temporarily less enjoyable, I would expect that, overall, the organisations with things like HR professionals, safeguarding policies, regular working hours, offices in normal cities and work/life boundaries would be significantly more effective contributors to EA.
I'm less interested in debating “whether a person in a villa in a tropical paradise got a vegan burger delivered fast enough”, “whether it's appropriate for your boss to ask you to pick up their ADHD medication from a Mexican pharmacy”, or “if $58,000 of all-inclusive world travel plus a $1,000-a-month stipend is a $70,000 salary” than in interrogating whether EA wouldn't be better off with more “boring” organisations led by adults with significant professional experience managing others, where the big company drama is the quality of the coffee machine in the office canteen.
Hey! I work at 80k doing outreach.
Thanks for your work here!
I think the data from 80k overall tells a bit of a different story.
Here’s a copy of our programmes’ lead metrics, and our main funnel metrics (more detailed).
As you can see, some metrics take a dip in Q1 and Q2 2023: site visitors & engagement time, new newsletter subscribers, podcast listening time, and applications to advising.
I’d like to say four things about that data:
- It seems pretty plausible to me that lower interest in EA due to the FTX crash is one (important) factor driving those metrics that took a dip. That said:
- All of those seem to have “bounced back” in Q3
- Our website (and to some extent podcast) metrics are very heavily driven by how much outreach & marketing we do. In Q4 2022, we spent very little on marketing compared to the rest of 2022 & 2023, which I think is a significant contributor to the trend.
- It looks like the second half of 2022 was just an unusually high-growth period (for 80k, and I think EA more broadly), and falling from that peak is not particularly surprising due to regression to the mean. Maintaining a level of growth that high might have been p
... (read more)
As one of the people Ben interviewed:
Thank you to the CAIS team (and other colleagues) for putting this together. This is such a valuable contribution to the broader AI risk discourse.
I have seen confidentiality requests weaponized many times (indeed, it is one of the most common ways I've seen people end up in abusive situations), and as such I desperately don't want us to have a norm of always erring on the side of confidentiality and heavily punishing people who didn't even receive a direct request for confidentiality but are just sharing information they could figure out from publicly available information.
The first statement would be viewed positively by most, the second would get a raised eyebrow and a "And what of it?", the third is on thin fucking ice, and the fourth is utterly unspeakable.
2-4 aren't all that different in terms of fact-statements, except that IQ ≠ intelligence, so some accuracy is lost moving to the last. It's just that the first makes it clear which side the speaker is on, the second states an empirical claim, and the next two look like they're... attacking black people, I think?
I would consider the fourth a harmful gloss - but it doesn't state that there is a genetic component to IQ, that's only in the reader's eye. This makes sense in the context of Bostrom posing outrageous but Arguably Technically True things to inflame the reader's eye.
I think people woul... (read more)
In all seriousness, I hope he is on some sort of suicide watch. If anyone in his orbit is reading this, you need to keep an eye on him or have his dad or whoever keep an eye on him.
Brenton Mayer runs internal systems at 80k. That basically means operations and impact evaluation, ie the parts that don't really get visibility or interact with the outside world. He's been doing that extremely competently for years. He and his team make it feel easier to work to a high standard (eg through making sure we get more of a sense of how we're impacting users and setting an ambitious but sustainable culture), keep the lights on (figuratively by fundraising and literally) and make 80k a lovely place to work.
The Parable of the Talents, especially the part starting at:
Might prove reassuring. Yes, EA has lots of very smart people, but those people exist in an ecosystem which almost everyone can contribute to. People do and should give kudos to those who do the object level work required to keep the attention of the geniuses on the parts of the problems which need them.
As some examples of helpful things available to you:
I downvoted because it called the communication hostile without any justification for that claim. The comment it is replying to doesn't seem at all hostile to me, and asserting it is, feels like it's violating some pretty important norms about not escalating conflict and engaging with people charitably.
I also think I disagree that orgs should never be punished for not wanting to engage in any sort of online discussion. We have shared resources to coordinate, and as a social network without clear boundaries, it is unclear how to make progress on many of the disputes over those resources without any kind of public discussion. I do think we should be really careful to not end up in a state where you have to constantly monitor all online activity related to your org, but if the accusations are substantial enough, and the stakes high enough, I think it's pretty important for people to make themselves available for communication.
Importantly, the above also doesn't highlight any non-public communication channels that people who are worried about the negative effects of ACE can use instead. The above is not saying "we are worried about this conversation being difficult to have in pub... (read more)
Thanks for raising this question about EA's growth, though I fully agree it would have been better to frame that question more like: “Given that we're pouring a substantial amount of money into EA community growth, why doesn't it show up in some of these metrics?" To that end, while I may refer to “growing” or “not growing” below for brevity I mean those terms relative to expectations rather than in an absolute sense. With that caveat out of the way…
There’s a very telling commonality about almost all the possible explanations that have been offered so far. Aside from a fraction of one comment, none of the explanations in the OP or this followup post even entertain the possibility that any mistakes by people/organizations in the EA community inhibited growth. That seems worthy of a closer look. We expect an influx of new funding (ca. ~2016-7) to translate into growth (with some degree of lag), but only if it is deployed in effective strategies that are executed well. If we see funding but not growth, why not look at which strategies were funded and how well they were executed?
CEA is probably the most straightforward example to look at, as an organization that has run a lot of ... (read more)
I think this is the wrong question.
The point of lockdown is that for many people it is individually rational to break the lockdown - you can see your family, go to work, or have a small wedding ceremony with little risk and large benefits - but this imposes external costs on other people. As more and more people break lockdown, these costs get higher and higher, so we need a way to persuade people to stay inside - to make them consider not only the risks to themselves, but also the risks they are imposing on other people. We solve this with a combination of social stigma and legal sanctions.
The issue is exactly the same with ideologies. To environmentalists, preventing climate change is more important than covid. To pro-life people, preventing over half a million innocent deaths every year is more important than covid. To animal rights activists, ending factory farming is more important than covid. To anti-lockdown activists, preventing mass business failure and a depression is more important than covid. But collectively we are all better off if everyone stops holding protests for now.
The correct question is "is it good if I, and everyone else who thinks their reason is as good as I think this one is, breaks the lockdown?" Failure to consider this, as it appears most people have, is to grossly privilege this one cause over others and defect in this iterated prisoner's dilemma - and the tragic consequence will be many deaths.
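A toy payoff matrix may make this structure explicit (the numbers are purely illustrative and not from the comment above): call holding your protest "defect" and staying home "cooperate", for two groups who each believe their cause justifies an exception.
| | Other group stays home | Other group protests |
|---|---|---|
| You stay home | 2, 2 | 0, 3 |
| You protest | 3, 0 | 1, 1 |
Protesting is better for each group whatever the other does (3 > 2 and 1 > 0), yet mutual protesting (1, 1) leaves everyone worse off than mutual restraint (2, 2) - which is the sense in which "my cause is worth it" reasoning defects in the iterated dilemma.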
Thanks for this post Will, it's good to see some discussion of this topic. Beyond our previous discussions, I'll add a few comments below.
I'd like to flag that I would really like to see a more elegant term than 'hingeyness' become standard for referring to the ease of influence in different periods.
I would dispute this. Possibilities of AGI and global disaster were discussed by pioneers like Turing, von Neumann, Good, Minsky and others from the founding of the field of AI.
The possibility of engineered plagues causing an apocalypse was a grave concern of forward thinking people in the early 20th century as biological weapons were developed and demonstrated. Many of the anti-nuclear scientists concerned for the global prospects of humanity were also concerned about germ warfare.
Both of the above also had prominent fictional portrayals to come to mind for longtermist altruists engaging in a wide-ranging search. If there had been a longtermist altruist movement trying to c... (read more)
The problem (for people like me, and may those who enjoy it keep doing so), as I see it: this is an elite community. Which is to say, this is a community primarily shaped by people who are and have always been extremely ambitious, who tend to have very strong pedigrees, and who are socialized with the norms of the global upper/top professional class.
"Hey you could go work for Google as a machine learning specialist" sounds to a person like me sort of like "Hey you could go be an astronaut." Sure, I guess it's possible. "Hey you could work for a nice nonprofit with all these people who share your weird values about charity, and join their social graph!" sounds doable. Which makes it a lot more damaging to fail.
People like me who are standardized-test-top-tier smart but whose backgrounds are pretty ordinary (I am inspired to post this in part because I had a conversation with someone else with the exact same experience, and realized this may be a pattern) don't tend to understand that they've traveled into a space of norms that is highly different than we're used to, when we join the EA social community. It just feels like "Oh! G... (read more)
Sorry to hear about your long, very difficult experience. I think part of what happened is that it did in fact get a lot harder to get a job at leading EA-motivated employers in the past couple years, but that wasn't clear to many EAs (including me, to some extent) until very recently, possibly as recently as this very post. So while it's good news that the EA community has grown such that these particular high-impact jobs can attract talent sufficient for them to be so competitive, it's unfortunate that this change wasn't clearer sooner, and posts like this one help with that, albeit not soon enough to help mitigate your own 1.5 years of suffering.
Also, the thing about some people not having runway is true and important, and is a major reason Open Phil pays people to take our remote work tests, and does quite a few things for people who do an in-person RA trial with us (e.g. salary, health benefits, moving costs, severance pay for those not made a subsequent offer). We don't want to miss out on great people just because they don't have enough runway/etc. to interact with our process.
FWIW, I found some of your comments about "elite culture" surprising. For context: I grew up in rur
... (read more)
The Economist has an article about China's top politicians' views on catastrophic risks from AI, titled "Is Xi Jinping an AI Doomer?"
... (read more)
I'm kind of confused by this. I went to LessOnline and Manifest and feel like I hardly heard any racist opinions. It's possible that such people don't talk to me, or that opinions the poster thinks are racist, I don't, but I dunno, I just didn't hear much of that kind of edginess. It was probably slightly less edgy than I expected.
I have some sympathy with the poster. I didn't like that Hanania was given top billing last year and pushed in the discord for that to change (and wrote this). I have literally taken flak for not being harsh enough there, but I stand by what I said - that status is something to be careful about when doling out, and that Hanania didn't deserve it. Not that he never would, but that he wasn't at the time.
To me it feels like those people who generate new ideas are pretty scattershot about it. Hanson has some great ideas and some pretty bad ones. But I think if he never felt comfortable saying a bad idea, he might not say some really good ones either.
The question then is whether it is ethical to have events that involve people with bad ideas and whether there are ways to minimise harms. I think yes to both. To me, the prediction market space is an unusually ... (read more)
I’m glad to see that Nonlinear’s evidence is now public, since Ben’s post did not seem to be a thorough investigation. As I said to Ben before he posted his original post, I knew of evidence that strongly contradicted his post, and I encouraged him to temporarily pause the release of his post so he could review the evidence carefully, but he would not delay.
I appreciate the spirit of this post as I am not a Yudkowsky fan, think he is crazy overconfident about AI, am not very keen on rationalism in general, and think the EA community sometimes gets overconfident in the views of its "star" members. But some of the philosophy stuff here seems not quite right to me, though none of it is egregiously wrong, and on each topic I agree that Yudkowsky is way, way overconfident. (Many professional philosophers are way overconfident too!)
As a philosophy of consciousness PhD: the view that animals lack consciousness is definitely an extreme minority view in the field, but it's not a view that no serious experts hold. Daniel Dennett has denied animal consciousness for roughly Yudkowsky-like reasons, I think. (EDIT: Actually maybe not: see my discussion with Michael St. Jules below. Dennett is hard to interpret on this, and also seems to have changed his mind to fairly definitively accept animal consciousness more recently. But his earlier stuff on this was at the very least opposed to confident assertions that we just know animals are conscious and that any theory that says otherwise is crazy.) And more definitely Peter Carruthers (https://scholar.google.... (read more)
Largely in response to the final paragraph of Ivy's comment: FWIW, as a woman in EA, I do not feel "healed" by Owen's post. I feel *very* annoyed and sorry for the person who was affected by Owen's behavior. In response to the final sentence ("extra obligations like board responsibilities on hold til you have things sorted"), I would be concerned if Owen was in a board position in EA because he has clearly proved himself incapable of doing so in a way that doesn't discredit legitimate actors in the space and cause harm. I'm surprised, and again really annoyed, this is already a topic of discussion.
Funnily enough, I think EA does worse than other communities / movements I'm involved with (grassroots animal advocacy & environmentalism). My partner and other friends (women) have often complained about various sexist issues when attending EA events, e.g. men talking over them, borderline aggressive physical closeness, dismissing their ideas, etc., to the point that they don't want to engage with the community. Experiences like this rarely, if ever, happen in other communities we hang out in. I think there are a few reasons why EA has been worse than other communities in my experience:
- I think our experiences differ on animal issues: as groups/movements professionalise, as has been happening over the past decade for animal welfare, the likelihood that men will abuse their positions of power increases dramatically. At the more grassroots level, power imbalances often aren't stark enough to lead to the types of issues that came out in the animal movement a few years back. EA has also been undergoing this professionalisation and consolidation of power, and it seems like the article above highlights the negative consequences of that.
- As has been noted many times, EA is current
... (read more)
Pointing out the 70% male number seems very relevant since issues like this may contribute to that number and will likely push other women (such as myself) away from the movement.
While I haven’t experienced men in EA being dismissive of my ideas (though that’s only my personal experience in a very small EA community) I have found that the people I have met in EA are much more open to talking about sex and sexual experiences than I am comfortable with in a professional environment. I have personally had a colleague in EA ask me to go to a sex party to try BDSM sex toys. This was very strange for me. I have worked as a teacher, as a health care professional, and have spent a lot of time in academic settings, and I have never had an experience like that elsewhere. I also felt that it was being asked because they were sussing out whether or not I was part of the “cool crowd” who was open about my sex life and willing to be experimental.
I found this especially strange because there seem to be a lot of norms around conversation in EA (the same person who asked me to go to that party has strong feelings about up-keeping these norms) but they for some reason don’t have norms around speaking about sexual relationships, which is taboo in every other professional setting I have been a part of. I think having stronger “norms” or whatever you want to call it, or making discussions like this more taboo in EA, would be a good start. This will make it less likely that people in EA will feel comfortable doing the things discussed in this article.
Some historical context on this issue. If Bostrom's original post was written around 1996 (as I've seen some people suggest), that was just after the height of the controversy over 'The Bell Curve' book (1994) by Richard Herrnstein & Charles Murray.
In response to the firestorm around that book, the American Psychological Association appointed a blue-ribbon committee of 11 highly respected psychologists and psychometricians to evaluate the Bell Curve's empirical claims. They published a report in 1996 on their findings, which you can read here, and summarized here. The APA committee affirmed most of the Bell Curve's key claims, and concluded that there were well-established group differences in average general intelligence, but that the reasons for the differences were not yet clear.
More recently, Charles Murray has reviewed the last 30 years of psychometric and genetic evidence in his book Human Diversity (2020), and in his shorter, less technical book Facing Reality (2021).
This is the most controversial topic in all of the behavioral sciences. It might be prudent for EAs to treat this whole controversy as an information hazard, in which learning about the scientific findings can be s... (read more)
Hello Peter, I will offer my perspective as a relative outsider who is not formally aligned with EA in any way but finds the general principle of "attempting to do good well" compelling and (e.g.) donates to Give Directly. I found Bostrom's explanation very offputting and am relieved that an EA institution has commented to confirm that racism is not welcome within EA. Given Bostrom's stature within the movement, I would have taken a lack of institutional comment as a tacit condonation and/or determination that it is more valuable to avoid controversy than to ensure that people of colour feel welcome within EA.
I encourage readers to consider whether they are the correct audience for this advice. As I understand it, this advice is directed at those for whom all of the following apply:
- Making a large impact on the world is overwhelmingly more important to you than other things people often want in their lives (such as spending a lot of time with friends/family, raising children, etc.)
- You have already experienced a normal workload of ~38h per week for at least a couple of years, and found this pretty easy/comfortable to maintain
- You generally consider yourself to be happy, highly composed and emotionally stable. You have no history of depression or other mood-related disorders.
If any of these things do not apply, this post is not for you! And it would probably be a huge mistake to seek out an adderall prescription.
This seems to be a false equivalence. There's a big difference between asking "did this writer, who wrote a bit about ethics and this person read, influence this person?" vs "did this philosophy and social movement, which focuses on ethics and this person explicitly said they were inspired by, influence this person?"
I agree with you that the question
has the answer
But the question
Is nevertheless sensible and cannot have the answer FTX.
UPDATE: less certain of the below. Be sure to read this comment by Cremer disputing Torres's account https://forum.effectivealtruism.org/posts/vv7FBtMxBJicM9pae/democratising-risk-a-community-misled?commentId=CwxqjeG8qqwy8gz4c
The fact that Torres was a co-author certainly does change the way I interpret the original post. For example, Cremer writes of the review process, “By others we were accused of lacking academic rigour and harbouring bad intentions.”
Before I knew about the Torres part, that sounded more troubling - it would maybe reflect badly on EA culture if reviewers were accusing Cremer and Kemp of these things just for writing “Democratising Risk”. I don’t think it’s a good paper, but I don’t think the content of the final paper is evidence of bad intentions.
But to accuse Torres of having bad intentions and lacking academic rigor? Reviewers would have been absolutely right to do so. By the time the paper was circulating, presumably Torres had already begun their campaign of slander against various members of the longtermist and EA communities.
Jonathan Mustin added the ability to copy and paste footnotes from google docs into the Forum, which has been one of our most oft-requested features.
If anyone has any neartermist community building ideas, I'd be happy to evaluate them at any scale (from under $500K to $3M+). I'm on the EA Infrastructure Fund and helping fund more neartermist ideas is one of my biggest projects for the fund. You can contact me at peter@rethinkpriorities.org to discuss further (though note that my grantmaking on the EAIF is not a part of my work at Rethink Priorities).
Additionally, I'd be happy to discuss with anyone who wants seed funding in global poverty, neartermist EA community building, mental health, family planning, wild animal suffering, biorisk, climate, or broad policy and see how I can get them started.
I strongly agree with all this. Another downside I've felt from this exercise is it feels like I've been dragged into a community ritual I'm not really a fan of where my options are a) tacitly support (even if it is just deleting the email where I got the codes with a flicker of irritation) b) an ostentatious and disproportionate show of disapproval.
I generally think EA- and longtermist- land could benefit from more 'professional distance': that folks can contribute to these things without having to adopt an identity or community that steadily metastasises over the rest of their life - with at-best-murky EV to both themselves and the 'cause'. I also think particular attempts at ritual often feel kitsch and prone to bathos: I imagine my feelings towards the 'big red button' at the top of the site might be similar to how many Christians react to some of their brethren 'reenacting' the crucifixion themselves.
But hey, I'm (thankfully) not the one carrying down the stone tablets of community norms from the point of view of the universe here - to each their own. Alas this restraint is not universal, as this is becoming a (capital C) Community ritual, where 'success' or 'failu... (read more)
I really like the specific numbers people are posting. I'll add my own (rough estimates) from the ~5 months I spent applying to roles in 2018.
Context: In spring 2018, I attended an event CEA ran for people with an interest in operations, because Open Phil referred me to them; this is how I wound up deciding to apply to most of the roles below. Before attending the operations event, I'd started two EA groups, one of which still existed, and spent ~1 year working 5-10 hours/week as a private consultant for a small family foundation, doing a combination of research and operations work. All of the below experiences were specific to me; others may have gone through different processes based on timing, available positions, prior experience with organizations, etc.
- CEA (applied to many positions, interviewed for all of them at once, didn't spend much additional time vs. what I'd have done if I just applied to one)
- ~4 hours of interview time before the work trial, including several semi-casual conversations with CEA staff at different events about roles they had open.
- ~2-hour work trial task, not very intense compared to Open Phil's tasks
- 1.5-week work trial at CEA; th
... (read more)
One person I was thinking about when I wrote the post was Mehdi Hasan. According to Wikipedia:
Mehdi has spoken several times at the Oxford Union and also in a recent public debate on antisemitism, so clearly he's not beyond the pale for many.
I personally also think that the "from the river to the sea" chant is pretty analogous to, say, white nationalist slogans. It does seem to have a complicated history, but in the wake of the October 7 attacks its association with Hamas should I think put it beyond the pale. Nevertheless, it has been defended by Rashida Tlaib. In general I am in favor of people being able to make arguments like hers, but I suspect that if Hanania were to make an argument for why a white nationalist slogan should be interpreted positively, it would be counted as a strong point against him.
I expect that either Hasan or Tlaib, were they interested in prediction markets, would have been treated in a similar way to Hanania by the Manifest organiz... (read more)
Crossposted from LessWrong.
Maybe I'm being cynical, but I'd give >30% that funders have declined to fund AI Safety Camp in its current form for some good reason. Has anyone written the case against? I know that AISC used to be good by talking to various colleagues, but I have no particular reason to believe in its current quality.
- MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.
- If so, AISC might not make efficient use of mentor / PI time, which is a key goal of MATS and one of the reasons it's been successful.
- Why does the founder, Remmelt Ellen, keep linkposting writing by Forrest Landry which I'm 90% sure is obvious crankery? It's not just my opinion; Paul Christiano said "the entire scientific community would probably consider this writing to be crankery", one post was so obviously flawed it gets -46 karma, and generally the community response has been extremely negative. Some AISC work is directly about the content in question. This seems like a concern especially given the ph
... (read more)
I run an advocacy nonprofit, 1Day Sooner. When good things happen that we have advocated for, it raises the obvious question, "were we the but-for cause?"
A recent experience in our malaria advocacy work (W.H.O. prequalification of the R21 vaccine, a key advocacy target of ours) is exemplary. Prequalification was on the critical path for malaria vaccine deployment. Based on analysis of public sources and conversations with insiders, we came to the view that there was friction and possibly political pressure delaying prequalification from occurring as quickly as would be ideal. We decided to focus public pressure on a faster process (by calling for a prequalification timeline, asking Peter Singer to include the request in his op-ed on the subject, discussing the issue with relevant stakeholders, and asking journalists to inquire about it). We thought it would take at least till January and probably longer. Then a few days before Christmas, a journalist we were talking to sent us a W.H.O. press release -- that morning prequalification had been announced. Did it happen sooner because of us?
The short answer is we don't know. The reason I'm writing about it is that it highlights a ... (read more)
I tried starting from the beginning of the appendix, and almost immediately encountered a claim for which I feel Nonlinear has overstated their evidence.
Were Alice and Chloe "advised not to spend time with ‘low value people’, including their families, romantic partners, and anyone local to where they were staying, with the exception of guests/visitors that Nonlinear invited"? This is split into three separate rebuttals (family, romantic partners, and locals).
Nonlinear provides screenshots demonstrating that they encouraged Alice and Chloe to regularly spend time with their families, and encouraged Chloe to spend time with her boyfriend as well as letting him live with them... and also, in their own words (which I have reproduced verbatim below) they did, in fact, advise Alice to hang out with EAs they knew instead of family once, and instead of locals at least twice.
Their reporting of the family advice:
... (read more)
Another random spot check: page 115 of the Google Doc. (I generated a random number between 1 and 135.)
This page is split between two sections. The first starts on page 114:
The quote given in support of this is "I think Emerson is very ambitious and would like a powerful role in EA/X-risk/etc." In my opinion, the quote and the paraphrase are very different things, especially since, as it happens, that quote is not even from the original post, it's from a comment.
The Google Doc then goes on to describe the reasons Drew believes that Emerson is not ambitious for status within EA. This is ultimately a character judgement, and I don't have a strong opinion about who is correct about Emerson's character here. However, I do not think it's actually important to the issue at hand, since the purported ambition was not in fact load-bearing to the original argument in any way.
The second section is longer, and goes on for several pages. It con... (read more)
I find this to be a pretty poor criticism, and its inclusion makes me less inclined to accept the other criticisms in this piece at face value.
Updating your beliefs and changing your mind in light of new evidence is undoubtedly a good thing. To say that doing so leaves you with concerns about Connor's "trustworthiness and character" seems not only unfair, but also creates a disincentive for people to publicly update their views on key issues, for fear of this kind of criticism.
It's not clear to me that the core point of the essay goes through. For instance, the same amount of money as applied to malaria would also have helped many people, driven down prices, encouraged innovation—maybe the equivalent would have been a malaria vaccine, a gene drive, or mass fumigations.
i.e., it seems plausible that both of these could be true:
I don't think I understand the structure of this estimate, or else I might understand and just be skeptical of it. Here are some quick questions and points of skepticism.
Starting from the top, you say:
This section appears to be an estimate of all-things-considered feasibility of transformative AI, and draws extensively on evidence about how lots of things go wrong in practice when implementing complicated projects. But then in subsequent sections you talk about how even if we "succeed" at this step there is still a significant probability of failing because the algorithms don't work in a realistic amount of time.
Can you say what exactly you are assigning a 60% probability to, and why it's getting multiplied by ten other factors? Are you saying that there is a 40% chance that by 2043 AI algorithms couldn't yield AGI no matter how much serial time and compute they had available? (It seems surprising to claim that even by 2023!) Presumably not that, but what exactly are you giving a 60% chance?
(ETA: after reading later sectio... (read more)
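For intuition, here is a minimal sketch (with made-up, illustrative factor values, not the estimate's actual figures) of how multiplying ten step-wise probabilities drives the headline number down, and why it matters what each factor conditions on:

```python
# Hypothetical factors only -- chosen to illustrate the structure, not the real estimate.
factors = [0.6, 0.8, 0.7, 0.9, 0.8, 0.7, 0.9, 0.8, 0.85, 0.75]

joint = 1.0
for p in factors:
    joint *= p  # assumes each factor is conditional on all previous steps succeeding

print(f"Joint probability across all ten steps: {joint:.3f}")
# Even though most individual factors are 0.7-0.9, the product is under 0.08,
# so the meaning of each factor (and what it conditions on) does most of the work.
```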
I appreciate this post a lot, particularly how you did not take more responsibility than was merited and how you admitted thinking it wasn't a red flag that SBF skirted regulations because the regulations were probably bad. I appreciated how you noticed hindsight bias and rewritten history creeping in, and I appreciate how you don't claim that more ideal actions from you would have changed the course of history but nonetheless care about your small failures here.
Do you think EA's self-reflection about this is at all productive, considering most people had even less information than you? My (very, very emotional) reaction to this has been that most of the angst about how we somehow should have known or had a different moral philosophy (or decision theory) is a delusional attempt to feel in control. I'm just curious to hear in your words if you think there's any value to the reaction of the broader community (people who knew as much or less about SBF before 11/22 than you).
I don't have terribly organized thoughts about this. (And I am still not paying all that much attention—I have much more patience for picking apart my own reasoning processes looking for ways to improve them, than I have for reading other people's raw takes :-p)
But here are some unorganized and half-baked notes:
I appreciated various expressions of emotion. Especially when they came labeled as such.
I think there was also a bunch of other stuff going on in the undertones that I don't have a good handle on yet, and that I'm not sure about my take on. Stuff like... various people implicitly shopping around proposals about how to readjust various EA-internal political forces, in light of the turmoil? But that's not a great handle for it, and I'm not terribly articulate about it.
There's a phenomenon where a gambler places their money on 32, and then the roulette wheel comes up 23, and they say "I'm such a fool; I should have bet 23".
More useful would be to say "I'm such a fool; I should have noticed that the EV of this gamble is negative." Now at least you are... (read more)
I roll to disbelieve on these numbers. "Multiple reports a week" would be >100/year, which from my perspective doesn't seem consistent with the combination of (1) the total number of reports I'm aware of being a lot smaller than that, and (2) the fact that I can match most of the cases in the Time article (including ones that had names removed) to reports I already knew about.
(It's certainly possible that there was a particularly bad week or two, or that you're getting filled in on some sort of backlog.)
I also don't believe that a law school, or any group with 1300 members in it, would have zero incidents in 3-5 years. That isn't consistent with what we know about the overall rate of sexual misconduct in the US population; it seems far more likely that incidents within those groups are going unreported, or are being reported somewhere you don't see and being kept quiet.
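A rough back-of-the-envelope check (using an assumed incidence rate for illustration, not data about any particular group) of why zero incidents in a 1,300-person group over several years would be surprising:

```python
# Illustrative base-rate arithmetic; the annual rate below is an assumption, not a measurement.
members = 1300
years = 4             # midpoint of the 3-5 year window mentioned above
annual_rate = 0.01    # assume 1% of members experience some incident per year

expected_incidents = members * years * annual_rate
print(f"Expected incidents over {years} years: {expected_incidents:.0f}")
# Roughly 50 expected incidents -- so "zero reports" more plausibly reflects
# under-reporting than a true absence of incidents.
```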
These are quotations from a table that are intended to illustrate "difficult tradeoffs". Does seeing them in context change your view at all?
(Disclosure: married to Wise)
I trade global rates for a large hedge fund, so I think I can give the inside view on how financial market participants think about this.
First, the essential claim is true - no one in rates markets talks about the theme of AI driving a massive increase in potential growth.
However, even if this did become accepted as a potential scenario it would be very unlikely to show up in government bond yields so using yields as evidence of the likelihood of the scenario is, imho, a mistake. I'll give a number of reasons.
- Rates markets don't price in events (even ones that are fully known) more than one or two years ahead of time (Y2K, contentious elections in Italy or France, etc.). This is generally outside participants' time horizons, but also...
- A lot can happen in two years (much less ten years). A major terrorist attack, a pandemic, or a nuclear war, to name three possibilities, would each fully torpedo any bet you would make on AI, no matter how certain you are of the outcome.
- The premise is not obviously true that higher growth leads to higher real yields. That is one heuristic among many when thinking about what real yields should do. It's important to think about the mechanism here
... (read more)
A bunch of things that all seem true to me:
Here are my high-level thoughts on the comments so far on this report:
- This is a detailed report, where a lot of work has been put in, by one of EA's foremost scholars on the intersection of climate change and other global priorities.
- So it'd potentially be quite valuable for people with either substantial domain expertise or solid generalist judgement to weigh in here on object-level issues, critiques, and cruxes, to help collective decision-making.
- Unfortunately, all of the comments here are overly meta. Out of the ~60 comments so far on this thread, only 0.5 approach anything like technical criticism, cruxes, or even engagement.
- The 0.5 in question is this comment by Karthik.
- (EDIT: I think Noah's comment here qualifies)
- Compare, for example, the following comments to one of RP's cultured meat reports.
- After saying that, I will hypocritically continue to follow the streak of being meta while not having read the full report.
- I think I'm confused about the quality of the review process so far. Both the number and quality of the reviewers John contacted for this book seemed high. However, I couldn't figure out what the methodology for seeking reviews is here.
- T
... (read more)
It might help to imagine a hard takeoff scenario using only known sorts of NN & scaling effects... (LW crosspost, with >82 comments)
Rest of story moved to gwern.net.
First of all, I'm sorry to hear you found the paper so emotionally draining. Having rigorous debate on foundational issues in EA is clearly of the utmost importance. For what it's worth when I'm making grant recommendations I'd view criticizing orthodoxy (in EA or other fields) as a strong positive so long as it's well argued. While I do not wholly agree with your paper, it's clearly an important contribution, and has made me question a few implicit assumptions I was carrying around.
The most important updates I got from the paper:
- Put less weight on technological determinism. In particular, defining existential risk in terms of a society reaching "technological maturity" without falling prey to some catastrophe frames technological development as being largely inevitable. But I'd argue even under the "techno-utopian" view, many technological developments are not needed for "technological maturity", or at least not for a very long time. While I still tend to view development of things like advanced AI systems as hard to stop (lots of economic pressures, geographically dispersed R&D, no expert consensus on whether it's good to slow down/accelerate), I'd certainly like to see mor
... (read more)
I personally do not think it is appropriate to include an essay in a syllabus or engage with it in a forum post when (1) this essay characterizes the views it argues against using terms like 'white supremacy' and in a way that suggests (without explicitly asserting it, to retain plausible deniability) that their proponents—including eminently sensible and reasonable people such as Nick Beckstead and others—are white supremacists, and when (2) its author has shown repeatedly in previous publications, social media posts and other behavior that he is not writing in good faith and that he is unwilling to engage in honest discussion.
(To be clear: I think the syllabus is otherwise great, and kudos for creating it!)
EDIT: See Seán's comment for further elaboration on points (1) and (2) above.
Thanks for sharing these studies explaining why you are doing this. Unfortunately, in general I am very skeptical of the sort of studies you are referencing. The researchers typically have a clear agenda - they know what conclusions they want to come to ahead of time, and what conclusions will be most advantageous to their careers - and the statistical rigour is often lacking, with small sample sizes, lack of pre-registration, p-hacking, and other issues. I took a closer look at the four sources you referenced to see if these issues applied.
The link you provide here, to a 2014 article in National Geographic, has a lot of examples of cases where male researchers supposedly overlooked the needs of women (e.g. not adequately studying how women's biology affects how drugs and seat belts should work, or the importance of cleaning houses), and suggests that increasing number of female scientists helped address this. But female scientists being better a... (read more)
[Written in a personal capacity, etc. This is the first of two comments: second comment here]
Hello Will. Glad to see you back engaging in public debate and thanks for this post, which was admirably candid and helpful about how things work. I agree with your broad point that EA should be more decentralised and many of your specific suggestions. I'll get straight to one place where I disagree and one suggestion for further decentralisation. I’ll split this into two comments. In this comment, I focus on how centralised EA is. In the other, I consider how centralised it should be.
Given your description of how EA works, I don't understand how you reached the conclusion that it's not that centralised. It seems very centralised - at least, for something portrayed as a social movement.
Why does it matter to determine how 'centralised' EA is? I take it the implicit argument is EA should be "not too centralised, not too decentralised" and so if it's 'very centralised' that's a problem and we consider doing something. Let's try to leave aside whether centralisation is a good thing and focus on the factual claim of how centralised EA is.
You say, in effect, "not that centralised",... (read more)
I suspect that if transformative AI is 20 or even 30 years away, AI will still be doing really big, impressive things in 2033, and people at that time will get a sense that even more impressive things are soon to come. In that case, I don't think many people will think that AI safety advocates in 2023 were crying wolf, since one decade is not very long, and the importance of the technology will have only become more obvious in the meantime.
Where I agree:
- Experimentation with decentralised funding is good. I feel it's a real shame that EA may not end up learning very much from the FTX regrant program because all the staff at the foundation quit (for extremely good reasons!) before many of the grants were evaluated.
- More engagement with experts. Obviously, this trades off against other things and it's easier to engage with experts when you have money to pay them for consultations, but I'm sure there are opportunities to engage with them more. I suspect that a lot of the time the limiting factor may simply be people not knowing who to reach out to, so perhaps one way to make progress on this would be to make a list of experts who are willing for people at EA orgs to reach out to them, subject to availability?
- I would love to see more engagement from Disaster Risk Reduction, Future Studies, Science and Technology Studies, etc. I would encourage anyone with such experience to consider posting on the EA forum. You may want to consider extracting this section into a separate forum post for greater visibility.
- I would be keen to see experiments where people vote on funding decisions (although I would be surprised if this were
... (read more)
If 100% of these suggestions were implemented, I would expect EA in 5 years' time to look significantly worse (less effective, helping fewer people/animals, and possibly having more FTX-type scandals).
If the best 10% were implemented I could imagine that being an improvement.
Nice. Thanks. Really well written, very clear language, and I think this is pointed in a pretty good direction. Overall I learned a lot.
I do have the sense it maybe proves too much -- i.e. if these critiques are all correct then I think it's surprising that EA is as successful as it is, and that raises alarm bells for me about the overall writeup.
I don't see you doing much acknowledging what might be good about the stuff that you critique -- for example, you critique the focus on individual rationality over e.g. deferring to external consensus. But it seems possible to me that the movement's early focus on individual rationality was the cause of attracting great people into the movement, and that without that focus EA might not be anything at all! If I'm right about that then are we ready to give up on whatever power we gained from making that choice early on?
Or, as a metaphor, you might be saying something like "EA needs to 'grow up' now" but I am wondering if EA's childlike nature is part of its success and 'growing up' would actually have a chance to kill the movement.
Let us take a moment of sympathy for the folks at CEA (who are, after all, our allies in the fight to make the world better). Scant weeks ago they were facing harsh criticism for failing to quickly make the conventional statement about the FTX scandal. Now they're facing criticism for doing exactly that. I'm glad I'm not comms director at CEA for sure.
I think this post is very accurate, but I worry that people will agree with it in a vacuous way of "yes, there is a problem, we should do something about it, learning from others is good". So I want to make a more pointed claim: I think that the single biggest barrier to interfacing between EAs and non-EAs is the current structure of community building. Community-building is largely structured around creating highly-engaged EAs, usually through recruiting college students or even high-school students. These students are not necessarily in the best position to interface between EA and other ways of doing good, precisely because they are so early into their careers and don't necessarily have other competencies or viewpoints. So EA ends up as their primary lens for the world, and in my view that explains a sizable part of EA's quasi-isolationist thinking on doing good.
This doesn't mean all EAs who joined as college students (like me) end up as totally insular - life puts you into environments where you can learn from non-EAs. But that isn't the default, and especially outside of global health and development, it is very easy for a young highly-engaged EA to avoid learning about doing good from non-EAs.
Thanks for the question Gideon, I'll just respond to this question directed at me personally.
When preparing for the interview I read about his frugal lifestyle in multiple media profiles of Sam and sadly simply accepted it at face value. One that has stuck in my mind up until now was this video that features Sam and the Toyota Corolla that he (supposedly) drove.
I can't recall anyone telling me that that was not the case, even after the interview went out, so I still would have assumed it was true two weeks ago.
I imagine it feels challenging to share that and I applaud you for that.
While my EA experiences have been much more positive than yours, I do not doubt your account. For many of the points you mention, I can see milder versions in my own experience. I believe your post points towards something important.
Just a note from someone who is an FTX customer.
I moved some of my crypto holdings to FTX because I trusted them and Sam and wanted the profits from my crypto holdings to go to EA/FTX Future Fund. FTX always told me my funds would be secure; I did not trade leveraged funds, so I'm the only rightful owner of that crypto, and FTX has likely been using it to make money on leveraged instruments. This seems like fraud, and the optics of this for the EA community, combined with the already difficult optics of longtermism, seem to me like they will be very bad.
I'm privileged - my holdings in FTX were 2% of my net worth (I enjoy following crypto), so I'll be fine, but many will not be.
Not the intended audience, but as a US person who lives in the Bay Area, I enjoyed reading this really detailed list of what's often unusual or confusing to people from a specific different cultural context.
Emile Torres (formerly Phil) just admitted on their Twitter that they were a co-author of a penultimate version of this paper. It is extremely deceptive not to disclose this contribution in the paper or in the Forum post. At the point this paper was written, Torres had been banned from the EA Forum and multiple people in the community had accused Torres of harassing them. Do you think that might have contributed to the (alleged) reception of your paper?
This argument has some force but I don't think it should be overstated.
Re perpetual foundations: Every mention of perpetual foundations I can recall has opened with the Franklin example, among other historical parallels, so I don't think its advocates could be accused of being unaware that the idea has been attempted!
It's true at least one past example didn't pan out. But cost-benefit analysis of perpetual foundations builds in an annual risk of misappropriation or failure. In fact such analyses typically expect 90%+ of such foundations to achieve next to nothing, maybe even 99%+. Like business start-ups, the argument is that the 1 in 100 that succeeds will succeed big and pay for all the failures.
So seeing failed past examples is entirely consistent with the arguments for them and the conclusion that they are a good idea.
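As a toy illustration (all numbers are assumed for the sake of the example, not drawn from any actual analysis), the expected-value logic looks something like this:

```python
# Hypothetical payoff structure for a perpetual foundation, expressed as multiples
# of the initial endowment's value. Numbers are illustrative only.
p_success = 0.01            # assume 99% of such foundations achieve next to nothing
payoff_if_success = 1000    # assume the rare success pays off very large
payoff_if_failure = 0

expected_value = p_success * payoff_if_success + (1 - p_success) * payoff_if_failure
print(f"Expected value: {expected_value:.0f}x the endowment")
# A 10x expected return despite a 99% failure rate -- the same structure as
# start-up investing, where the winners pay for the losers.
```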
Re communist revolutions: Many groups have tried to change how society is organised or governed hoping that it will produce a better world. Almost all the past examples of such movements I can think of expected benefits to come fairly soon — within a generation or two at most — and though advocates for such changes usually hoped the benefits will be long-lasting, ... (read more)
Effective Altruists want to effectively help others. I think it makes perfect sense for this to be an umbrella movement that includes a range of different cause areas. (And literally saving the world is obviously a legitimate area of interest for altruists!)
Cause-specific movements are great, but they aren't a replacement for EA as a cause-neutral movement to effectively do good.
I donated $5800.
I also donated $5,800. Thanks Andrew for making this post – this seems like a somewhat rare opportunity for <$10k donations to be unusually impactful.
The section on expected value theory seemed unfairly unsympathetic to TUA proponents.
So, I think framing it as "here is this gaping hole in this worldview" is a bit unfair. Proponents of TUA pointed out the hole and are the main people trying to resolve the problem, and any alternatives also seem to have dire problems.
Thanks, I definitely agree that there should be more prioritization research. (I work at GPI, so maybe that’s predictable.) And I agree that for all the EA talk about how important it is, there's surprisingly little really being done.
One point I'd like to raise, though: I don’t know what you’re looking for exactly, but my impression is that good prioritization research will in general not resemble what EA people usually have in mind when they talk about “cause prioritization”. So when putting together an overview like this, one might overlook some of even what little prioritization research is being done.
In my experience, people usually imagine a process of explicitly listing causes, thinking through and evaluating the consequences of working in each of them, and then ranking the results (kind of like GiveWell does with global poverty charities). I expect that the main reason more of this doesn’t exist is that, when people try to start doing this, they typically conclude it isn’t actually the most helpful way to shed light on which cause EA actors should focus on.
I think that, more often than not, a more helpful way to go a... (read more)
I am open to trade, but I would like something in return, and my guess is it would have to be pretty valuable, since option value and freedom of expression are quite valuable to me. I don't see a basis on which the EA community would have any right to "demand" such a thing from rationalists like myself.
Animal Justice Appreciation Note
Animal Justice et al. v A.G. of Ontario 2024 was recently decided and struck down large portions of Ontario's ag-gag law. A blog post is here. The suit was partially funded by ACE, which presumably means that many of the people reading this deserve partial credit for donating to support it.
Thanks to Animal Justice (Andrea Gonsalves, Fredrick Schumann, Kaitlyn Mitchell, Scott Tinney), co-applicants Jessica Scott-Reid and Louise Jorgensen, and everyone who supported this work!
This is really sad news. I hope everyone working there has alternative employment opportunities (far from a given in academia!).
I was shocked to hear that the philosophy department imposed a freeze on fundraising in 2020. That sounds extremely unusual, and I hope we eventually learn more about the reasons behind this extraordinary institutional hostility. (Did the university shoot itself in the financial foot for reasons of "academic politics"?)
A minor note on the forward-looking advice: "short-term renewable contracts" can have their place, especially for trying out untested junior researchers. But you should be aware that they also filter out mid-career academics (especially those with family obligations) who could potentially bring a lot to a research institution, but would never leave a tenured position for a short-term one. Not everyone who is unwilling to gamble away their academic career is thereby a "careerist" in the derogatory sense.
Over the years, I’ve done a fair amount of community building, and had to deal with a pretty broad range of bad actors, toxic leadership, sexual misconduct, manipulation tactics and the like. Many of these cases were associated with a pattern of narcissism and dark triad spectrum traits, self-aggrandizing behavior, manipulative defense tactics, and unwillingness to learn from feedback. I think people with this pattern rarely learn and improve, and in most cases should be fired and banned from the community even if they are making useful contributions (and I have been involved with handling several such cases over the last decade). I think it’s important that more people learn to recognize this; I encourage you to read the two above-linked articles.
I feel worried that some readers of this Forum might think Owen matches that pattern. Knowing him professionally and to some degree personally, I think he clearly does not. I’ve collaborated and talked with him for hours in all kinds of settings, and based on my overall impression of his character, I understand his problematic behavior to have arisen from an inability to model others’ emotions, an inability to recognize that he ... (read more)
I think what Jonas has written is reasonable, and I appreciate all the work he did to put in proper caveats. I also don’t want to pick on Owen in particular here; I don’t know anything besides what has been publicly said, and some positive interactions I had with him years ago. That said: I think the fact that this comment is so highly upvoted indicates a systemic error, and I want to talk about that.
The evidence Jonas provides is equally consistent with “Owen has a flaw he has healed” and “Owen is a skilled manipulator who charms men, and harasses women”. And if women (such as myself) report he never harassed them, that’s still consistent with him being a serial predator who’s good at picking targets. I’m not arguing the latter is true- I’m arguing that Jonas’s comment is not evidence either way, and its 100+ karma count has me worried people think it is. There was a similar problem with the supportive comments around Nonlinear from people who had not been in subservient positions while living with the founders, although those were not very highly upvoted.
“If every compliment is equally strong evidence for innocence and skill at manipulation, doesn’t that leave people with n... (read more)
Huh, this feels like a somewhat weird post without mentioning the FTX settlement for $22.5M that EV just signed: https://restructuring.ra.kroll.com/FTX/Home-DocketInfo (Memo number 3745).
My guess is Open Phil is covering this, but my guess is also that there is a bunch of additional risk that funds you receive right now would become part of this settlement, which donors should be able to model.
My guess is you can't talk about this for legal reasons in a post like this (though that does seem sad, and my guess is you've been too risk-averse in the domain of sharing any information in this space publicly), but it seems important for people to know when assessing what is going on with EV and CEA.
Strongly, strongly, strongly agree. I was in the process of writing essentially this exact post, but am very glad someone else got to it first. The more I thought about it and researched, the more it seemed like convincingly making this case would probably be the most important thing I would ever have done. Kudos to you.
A few points to add:
- Under standard EA "on the margin" reasoning, this shouldn't really matter, but I analyzed OP's grants data and found that human GCR has been consistently funded 6-7x more than animal welfare (here's my tweet thread this is from)

- @Laura Duffy's recently published risk aversion analysis (for Rethink Priorities) basically does a lot of the heavy lifting here (bolding mine):
... (read more)
FYI, I made a spreadsheet a while ago which automatically pulls the latest OP grants data and constructs summaries and pivot tables to make this type of analysis easier.
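For readers who want to reproduce this kind of summary themselves, here is a minimal sketch, assuming the grants data has been exported to a local CSV; the filename and column names below are placeholders, not the actual spreadsheet's schema:

```python
import pandas as pd

# Placeholder filename and columns -- adjust to match however you export the grants data.
grants = pd.read_csv("open_phil_grants.csv")  # expected columns: year, cause_area, amount

# Total funding by cause area and year.
by_cause_year = grants.pivot_table(
    index="year",
    columns="cause_area",
    values="amount",
    aggfunc="sum",
    fill_value=0,
)
print(by_cause_year)

# Compare cumulative totals across two (illustratively named) cause areas.
totals = by_cause_year.sum()
ratio = totals.get("Global Catastrophic Risks", 0) / max(totals.get("Farm Animal Welfare", 0), 1)
print(f"GCR / animal welfare funding ratio: {ratio:.1f}")
```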
I also made these interactive plots which summarise all EA funding:
(COI note: I work at OpenAI. These are my personal views, though.)
My quick take on the "AI pause debate", framed in terms of two scenarios for how the AI safety community might evolve over the coming years:
- AI safety becomes the single community that's the most knowledgeable about cutting-edge ML systems. The smartest up-and-coming ML researchers find themselves constantly coming to AI safety spaces, because that's the place to go if you want to nerd out about the models. It feels like the early days of hacker culture. There's a constant flow of ideas and brainstorming in those spaces; the core alignment ideas are standard background knowledge for everyone there. There are hackathons where people build fun demos, and people figuring out ways of using AI to augment their research. Constant interactions with the models allows people to gain really good hands-on intuitions about how they work, which they leverage into doing great research that helps us actually understand them better. When the public ends up demanding regulation, there's a large pool of competent people who are broadly reasonable about the risks, and can slot into the relevant institutions and make them work well.
- AI sa
... (read more)
I think it would be helpful for you to mention and highlight your conflict-of-interest here.
I remember becoming much more positive about ads after starting work at Google. After I left, I slowly became more cynical about them again, and now I'm back down to ~2018 levels.
EDIT: I don't think this comment should get more than say 10-20 karma. I think it was a quick suggestion/correction that Richard ended up following, not too insightful or useful.
Let me justify my complete disagreement.
I read your comment as applying insanely high quality requirements to what's already an absolutely thankless task. The result of applying your standards would be that the OP would not get written. In a world where criticism is too expensive, it won't get produced. This is good if the criticism is substance-less, but bad if it's of substance.
Also, professional journalists are paid for their work. In case of posts like these, who is supposed to pay the wages and provide the manpower to fulfill requirements like "running it by legal"? Are we going to ask all EA organisations to pay into a whistleblower fund, or what?
Also, for many standards and codes of ethics, their main purpose is not to provide a public good, or to improve epistemics, but to protect the professionals themselves. (For example, I sure wish doctors would tell patients if any of their colleagues should be avoided, but this is just not done.) So unequivocally adhering to such professional standards is not the right goal to strive for.
I also read your comment as containing a bunch of leading questions that presupposed a negative conclusion. Over eight paragraphs of questions, you'r... (read more)
To put it more strongly: I would like to make clear that I have never heard any claims of improper conduct by Drew Spartz (in relation to the events discussed in this post or otherwise).
Application forms for EA jobs often give an estimate for how long you should expect it to take; in my experience, these estimates are often *wildly* too low. (And others I know have said this too.) This is bad because it makes the estimates unhelpful for planning, and because it probably makes people feel bad about themselves, or worry that they're unusually slow, when they take longer than the estimate.
Imo, if something involves any sort of writing from scratch, you should expect applicants to take at least an hour, and possibly more. (For context, I've seen application forms which say 'this application should take 10 minutes' and more commonly ones estimating 20 minutes or 30 minutes).
It doesn’t take long to type 300 words if you already know what you’re going to say and don’t particularly care about polish (I wrote this post in less than an hour probably). But job application questions—even ‘basic’ ones like ‘why do you want this job?’ and ‘why would you be a good fit?’—take more time. You may feel intuitively that you’d be a good fit for the job, but take a while to articulate why. You have to think about how your skills might help with the job, perhaps cross-referencing with ... (read more)
Thanks to the authors for taking the time to think about how to improve our organization and the field of AI takeover prevention as a whole. I share a lot of the concerns mentioned in this post, and I’ve been spending a lot of my attention trying to improve some of them (though I also have important disagreements with parts of the post).
Here’s some information that perhaps supports some of the points made in the post and adds texture, since it seems hard to properly critique a small organization without a lot of context and inside information. (This is adapted from my notes over the past few months.)
Most importantly, I am eager to increase our rate of research output – and critically to have that increase be sustainable because it’s done by a more stable and well-functioning team. I don’t think we should be satisfied with the current output rate, and I think this rate being too low is in substantial part due to not having had the right organizational shape or sufficiently solid management practices (which, in empathy with the past selves of the Redwood leadership team, is often a tricky thing for young organizations to figure out, and is perhaps especially tricky in this field).
I t... (read more)
(context: worked at FHI for 2 years, no longer affiliated with it but still in touch with some people who are)
I'd probably frame/emphasize things a bit differently myself but agree with the general thrust of this, and think it'd be both overdue and in everyone's interest.
The obvious lack of vetting of the apology was pretty disqualifying w.r.t. judgment for someone in such a prominent institutional and community position, even before getting to the content (on which I've commented elsewhere).
I'd add, re: pre-existing issues, that FHI as an institution has failed at doing super basic things like at least semi-regularly updating key components of their website*; the org's shortcomings re: diversity have been obvious from the beginning and the apology was the last nail in the coffin re: chances for improving on that front as long as he's in charge; and I don't think I know anyone who thinks he adds net positive value as a manager** (vs. as a researcher, where I agree he has made important contributions, but that could continue without him wasting a critical leadership position, and as a founder, where his work is done).
*e.g. the news banner thing displays 6 yea... (read more)
Removing Claire from the EVF Board because she approved the Wytham Abbey purchase seems tremendously silly to me. FTX is a serious scandal that impacted millions of people; EA projects buying conference venues or offices isn't.
Edward Kmett's take on that topic seems correct to me:
... (read more)
Atlas at some point bought this table, I think: https://sisyphus-industries.com/product/metal-coffee-table/. At that link it costs around $2200, so I highly doubt the $10,000 number.
Lightcone then bought that table from Atlas a few months ago at the listing price, since Jonas thought the purchase seemed excessive, so Atlas actually didn't end up paying anything. I am really glad we bought it from them, it's probably my favorite piece of furniture in the whole venue we are currently renovating.
If you think it was a waste of money, I have made much worse interior design decisions (in general, furniture is really annoyingly expensive, and I've bought couches for $2000 that turned out to just not work for us at all and were too hard to sell), and I consider this one a pretty strong hit. (To clarify, the reason it's so expensive is that it's a kinetic sculpture with a moving magnet and a magnetic ball that draws programmable patterns into the sand at the center of the table, so it's not just, like, a pretty coffee table.)
The table is currently serving as a centerpiece of our central worksp... (read more)
Hey everyone, the moderators want to point out that this topic is heated for several reasons:
So we want to ask everyone to be especially understanding and generous when discussing topics this sensitive.
And as a reminder, harassment is unacceptable. One resource that exists for this is the Community Health Team at CEA. You can get in touch with the team here. If you ever experience harassment of any kind on the Forum, please reach out to the moderation team.
Edit: added the last bullet point after a useful comment
The casual assumption that people make that obviously the only reason Caroline could have become CEO was because she was sleeping with SBF is annoying when I see it on Twitter or some toxic subreddit. Here I expect better. Plenty of people at FTX and Alameda were equally young and equally inexperienced. The CTO (a similarly important role at a tech company) of FTX, Gary Wang, was 29. Sam Trabucco, the previous Alameda co-CEO, seems to be about the same. I have seen no reason to think that Caroline was particularly unusual in her age or experience relative to others at FTX and Alameda.
Just also want to emphasise Lizka's role in organising and spearheading this, as well as her conscientiousness and clear communication at every step of the process - I've enjoyed being part of this, and am personally super grateful for all the work she has put into this contest.
It seems that half of these examples are from 15+ years ago, from a period for which Eliezer has explicitly disavowed his opinions (and the ones that are not strike me as most likely correct, like treating coherence arguments as forceful, and expecting AI progress to be discontinuous and localized and to require relatively little compute).
Let's go example-by-example:
1. Predicting near-term extinction from nanotech
This critique strikes me as about as sensible as digging up someone's old high-school essays and critiquing their stance on communism or the criminal justice system. I want to remind any reader that this is an opinion from 1999, when Eliezer was barely 20 years old. I am confident I can find crazier and worse opinions for every single leadership figure in Effective Altruism, if I am willing to go back to what they thought while they were in high-school. To give some character, here are some things I believed in my early high-school years:
- The economy was going to collapse because the U.S. was establishing a global surveillance state
- Nuclear power plants are extremely dangerous and any one of them is quite likely to explode in a given year
- We could have e
... (read more)
Just to note that the boldfaced part has no relevance in this context. The post is not attributing these views to present-day Yudkowsky. Rather, it is arguing that Yudkowsky's track record is less flattering than some people appear to believe. You can disavow an opinion that you once held, but this disavowal doesn't erase a bad prediction from your track record.
Oregonian here, born and raised. I don’t live in OR-6 but can see it from my home. I’m by no means a member of EA but I’m aware of it and until now had a generally favorable impression of you all.
I hope that rather than donating, folks in this thread will think about what they’re doing and whether it’s a good idea. The most obvious effect of this effort has been to 5-10x the total spending in this race. It’s pretty easy to read it as an experiment to see if CEA can buy seats in Congress. That's not innovative, it’s one of the oldest impulses in politics: we’re rich, let’s put my friend in power.
Further, it sounds like your friend Carrick is a great guy, but he’s got many defects as a candidate. He’s only lived in Oregon for about 18 months since college. From the few interviews he’s given, he doesn’t seem to have much familiarity or even really care about key issues in Oregon (in particular, the few interviews he’s given show that he lacks a nuanced understanding of issues like forest policy and drug decriminalization). He does not appear to have reached out to local leaders or tried to do any of the local network building you’d expect of a good representative. According to OPB he’s... (read more)
Thanks for the thoughtful comment! Without commenting on the candidacy or election overall, a response (lightly edited for clarity) to your point about pandemics:
You emphasize pandemic expertise, but pandemic prevention priorities are arguably more relevant to who will make a difference. It might not take much expertise to think that now is a bad time for Congress to slash pandemic prevention funding, which happened despite some lobbying against it. And for harder decisions, a non-expert member of Congress can hire or consult with expert advisors, as is common practice. Instead of expertise being most important in this case, a perspective I've heard from people very familiar with Congress is that Congress members' priorities are often more important, since members face tough negotiations and tradeoffs. So maybe what's lacking in Congress isn't pandemic-related expertise or lobbying, but willingness to make it a priority to keep something like covid from happening again.
It's a little aside from your point, but good feedback is not only useful for emotionally managing the rejection -- it's also incredibly valuable information! Consider especially that someone who is applying for a job at your organization may well apply for jobs at other organizations. Telling them what is good or bad about their application will help them improve in that process, and make them more likely to find something that is the right fit for them. It could be vital in helping them understand what they need to do to position themselves to be more useful to the community, or at least it could save them the time and effort of applying for more jobs with the same requirements they didn't meet -- and save the hiring teams there the time and effort of rejecting them.
A unique characteristic of EA hiring is that it's often good for your goals to help candidates who didn't succeed at your process succeed at something else nearby. I often think we don't realize how significantly this shifts our incentives in cases like these.
The Symmetry Theory of Valence sounds wrong to me and is not substantiated by any empirical research I am aware of. (Edited to be nicer.) I'm sorry to post a comment so negative and non-constructive, but I just don't want EA people to read this and think it is something worth spending time on.
As far as I can tell, nobody at the Qualia Research Institute has a PhD in Neuroscience or has industry experience doing equivalent level work. Keeping in mind credentialism is bad, I am still pointing out their lack of neuroscience credentials because I am confused by how overwhelmingly confident they are in their claims, their incomprehensible use of neuro jargon, and how dismissive they are of my expertise. (Edited to be nicer.) https://www.qualiaresearchinstitute.org/team
There are a lot of things I don't understand about STV, but the primary one is:
Please provide evidence that "dissonance in the brain" as measured by a "Consonance Dissonance Noise Signature" is associated with suffering. This should be an easy study to run. Put people in an fMRI scanner, ask them ... (read more)
Toby Ordering is really good.
If you agree it is a serious and baseless allegation, why do you keep engaging with him? The time to stop engaging with him was several years ago. You had sufficient evidence to do so at least two years ago, and I know that because I presented you with it, e.g. when he started casually throwing around rape allegations about celebrities on facebook and tagging me in the comments, and then calling me and others nazis. Why do you and your colleagues continue to extensively collaborate with him?
To reiterate, the arguments he makes are not sincere: he only makes them because he thinks the people in question have wronged him.
[disclaimer: I am co-Director at CSER. While much of what I will write intersects with professional responsibilities, it is primarily written from a personal perspective, as this is a deeply personal matter for me. Apologies in advance if that's confusing, this is a distressing and difficult topic for me, and I may come back and edit. I may also delete my comment, for professional or personal/emotional reasons].
I am sympathetic to Halstead's position here, and feel I need to write my own perspective. Clearly to the extent that CSER has - whether directly or indirectly - served to legitimise such attacks by Torres on colleagues in the field, I bear a portion of responsibility as someone in a leadership position. I do not feel it would be right or appropriate for me to speak for all colleagues, but I would like to emphasise that individually I do not, in any way, condone this conduct, and I apologise for it, and for any failings on my individual part that may have contributed.
My personal impression supports the case Halstead makes. Comments about my 'whiteness', and insinuations regarding my 'real' reasons for objecting to positions taken by Torres only came after I objected publicly... (read more)
Addendum: There's a saying that "no matter what side of an argument you're on, you'll always find someone on your side who you wish was on the other side".
There is a seam running through Torres's work that challenges xrisk/longtermism/EA on the grounds of the limitations of being led and formulated by a mostly elite, developed-world community.
Like many people in longtermism/xrisk, I think there is a valid concern here. xrisk/longtermism/EA all started in a combination of elite British universities + US communities (e.g. the Bay Area). They had to start somewhere. I am of the view that they shouldn't stay that way.
I think it's valid to ask whether there are assumptions embedded within these frameworks at this stage that should be challenged, and to posit that these would be challenged most effectively by people with a very different background and perspective. I think it's valid to argue that thinking, planning for, and efforts to shape the long-term future should not be driven by a community that is overwhelmingly from one particular background and that doesn't draw on and incorporate the perspectives of a community that reflects more of global societies and cultures. Work by such... (read more)
Here are ten reasons you might choose to work on near-term causes. The first five are reasons you might think near term work is more important, while the latter five are why you might work on near term causes even if you think long term future work is more important.
... (read more)
You might think the future is likely to be net negative. Click here for why one person initially thought this and here for why another person would be reluctant to support existential risk work (it makes space colonization more likely, which could increase future suffering).
Your view of population ethics might cause you to think existential risks are relatively unimportant. Of course, if your view was merely a standard person affecting view, it would be subject to the response that work on existential risk is high value even if only the present generation is considered. However, you might go further and adopt an Epicurean view under which it is not bad for a person to die a premature death (meaning that death is only bad to the extent it inflicts suffering on oneself or others).
You might have a methodological objection to applying expected value to cases where the probability is small. While the author attributes
I would be very surprised if digital minds work, of all things, ended up PR-costly in relevant ways. Indeed, my sense is that many of the "weird" things you made a call to defund form the heart of the intellectual community responsible for the vast majority of this ecosystem's impact, and I expect it will continue to be the attractor for both funding and talent to many of the world's most important priorities.
An EA community that does not consider whether the minds we aim to control have moral value seems to me like one that has pretty seriously lost its path. Not doing so because some people will walk away with a very shallow understanding of "consciousness" does not seem to me like a good reason to not do that work.
I think you absolutely have a right to care and value your personal reputation, but I do not think your judgement of what would hurt the "movement/EA brand" is remotely accurate here.
I met Australia's Assistant Minister for Defence last Friday. I asked him to write an email to the Minister in charge of AI, asking him to establish an AI Safety Institute. He said he would. He also seemed on board with not having fully autonomous AI weaponry.
All because I sent one email asking for a meeting + had said meeting.
Advocacy might be the lowest hanging fruit in AI Safety.
Reading Lukas_Gloor’s comment (and to a lesser extent, this still helpful one from Erica_Edelman) made me realize what I think is the big disagreement between people and why they are talking past each other.
It comes down to how you would feel about doing Alice/Chloe’s job.
Some people, like the Nonlinear folks and most of those sympathetic to them, think something like the following:
“Why is she such an ungrateful whiner? She has THE dream job/life. She gets to travel the world with us (which is awesome since we can do anything and this is what we chose to do), living in some insanely cool places with super cool and successful people AND she has a large degree of autonomy over what she does AND we are building her up and like 15% of her job is some menial tasks that we did right before she joined and come on it’s fine. How can you complain about the smallest unpleasant thing when the rest of your life rocks and this is your FIRST job out of college when this lifestyle is reserved for multimillionaires? She gets to live the life of a multimillionaire and is surrounded by cool EA people”
Others look at Alice/Chloe’s life and think something like the following:
“Wow,... (read more)
Doing some napkin-math:
That seems like a lot! Maybe I should discount a bit as some of this might be for the new Special Projects team rather than research, but it still seems like it'll be over $100k per research output.
Related questions:
- Do you think the calculations above are broadly correct? If not, could you share what the ballpark figures might actually be? Obviously, this will depend a lot on the size of the project and other factors but averages are still useful!
- If they are correct, how come this number is so high? Is it just due to multiple researchers spending a lot of time per report and making sure it's extremely high-quality? FWIW I think the value of some RP projects is very high - and worth more than the costs above - but I'm still surprised at the costs.
- Is the cost something you're assessing when you decide whether to take on a research project (when it'
... (read more)
Hey Bob - Howie from EV UK here. Thanks for flagging this! I definitely see why this would look concerning so I just wanted to quickly chime in and let you/others know that we’ve already gotten in touch with relevant regulators about this and I don’t think there’s much to worry about here.
The thing going on is that EV UK has an extended filing deadline (from 30 April to 30 June 2023) for our audited accounts,[1] which are one of the things included in our Annual Return. So back in April, we notified the Charity Commission that we’ll be filing our Annual Return by 30 June.
[1] This is due to a covid extension, which the UK government has granted to many companies.
I notice that I am surprised and confused.
I'd have expected Holden to contribute much more to AI existential safety as CEO of Open Philanthropy (career capital, comparative advantage, specialisation, etc.) than via direct work.
I don't really know what to make of this.
That said, it sounds like you've given this a lot of deliberation and have a clear plan/course of action.
I'm excited about your endeavours in the project!
Firstly, I will say that I'm personally not afraid to study and debate these topics, and have done so. My belief is that the data points to no evidence of significant genetic differences between races when it comes to matters such as intelligence, and I think one downside of being hush-hush about the subject is that people miss out on this conclusion, which is the one even a basic Wikipedia skim would get you to. (You're free to disagree; that's not the point of this comment.)
That being said, I think you have greatly understated the case for not debating the subject on this forum. Remember, this is a forum for doing the most good, not a debate club, and if shunting debate of certain subjects onto a different website does the most good, that's what we should do. This requires a cost/benefit analysis, and you are severely understating the costs here.
Point 1 is that we have to acknowledge the obvious fact that when you make a group of people feel bad, some of them are going to leave your group. I do not think this is a moral failing on their part. We have a limited number of hours in the day, would you hang out in a place where people regularly discuss whether you are gene... (read more)
Hi Simon,
I'm back to work and able to reply with a bit more detail now (though also time-constrained as we have a lot of other important work to do this new year :)).
I still do not think any (immediate) action on our part is required. Let me lay out the reasons why:
(1) Our full process and criteria are explained here. As you seem to agree with from your comment above we need clear and simple rules for what is and what isn't included (incl. because we have a very small team and need to prioritize). Currently a very brief summary of these rules/the process would be: first determine which evaluators to rely on (also note our plans for this year) and then rely on their recommendations. We do not generally have the capacity to review individual charity evaluations, and would only do so and potentially diverge from a trusted evaluator's recommendation under exceptional circumstances. (I don't believe we have had such a circumstance this giving season, but may misremember)
(2) There were no strong reasons to diverge with respect to FP's recommendation of StrongMinds at the time they recommended them - or to do an in-depth review of FP's evaluation ourselves - and I think there still aren... (read more)
To be honest I'm relieved this is one of the top comments. I've seen Kathy mentioned a few times recently in a way I didn't think was accurate and I didn't feel able to respond. I think anyone who comes across her story will have questions and I'm glad someone's addressed the questions even if it's just in a limited way.
Without in any sense wanting to take away from the personal responsibility of the people who actually did the unethical, and probably illegal, trading, I think there might be a couple of general lessons here:
1) An attitude of 'I take huge financial risks because I'm trading for others, not myself, and money has approx. 0 diminishing marginal utility for altruism, plus I'm so ethical I don't mind losing my shirt' might sound like a clever idea. But crucially, it is MUCH easier psychologically to think you'll just eat the loss and the attendant humiliation and loss of status, before you are actually facing losing vast sums of money for real. Assuming (as seems likely to me) that SBF started out with genuine good intentions, my guess is this was hard to anticipate because a self-conception as "genuinely altruistic" blocked him from the idea he might do wrong. The same thing probably stopped others hearing about SBF taking on huge risks, which of course he was open* about, from realizing this danger.
2) On reflection, the following is a failure mode for us as a movement combining a lot of utilitarians (and more generally, people who understand that it is *sometimes, in principle... (read more)
Great comment. First comment from new forum member here. Some background: I was EA adjacent for many years, and donated quite a lot of income through an EA organization, and EA people in my community inspired me to go vegan. Still thankful for that. Then I was heavily turned off by the move towards longtermism, which I find objectionable on many grounds (both philosophical and political). This is just to give you some background on where I'm coming from, so read my comment with that in mind.
I would like to pick up on this part: "Assuming (as seems likely to me) that SBF started out with genuine good intentions, my guess is this was hard to anticipate because of a self-conception as "genuinely altruistic" blocked him from the idea he might do wrong". I think this is true, and I think it's crucial for the EA community to reflect on these things going forward. It's the moral licensing or self-licensing effect, which is well described in moral psychology - individuals who are very confident they are doing good may be more likely to engage in bad acts.
I think, however, that the EA community at large in recent years have started to suffer from a kind of intellectual sel... (read more)
I think "it's easy to overreact on a personal level" is an important lesson from covid, but much more important is "it's easy to underreact on a policy level". I.e. given the level of foresight that EAs had about covid, I think we had a disappointingly small influence on mitigating it, in part because people focused too much on making sure they didn't get it themselves.
In this case, I've seen a bunch of people posting about how they're likely to leave major cities soon, and basically zero discussion of whether there are things people can do to make nuclear war overall less likely and/or systematically help a lot of other people. I don't think it's bad to be trying to ensure your personal survival as a key priority, and I don't want to discourage people from seriously analysing the risks from that perspective, but I do want to note that the overall effect is a bit odd, and may indicate some kind of community-level blind spot.
I've seen the time-money tradeoff reach some pretty extreme, scope-insensitive conclusions. People correctly recognize that it's not worth 30 minutes of time at a multi-organizer meeting to try to shave $10 off a food order, but they extrapolate this to it not being worth a few hours of solo organizer time to save thousands of dollars. I think people should probably adopt some kind of heuristic about how many EA dollars their EA time is worth and stick to it, even when it produces the unpleasant/unflattering conclusion that you should spend time to save money.
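To make that concrete, here's a minimal sketch of such a heuristic (the $100/hour rate and both scenarios are purely hypothetical, just for illustration):

```python
# Hypothetical heuristic: value an hour of organizer time at a fixed dollar rate,
# then spend time on a savings opportunity only if the dollars saved per hour clear that bar.
TIME_VALUE_PER_HOUR = 100  # hypothetical rate in EA dollars

def worth_spending_time(dollars_saved: float, hours_required: float) -> bool:
    """Return True if the savings per hour exceed the fixed time value."""
    return dollars_saved / hours_required > TIME_VALUE_PER_HOUR

# Shaving $10 off a food order across 30 minutes of a six-organizer meeting: not worth it.
print(worth_spending_time(dollars_saved=10, hours_required=0.5 * 6))   # False
# A few hours of solo organizer time that saves thousands of dollars: worth it.
print(worth_spending_time(dollars_saved=3000, hours_required=3))       # True
```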
Also want to highlight "For example, we should avoid the framing of ‘people with money want to pay for you to do X’ and replace this with an explanation of why X matters a lot and why we don’t want anyone to be deterred from doing X if the costs are prohibitive" as what I think is the most clearly correct and actionable suggestion here.
I haven't received my invite yet (probably because you left out my first name)
I disagree. It seems to me that the EA community's strength, goodness, and power lie almost entirely in our ability to reason well (so as to actually be "effective", rather than merely tribal/random). It lies in our ability to trust in the integrity of one another's speech and reasoning, and to talk together to figure out what's true.
Finding the real leverage points in the world is probably worth orders of magnitude in our impact. Our ability to think honestly and speak accurately and openly with each other seems to me to be a key part of how we access those "orders of magnitude of impact."
In contrast, our ability to have more money/followers/etc. (via not ending up on the wrong side of a cultural revolution, etc.) seems to me to be worth... something, in expectation, but not as much as our ability to think and speak together is worth.
(There's a lot to work out here, in terms of trying to either do the estimates in EV terms, or trying to work out the decision theory / virtue ethics of the matter. I would love to try to discuss in detail, back and forth, and see if we can work this out. I do not think this should be super obvious in either direction from the get go, although at this point my opinion is pretty strongly in the direction I am naming. Please do discuss if you're up for it.)
Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies."
Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."
Obviously I can't speak for all of EA, or all of Open Phil, and this post is my personal view rather than an institutional one since no single institutional view exists, but for the record, my inside view since 2010 has been "If anyone builds superintelligence under anything close to current conditions, probably everyone dies (or is severely disempowered)," and I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances. (... (read more)
I think it would be phenomenally shortsighted for EA to prioritize its relationship with rationalists over its relationship with EA-sympathetic folks who are put off by scientific racists, given that the latter include many of the policymakers, academics, and professional people most capable of actualizing EA ideas. Most of these people aren't going to risk working/being associated with EA if EA is broadly seen as racist. Figuring out how to create a healthy (and publicly recognized) distance between EAs and rationalists seems much easier said than done, though.
I want to express a few things.
First, empathy, for:
Second, a little local disappointment in OP. At some point in the past it seemed to me like OP was trying pretty hard to be very straightforward and honest. I no longer get that vibe from OP's public comms; they seem more like they're being carefully crafted for something like looking-good or being-defensible while only saying true things. Of course I don’t know all the constraints they’re under so I can’t be sure this is a mistake. But I personally feel a bit sad about it — I think it makes it harder for people to make useful updates from things OP says, which is awkward because I think a bunch of people kind of look to OP for leadership. I don’t think anything is crucially wrong here, but I’m worried about people missing the upside from franker communication, and... (read more)
My understanding (based on talking to people involved in Wytham and knowing the economics of renting and buying large venues in a lot of detail) is that the sale of Wytham (edit: as done here, where the venue will either be sold at a very large discount or lie empty for a long period of time) does not actually make any economic sense for EV in terms of its mission to do as much good as possible. It is plausible that the initial purchase was a mistake, and that it makes sense to set plans in motion to sell the venue, but my understanding is that it will likely take many years for EV to sell, during which the venue will be basically completely empty, or the venue will have to be sold at a pretty huge loss. This means that, at this point, it's likely worth it to keep it running.
Also, based on talking to some of the people close to these decisions, and trying to piece together how this decision was made, it seems very likely to me that the decision to sell Wytham is not based on a cost-effectiveness analysis, but is the result of a PR-management strategy, which seems antithetical to the principles of Effective Altruism to me.
EV (and Open Phil) are supposed to use their assets an... (read more)
I find it interesting and revealing to look at how Nonlinear condensed Chloe's initial account of an incident into a shorter version.
First, here's Nonlinear's shortened version:
... (read more)
Hi Jeff. Thanks for engaging. Three quick notes. (Edit: I see that Peter has made the first already.)
First, and less importantly, our numbers don't represent the relative value of individuals, but instead the relative possible intensities of valenced states at a single time. If you want the whole animal's capacity for welfare, you have to adjust for lifespan. When you do that, you'll end up with lower numbers for animals---though, of course, not OOMs lower.
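(To make the lifespan adjustment concrete, here's a toy sketch; the welfare range and lifespan figures below are made-up placeholders, not anyone's actual estimates:)

```python
def whole_life_capacity(welfare_range: float, lifespan_years: float) -> float:
    """Whole-animal capacity for welfare = per-moment intensity of valenced states x expected lifespan."""
    return welfare_range * lifespan_years

# Purely illustrative placeholder numbers:
human  = whole_life_capacity(welfare_range=1.0, lifespan_years=70)
animal = whole_life_capacity(welfare_range=0.3, lifespan_years=10)

print(animal / human)  # ~0.04: lower than the per-moment 0.3, but not orders of magnitude lower
```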
Second, I should say that, as people who work on animals go, I'm fairly sympathetic to views that most would regard as animal-unfriendly. I wrote a book criticizing arguments for veganism. I've got another forthcoming that defends hierarchicalism. I've argued for hybrid views in ethics, where different rules apply to humans and animals. Etc. Still, I think that conditional on hedonism it's hard to get MWs for animals that are super low. It's easier, though still not easy, on other views of welfare. But if you think that welfare is all that matters, you're probably going to get pretty animal-friendly numbers. You have to invoke other kinds of reasons to really change the calculus (partiality, rights, whatever).
Third, I've been try... (read more)
The problem with Kat’s text is that it’s a very thinly veiled threat to end someone’s career in an attempt to control Nonlinear’s image. There is no context that justifies such a threat.
Shoutout to the 130-ish people in the UK who volunteered to be infected with malaria in two separate studies at various stages of the R21 development process! Those studies helped identify Matrix-M as the ideal adjuvant, and also provided insight into the optimal dose/vaccination schedule.
A man with experience in the London, Bay Area, and online communities:
... (read more)
Hi James,
Thanks for writing this - it's difficult/intimidating to write and post things of this nature on here, and it's also really important and valuable. So thanks for sharing your experience.
Please don't read this response as being critical/dismissive of your experiences - I have no doubt that these dynamics do exist, and that these types of interaction do happen (too frequently) in EA spaces. It makes me unhappy to know that well-intentioned people who want to make a difference in the world are turned off by interacting with some people in the EA community, or attending some EA events.
I do want to say though, for fairness' sake, that as a member of an ethnic, religious, and geographical minority in the EA community, I feel valued and respected. I don't think the attitudes or opinions of the people you're reporting on in your post are that common in the greater community, and the vast majority of the EAs I know would be upset to hear of another EA behaving the way you report they did.
^This anticipates the overall theme of the ideas I had when reading your post: that we make the mistake of thinking about the EA community, and EA events... (read more)
Rather than further praising or critiquing the FTX/Alameda team, I want to flag my concern that the broader community, including myself, made a big mistake in the "too much money" discourse and subsequent push away from earning to give (ETG) and fundraising. People have discussed Open Philanthropy and FTX funding in a way that gives the impression that tens of billions are locked in for effective altruism, despite many EA nonprofits still insisting that they have significant room for more funding. (There has been some pushback, and my impression that the "too much money" discourse has been more prevalent may not be representative.)
I've often heard the marginal ETG amount - the point at which a normal EA employee should be indifferent between EA employment and donating $X per year - put at well above $1,000,000, and I see many working on megaproject ideas designed to absorb as much funding as possible. I think many would say that these choices make sense in a community with >$30 billion in funding, but not one with <$5 billion in funding, just as ballparks to put numbers on things. I think many of us are in fortunate positions to pivot quickly and safely, but for many, especially f... (read more)
Did a test run with 58 participants (I got two attempted repeats):
So you were right, and I'm super surprised here.
There are very expensive interventions that are financially constrained and could use up ~all EA funds, and the cost-benefit calculation takes the probability of powerful AGI in a given time period as an input, so that e.g. twice the probability of AGI in the next 10 years justifies spending twice as much for a given result, by doubling the chance the result gets to be applied. That can make the difference between doing the intervention or not, or lead to drastic differences in intervention size.
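A minimal sketch of that scaling (all numbers hypothetical):

```python
def justified_spend(p_agi_in_window: float, value_if_applied: float) -> float:
    """Maximum spend at which expected benefit still equals cost (benefit/cost ratio of 1)."""
    return p_agi_in_window * value_if_applied

# Doubling the probability of AGI in the window doubles the justified spend on the intervention.
print(justified_spend(p_agi_in_window=0.1, value_if_applied=1e9))  # 100,000,000.0
print(justified_spend(p_agi_in_window=0.2, value_if_applied=1e9))  # 200,000,000.0
```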
Just saying I think this would be a terrible idea, both for HIA and for the movement in general. We very obviously don't want to be associated with lying and manufacturing support. Not to mention it might just get you banned from social media.
On one hand it's clear that global poverty does get the most overall EA funding right now, but it's also clear that it's easier for me to personally get my 20th best longtermism idea funded than to get my 3rd best animal idea or 3rd best global poverty idea funded, and this asymmetry seems important.
This post leaves some dots unconnected.
Are you suggesting that people pretend to have beliefs they don't have in order to have a good career and also shift the Republican party from the inside?
Are you suggesting that anyone can be a Republican as long as they have a couple of beliefs or values that are not totally at odds with those of the Republican party — even if the majority of their beliefs and values are far more aligned with another party?
Or by telling people to join the Republican party, are you suggesting they actively change some of their beliefs or stances in order to fit in, but then focus on shaping the party to be aligned with EA values that it is currently kind of neutral about?
It doesn't seem you're saying the first thing, because you don't say anything about hiding one's true beliefs, and you have the example of the openly left-wing acquaintance who got a job at a conservative NGO.
If you're saying the second thing, I think this is more difficult than you're imagining. I don't mean emotionally difficult because of cold uggies. I mean strategically or practically difficult because participation in certain political parties is generally meant ... (read more)
This list should have karma hidden and entries randomised. I guess most people do not read and vote all the way to the bottom. I certainly didn't the first time I read it.
Reflection on my time as a Visiting Fellow at Rethink Priorities this summer
I was a Visiting Fellow at Rethink Priorities this summer. They’re hiring right now, and I have lots of thoughts on my time there, so I figured that I’d share some. I had some misconceptions coming in, and I think I would have benefited from a post like this, so I’m guessing other people might, too. Unfortunately, I don’t have time to write anything in depth for now, so a shortform will have to do.
Fair warning: this shortform is quite personal and one-sided. In particular, when I tried to think of downsides to highlight to make this post fair, few came to mind, so the post is very upsides-heavy. (Linch’s recent post has a lot more on possible negatives about working at RP.) Another disclaimer: I changed in various ways during the summer, including in terms of my preferences and priorities. I think this is good, but there’s also a good chance of some bias (I’m happy with how working at RP went because working at RP transformed me into the kind of person who’s happy with that sort of work, etc.). (See additional disclaimer at the bottom.)
First, some vague background on me, in case it’s relevant:
- I finished m
... (read more)
Honestly, the biggest benefit to my wellbeing was taking action about depression, including seeing a doctor, going on antidepressants, and generally treating it like a problem that needed to be solved. I really think I might not have done that, or might have done it much later, were it not for EA - EA made me think about things in an outcome-oriented way, and gave me an extra reason to ensure I was healthy and able to work well.
For others: I think that Scott Alexander's posts on anxiety and depression are really excellent and hard to beat in terms of advice. Other things I'd add: I'd generally recommend that your top goal should be ensuring that you're in a healthy state before worrying too much about how to go about helping others; if you're seriously unhappy or burnt out, fixing that first is almost certainly the best altruistic thing you can do. I also recommend maintaining and cultivating a non-EA life: having a multi-faceted identity means that if one aspect of your life isn't going so well, then you can take solace in other aspects.
I don't agree with all of the decisions being made here, but I really admire the level of detail and transparency going into these descriptions, especially those written by Oliver Habryka. Seeing this type of documentation has caused me to think significantly more favorably of the fund as a whole.
Will there be an update to this post with respect to which projects actually get funded following these recommendations? One aspect that I'm not clear on is to what extent CEA will "automatically" follow these recommendations and to what extent there will be significant further review.
Thank you so much for everything you've done. You brought such renewed vigour and vision to Giving What We Can that you ushered it into a new era. The amazing team you've assembled and culture you've fostered will stand it in good stead for the future.
I'd strongly encourage people reading this to think about whether they might be a good choice to lead Giving What We Can forward from here. Luke has put it in a great position, and you'd be working with an awesome team to help take important and powerful ideas even further, helping so many people and animals, now and across the future. Do check that job description and consider applying!
It's hard to say much about the source of funding without leaking too much information; I think I can say that they're a committed EA who has been around the community a while, who I deeply respect and is generally excited to give the community a voice.
FWIW, I think the connection between Manifest and "receiving funding from Manifund or EA Community Choice" is pretty tenuous. Peter Wildeford who you quoted has both raised $10k for IAPS on Manifund and donated $5k personally towards an EA community project. This, of course, does not indicate that Peter supports Manifest to any degree whatsoever; rather, it shows that sharing a funding platform is a very low bar for association.
This makes sense, but if anything the conflict of interest seems more alarming if you're influencing national policy. For example, I would guess that you are one of the people—maybe literally among the top 10?—who stands to personally lose the most money in the event of an AI pause. Are you worried about this, or taking any actions to mitigate it (e.g., trying to convert equity into cash?)
I’ve heard this claim repeatedly, but it’s not true that EA orgs have no whistleblower systems.
I looked into this as part of this project on reforms at EA organizations: Resource on whistleblowing and other ways of escalating concerns
- Many organizations in EA have whistleblower policies, some of which are public in their bylaws (for example, GiveWell and ACE publish their whistleblower policies among other policies). EV US and EV UK have whistleblower policies that apply to all the projects under their umbrella (CEA, 80,000 Hours, etc.) This is just a normal thing for nonprofits; the IRS asks whether you have one even though they don't strictly require it, and you can look up on a nonprofit’s 990 whether they have such a policy.
- Additionally, UK law, state law in many US states, and lots of other countries provide some legal protections for whistleblowers. Legal protection varies by state in the US, but is relatively strong in California.
- Neither government protections nor organizational policies cover all the scenarios where someone might reasonably want protection from ne
... (read more)
I think there is a bit of a tendency to assume that it is appropriate to ask for arbitrary amounts of transparency from EA orgs. I don't think this is a good norm: transparency has costs, often significant, and constantly asking for all kinds of information (often with a tone that suggests that it ought to be presented) is, I think, often harmful.
I do not know Owen. I am, however, a bit worried to see two people in these comments advocating for Owen while this affair does not look good and the facts speak for themselves; there is a certain irony in seeing these two people come to defend Owen while the community health head, Julia, admits to a certain level of bias when handling this affair since he was her friend. It seems that EA people do not learn from the mistakes that are courageously being owned up to here. This post talks about Owen misbehaving: it does not talk about Owen's good deeds. So this kind of comment defeats the point of this post.
Can you put yourself, for two seconds, in the shoes of these women who received unwanted and pressing attention from Owen, with all the power dynamics that are involved, reading comments on how Owen is responsible and a great addition to the community, even after women repeatedly complained about him? What I read is 'He treated me well, so don't be so quick to dismiss him' and 'I've dealt with worse cases, so I can assure you this one is not that bad'.
Do you really think that such attitudes encourage women to speak up? Do you really think that this is the place to do this?
E... (read more)
USAID has announced that they've committed $4 million to fighting global lead poisoning!
USAID Administrator Samantha Power also called other donors to action, and announced that USAID will be the first bilateral donor agency to join the Global Alliance to Eliminate Lead Paint. The Center for Global Development (CGD) discusses the implications of the announcement here.
For context, lead poisoning seems to get ~$11-15 million per year right now, and has a huge toll. I'm really excited about this news.
Also, thanks to @ryancbriggs for pointing out that this seems like "a huge win for risky policy change global health effective altruism" and referencing this grant:
In December 2021, GiveWell (or the EA Funds Global Health and Development Fund?) gave a grant to CGD "to support research into the effects of lead exposure on economic and educational outcomes, and run a working group that will author policy outreach documents and engage with global policymakers." In their writeup, they recorded a 10% "best case" forecast that in two years (by the end of the grant period), "The U.S. government, other international actors (e.g., bilateral and multilateral donors), and/or national ... (read more)
(I edited an earlier comment to include this, but it's a bit buried now, so I wanted to make a new comment.)
I've read most of the post and appendix (still not everything). To be a bit more constructive, I want to expand on how I think you could have responded better (and more quickly):
... (read more)
You say: "This is inaccurate. I don't think there is any evidence that Ben had access to that doesn't seem well-summarized by the two sections above. We had a direct report from Alice, which is accurately summarized in the first quote above, and an attempted rebuttal from Kat, which is accurately summarized in the second quote above. We did not have any screenshots or additional evidence that didn't make it into the post."
Actually, you are mistaken, Ben did have screenshots. I think you just didn't know that he had them. I can send you proof that he had them via DM if you like.
Regarding this: "As Kat has documented herself, she asked Alice to bring Schedule 2 drugs across borders without prescription (whether you need a prescription in the country you buy it is irrelevant, what matters is whether you have one in the country you arrive in), something that can have quite substantial legal consequences (I almost certainly would feel pretty uncomfortable asking my employee to bring prescription medications across borders without appropriate prescription)."
It sounds like you're saying this paragraph by Ben:
"Before she went on vacation, Kat requested that Alice bring a variety of i... (read more)
I’m surprised to hear you say this Habryka: “I think all the specific statements that Ben made in his post were pretty well-calibrated (and still seem mostly right to me after reading through the evidence)”
Do you think Ben was well calibrated/right when he made, for instance, these claims which Nonlinear has provided counter evidence for?
“She [Alice] was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days. Alice eventually gave in and ate non-vegan food in the house” (from my reading of the evidence this is not close to accurate, and I believe Ben had access to the counter evidence at the time when he published)
“Before she went on vacation, Kat requested that Alice bring a variety of illegal drugs across the border for her (some recreational, some for productivity). Alice argued that this would be dangerous for her personally, but Emerson and Kat reportedly argued that it is not dangerous at all and was “absolutely risk-free” (from my reading of the evidence Nonlinear provided, it seems Alice was asked to buy ADHD medicine that they believed was lega... (read more)
Effective giving quick take for giving season
This is quite half-baked because I think my social circle contains not very many E2G folks, but I have a feeling that when EA suddenly came into a lot more funding and the word on the street was that we were “talent constrained, not funding constrained”, some people earning to give ended up pretty jerked around, or at least feeling that way. They may have picked jobs and life plans based on the earn to give model, where it would be years before the plans came to fruition, and in the middle, they lost status and attention from their community. There might have been an additional dynamic where people who took the advice the most seriously ended up deeply embedded in other professional communities, so heard about the switch later or found it harder to reconnect with the community and the new priorities.
I really don’t have an overall view on how bad all of this was, or if anyone should have done anything differently, but I do have a sense that EA has a bit of a feature of jerking people around like this, where priorities and advice change faster than the advice can be fully acted on. The world and the right priorities really do change, thoug... (read more)
Just noting, for people who might not read the book, that there are many more mentions of "effective altruism":
I agree that EA seems often painted as "High IQ immature children", especially from Chapter 6 or 7.
To me, EA also seems painted as kind of a cult[1], where acolytes sacrifice their lives for "the greater good" according to a weird ideology, and people seem to be considered "effective altruists" mostly based on their social connections with the group.
I'm surprised you didn't mention what was for me the spiciest EA quote, from SBF in ~2018:
Same way as this Washington Post article puts it
Bad Things Are Bad: A Short List of Common Views Among EAs
- No, we should not sterilize people against their will.
- No, we should not murder AI researchers. Murder is generally bad. Martyrs are generally effective. Executing complicated plans is generally more difficult than you think, particularly if failure means getting arrested and massive amounts of bad publicity.
- Sex and power are very complicated. If you have a power relationship, consider if you should also have a sexual one. Consider very carefully if you have a power relationship: many forms of power relationship are invisible, or at least transparent, to the person with power. Common forms of power include age, money, social connections, professional connections, and almost anything that correlates with money (race, gender, etc). Some of these will be more important than others. If you're concerned about something, talk to a friend who's on the other side of that from you. If you don't have any, maybe just don't.
- And yes, also, don't assault people.
- Sometimes deregulation is harmful. "More capitalism" is not the solution to every problem.
- Very few people in wild animal suffering think that we should go and deliberately de
... (read more)
Jeff is right: I just returned from my mom's memorial service, which delayed the just-posted FLI statement.
A short note as a moderator (echoing a commenter): People (understandably) have strong feelings about discussions that focus on race, and many of us found the linked content difficult to read. This means that it's both harder to keep to Forum norms when responding to this, and (I think) especially important.
Please keep this in mind if you decide to engage in this discussion, and try to remember that most people on the Forum are here for collaborative discussions about doing good.
If you have any specific concerns, you can also always reach out to the moderation team at forum-moderation@effectivealtruism.org.
If he lied publicly about whether FTX's client assets were invested, then I think this should very much reduce a reasonable person's opinion of him. If lying straightforwardly in public does not count against your character, I don't know what else would.
That said, I don't actually know whether any lying happened here. The real situation seems to be messy, and it's plausible that all of FTX's client assets (and not like derivatives) were indeed not invested, but that the thing that took FTX out was the leveraged derivatives they were selling, which required more advanced risk-balancing, though I do think that Twitter thread looks really quite suspicious right now.
Thanks for the suggestion, Zach!
I did explain to Constance why she was initially rejected as one of the things we discussed on an hour-long call. We also discussed additional information she was considering including, and I told her I thought she was a better fit for EAGx (she said she was not interested). It can be challenging to give a lot of guidance on how to change a specific application, especially in cases where the goal is to “get in”. I worry about providing information that will allow candidates to game the system.
I don’t think this post reflects what I told Constance, perhaps because she disagrees with us. So, I want to stick to the policy for now.
I agree that S-risks are more neglected by EA than extinction risks, and I think the explanation that many people associate S-risks with negative utilitarianism is plausible. I'm a regular utilitarian and I've reached the conclusion that S-risks are quite important and neglected, and I hope this bucks the perception of those focused on S-risks.
Note that it may be hard to give criticism (even if anonymous) about FTX's grantmaking because a lot of FTX's grantmaking is (currently) not disclosed. This is definitely understandable and likely avoids certain important downsides, but it also does amplify other downsides (e.g., public misunderstanding of FTX's goals and outputs) - I'm not sure how to navigate that trade-off, but it is important to acknowledge that it exists!
I mean, sometimes you have reason to make titles into a simple demand, but I wish there were a less weaksauce justification than “because our standards here are no better than anywhere else”.
If a community claims to be altruistic, it's reasonable for an outsider to seek evidence: acts of community altruism that can't be equally well explained by selfish impulses, like financial reward or desire for praise. In practice, that seems to require that community members make visible acts of personal sacrifice for altruistic ends. To some degree, EA's credibility as a moral movement (that moral people want to be a part of) depends on such sacrifices. GWWC pledges help; as this post points out, big spending probably doesn't.
One shift that might help is thinking more carefully about who EA promotes as admirable, model, celebrity EAs. Communities are defined in important ways by their heroes and most prominent figures, who not only shape behaviour internally, but represent the community externally. Communities also have control over who these representatives are, to some degree: someone makes a choice over who will be the keynote speaker at EA conferences, for instance.
EA seems to allocate a lot of its prestige and attention to those it views as having exceptional intellectual or epistemic powers. When we select EA role models and representatives, we seem to optimise for demonstr... (read more)
Your 3 items cover good+top priority, good+not top priority, and bad+top priority, but not #4, bad+not top priority.
I think people concerned with x-risk generally think that progress studies as a program of intervention to expedite growth is going to have less expected impact (good or bad) on the history of the world per unit of effort, and if we condition on people thinking progress studies does more harm than good, then mostly they'll say it's not important enough to focus on arguing against at the current margin (as opposed to directly targeting urgent threats to the world). Only a small portion of generalized economic expansion will go to the most harmful activities (and damage there comes from expediting dangerous technologies in AI and bioweapons that we are improving in our ability to handle, so that delay would help) or to efforts to avert disaster, so there is much more leverage focusing narrowly on the most important areas.
With respect to synthetic biology in particular, I think there is a good case for delay: right now the capacity to kill most of the world's population with bioweapons is not available in known technologies (although huge secret bioweapons programs... (read more)
I'm extremely skeptical of the finding that reading glasses dramatically increase income. After looking into this topic for the past year (having initially been excited by these studies), I would now guess that the significant findings are more a result of experimenter demand effects than of reading glasses.
For example, in the paper on tea pickers you mention, the research team made seven unannounced visits to assess “compliance with study glasses” over the course of the 11-week trial. But we know from past research (e.g. Zwane et al. 2011) that monitoring changes behavior (as evidenced by the uptick in glasses usage). In that case, the estimate from the trial will capture the effect of monitoring + reading glasses which is not an effect we'd ever observe in the real world. The productivity of the control group also increases by 20% between baseline and endline but I don't believe the authors provide any potential explanations for this substantial increase.
Perhaps more tellingly, I've now visited tea growing regions in rural Kenya multiple times this past year. I've observed hospitals giving reading glasses to tea pluckers and I've asked people if they would use them when plucking ... (read more)
I think I'm broadly sympathetic to arguments against EA orgs doing matching, especially for fundraising within EA spaces. But there are some other circumstances I've encountered that these critiques never capture well, and I don't personally feel very negative when I see organizations doing matching due to them.
- There is at least one EA sympathetic major animal welfare donor who historically has preferred most their gifts to be only via matching campaigns. While I think they would likely donate these funds anyway, donating to matches run by them (which I believe are a large percentage of matches you see run by animal orgs) would cause counterfactual donations to that specific animal charity. So at least some percentage of matches you see in EA causes funding to move from a less preferred charity to a more preferred one for the matched donors. This matching donor also gives to many projects EAs might view as less effective, so giving to these matches is frequently similarly good to getting matched by Facebook on EA Giving Tuesday.
- I think a much larger portion of donation matching than people in EA seem to believe is more like EA Giving Tuesday on Facebook than completely illusory — t
... (read more)
Thanks for this update, your leadership, and your hard work over the last year, Zach.
It's great to hear that Mintz's investigation has wrapped (and to hear they found no evidence of knowledge of fraud, though of course I'm not surprised by that). I'm wondering if it would be possible for them to issue an independent statement or comment confirming your summary?
Dear Stephen and the EA community:
Shortly after the early November 2022 collapse of FTX, EV asked me and my law firm, Mintz, to conduct an independent investigation into the relationship between FTX/Alameda and EV. I led our team’s investigation, which involved reviewing tens of thousands of documents and conducting dozens of witness interviews with people who had knowledge about EV’s relationship with FTX and Alameda. As background, I spent 11 years serving as a federal prosecutor in the United States Attorney’s Office for the Southern District of New York, the same USAO that prosecuted Sam Bankman-Fried and the other FTX/Alameda executives.
I can confirm that the statements in Zach Robinson’s post from yesterday, December 13, 2023, about the results of the investigation are 100% true and accurate.
Mintz’s independent investigation found no evidence that anyone at EV knew about the alleged fraudulent criminal conduct at FTX and Alameda. This conclusion was later reinforced by the evidence at this fall’s trial of United States v. Sam Bankman-Fried, where the three cooperating witnesses who had all pled guilty (Caroline Ellis... (read more)
Being mindful of the incentives created by pressure campaigns
I've spent the past few months trying to think about the whys and hows of large-scale public pressure campaigns (especially those targeting companies — of the sort that have been successful in animal advocacy).
A high-level view of these campaigns is that they use public awareness and corporate reputation as a lever to adjust corporate incentives. But making sure that you are adjusting the right incentives is more challenging than it seems. Ironically, I think this is closely connected to specification gaming: it's often easy to accidentally incentivize companies to do more to look better, rather than doing more to be better.
For example, an AI-focused campaign calling out RSPs recently began running ads that single out AI labs for speaking openly about existential risk (quoting leaders acknowledging that things could go catastrophically wrong). I can see why this is a "juicy" lever — most of the public would be pretty astonished/outraged to learn some of the beliefs that are held by AI researchers. But I'm not sure if pulling this lever is really incentivizing the right thing.
As far as I can tell, AI leaders speaking openl... (read more)
I don't really see the "terrible day for EA" part? Maybe you think Nonlinear is more integral to EA as a whole than I do. To me it seems like an allegation of bad behaviour on the part of a notable but relatively minor actor in the space, that doesn't seem to particularly reflect a broader pattern.
I have mixed feelings about this mod intervention. On the one hand, I value the way that the moderator team (including Lizka) play a positive role in making the forum a productive place, and I can see how this intervention plays a role of this sort.
On the other hand:
- Minor point: I think Eliezer is often condescending and disrespectful, and I think it's unlikely that anyone is going to successfully police his tone. I think there's something a bit unfortunate about an asymmetry here.
- More substantially: I think procedurally it's pretty bad that the moderator team act in ways that discourage criticism of influential figures in EA (and Eliezer is definitely such a figure). I think it's particularly bad to suggest concrete specific edits to critiques of prominent figures. I think there should probably be quite a high bar set before EA institutions (like forum moderators) discourage criticism of EA leaders (esp with a post like this that engages in quite a lot of substantive discussion, rather than mere name calling). (ETA: Likewise, with the choice to re-tag this as a personal blogpost, which substantially buries the criticism. Maybe this was the right call, maybe it wasn't, but it cert
... (read more)
I'm not sure yet about my overall take on the piece but I do quibble a bit with this; I think that there are lots of simple steps that CEA/Will/various central actors (possibly including me) could take, if we wished, to push towards centralization. Things like:
I didn't start off writing this comment to be snarky, but I realized that we are, kind of, doing most of these things. Do we intend to? Should we maybe not do them if we think we want to push away from centralization?
I'm not sure what can be shared publicly for legal reasons, but would note that it's pretty tough in board dynamics generally to clearly establish counterfactual influence. At a high level, Holden was holding space for safety and governance concerns and encouraging the rest of the leadership to spend time and energy thinking about them.
I believe the implicit premise of the question is something like "do those benefits outweigh the potential harms of the grant." Personally, I see this as a misunderstanding, i.e. that OP helped OpenAI to come into existence and it might not have happened otherwise. I've gone back and looked at some of the comms around the time (2016) as well as debriefed with Holden and I think the most likely counterfactual is that the time to the next fundraising (2019) and creation of the for-profit entity would have been shortened (due to less financial runway). Another possibility is that the other funders from the first round would have made larger commitments. I give effectively 0% of the probability mass to OpenAI not starting up.
I think that these are good lessons learned, but regarding the last point, I want to highlight a comment by Oliver Habryka:
This seems really important, and while I'm not sure that politics is the mind-killer, I think that the forum and EA in general needs to be really, really careful about the community dynamics. I think that the principal problem pointed out by the recent "Bad Omens" post was peer pressure towards conformity in ways that lead to people acting like jerks, and I think that we're seeing that play out here as well, but involving central people in EA orgs pushing the dynamics, rather than local EA groups. And that seems far more worrying.
So yes, I think there are lots of important lessons learned about politics, but those matter narrowly. And I think that the biggest ... (read more)
Hey Theo - I’m James from the Global Challenges Project :)
Thanks so much for taking the time to write this - we need to think hard about how to do movement building right, and it's great for people like you to flag what you think is going wrong and what you see as pushing people away.
Here’s my attempt to respond to your worries with my thoughts on what’s happening!
First of all, just to check my understanding, this is my attempt to summarise the main points in your post:
My summary of your main points
We’re missing out on great people as a result of how community building is going at student groups. A stronger version of this claim would be that current CB may be selecting against people who could most contribute to current talent bottlenecks. You mention 4 patterns that are pushing people away:
- EA comes across as totalising and too demanding, which pushes away people who could nevertheless contribute to pressing cause areas. (Part 1.1)
- Organisers come across as trying to push particular conclusions to complex questions in a way that is disingenuous and also epistemically unjustified. (Part 1.2)
- EA comes across as cult-like; primarily through appearing to be trying too hard to be persuasiv
... (read more)
I think I agree with the general thrust of your post (that mental health may deserve more attention amongst neartermist EAs), but I don't think the anecdote you chose highlights much of a tension.
> I asked them how they could be so sceptical of mental health as a global priority when they had literally just been talking to me about it as a very serious issue for EAs.
I am excited about improving the mental health of EAs, primarily because I think that many EAs are doing valuable work that improves the lives of others and good mental health is going to help them be more productive (I do also care about EAs being happy as much as I care about anyone being happy, but I expect the value produced from this to be much less than the value produced from the EAs' actions).
I care much less about the productivity benefits that we'd see from improving the mental health of people outside of the EA community (although of course I do think their mental health matters for other reasons).
So the above claim seems pretty reasonable to me.
As an illustration, I can care about EAs having good laptops much more than I care about random people having good laptops: I am much more sceptical that giving random people good laptops would produce impact than I am about giving EAs good laptops.
We could definitely do well to include more people in the movement. For what it's worth, though, EA's core cause areas could be considered among the most important and neglected social justice issues. The global poor, non-human animals, and future generations are all spectacularly neglected by mainstream society, but we (among others) have opted to help them.
You might be interested in the following essays:
This seems a very strange view. Surely an animal rights grantmaking organisation can discriminate in favour of applicants who also care about animals? Surely a Christian grantmaking organisation can condition its funding on agreement with the Bible? Surely a civil rights grantmaking organisation can decide to only donate to people who agree about civil rights?
Unfortunately, a significant part of the situation is that people with internal experience and a negative impression feel both constrained and conflicted (in the conflict of interest sense) for public statements. This applies to me: I left OpenAI in 2019 for DeepMind (hence the "conflicted").
I’ve been on the EA periphery for a number of years but have been engaging with it more deeply for about 6 months. My half-in, half-out perspective, which might be the product of missing knowledge, missing arguments, all the usual caveats but stronger:
Motivated reasoning feels like a huge concern for longtermism.
First, a story: I eagerly adopted consequentialism when I first encountered it for the usual reasons; it seemed, and seems, obviously correct. At some point, however, I began to see the ways I was using consequentialism to let myself off the hook, ethically. I started eating animal products more, and told myself it was the right decision because not doing so depleted my willpower and left me with less energy to do higher impact stuff. Instead, I decided, I’d offset through donations. Similar thing when I was asked, face to face, to donate to some non-EA cause: I wanted to save my money for more effective giving. I was shorter with people because I had important work I could be doing, etc., etc.
What I realized when I looked harder at my behavior was that I had never thought critically about most of these “trade-offs,” not even to check whether they were actually trade-offs! ... (read more)
As the Forum’s lead moderator, I’m posting this message, but it was written collaboratively by several moderators after a long discussion.
As a result of several comments on this post, as well as a pattern of antagonistic behavior, Phil Torres has been banned from the EA Forum for one year.
Our rules say that we discourage, and may delete, "unnecessary rudeness or offensiveness" and "behavior that interferes with good discourse". Calling someone a jerk and swearing at them is unnecessarily rude, and interferes with good discourse.
Phil also repeatedly accuses Sean of lying:
After having seen the material shared by Phil and Sean (who sent us some additional material he didn’t want shared on the Forum), we think the claims in question are open to interpretation but clearly not deliberate lies. ... (read more)
Neither founder seems to have a background in technical AI safety research. Why do you think Nonlinear will be able to research and prioritize these interventions without prior experience or familiarity with technical AI safety research?
Relatedly, wouldn't the organization be better off if it hired a full-time researcher or had a co-founder with a background in technical AI safety research? Is this something you're considering?
It doesn't seem conservative in practice? Like Vasco, I'd be surprised if aiming for reliable global capacity growth would look like the current GHD portfolio. For example:
I'd guess most proponents of GHD would find (1) and (2) particularly bad.
Going forwards, LTFF is likely to be a bit more stringent (~15-20%?[1] Not committing to the exact number) about approving mechanistic interpretability grants than about grants in other subareas of empirical AI Safety, particularly from junior applicants. Some assorted reasons (note that not all fund managers necessarily agree with each of them):
- Relatively speaking, a high fraction of resources and support for mechanistic interpretability comes from other sources in the community other than LTFF; we view support for mech interp as less neglected within the community.
- Outside of the existing community, mechanistic interpretability has become an increasingly "hot" field in mainstream academic ML; we think good work is fairly likely to come from non-AIS motivated people in the near future. Thus overall neglectedness is lower.
- While we are excited about recent progress in mech interp (including some from LTFF grantees!), some of us are suspicious that even success stories in interpretability are that large a fraction of the success story for AGI Safety.
- Some of us are worried about field-distorting effects of mech interp being oversold to junior researchers and other newcomers as necess
... (read more)
Putting this here since this is the active thread on the NL situation. Here's where I currently am:
- I think NL pretty clearly acted poorly towards Alice and Chloe. In addition to what Ozy has in this post, the employment situation is really pretty bad. I don't know how this worked in the other jurisdictions, but Puerto Rico is part of the US and paying someone as an independent contractor when they were really functioning as an employee means you had an employee that you misclassified. And then $1k/mo in PR is well below minimum wage. They may be owed back pay, and consulting an employment lawyer could make sense, though since we're coming up on the two year mark it would be good to move quickly.
- I think some people are sufficiently mature and sophisticated that if they and their employer choose to arrange compensation primarily in kind that's illegal more like jaywalking is illegal than like shoplifting is illegal. But I don't think Alice and Chloe fall into this category.
- Many of the other issues are downstream from the low compensation. For example, if they had wanted to live separately on their own dime to have clearer live/work boundaries that would have eaten up ~all of th
... (read more)
The evidence collected here doesn’t convince me that Alice and Chloe were lying, or necessarily that Ben Pace did a bad job investigating this. I regret contributing another long and involved comment to this discourse, but I feel like “actually assessing the claims” has been underrepresented compared to people going to the meta level, people discussing the post’s rhetoric, and people simply asserting that this evidence is conclusive proof that Alice and Chloe lied.
My process of thinking through this has made me wish more receipts from Alice and Chloe were included in Ben’s post, or even just that more of the accusations had come in their own words, because then it would be clear exactly what they were claiming. (I think their claims being filtered through first Ben and then Kat/Emerson causes some confusion, as others have noted).
I want to talk about some parts of the post and why I’m not convinced. To avoid cherry-picking, I chose the first claim, about whether Alice was asked to travel with illegal drugs (highlighted by Kat as “if you read just one illustrative story, read this one”), and then I used a random number generator to pick two pages in the appendix (following the lead ... (read more)
I read the author's intention, when she makes the case for 'forgiveness as a virtue', as a bid to (1) seem more virtuous herself, and (2) make others more likely to forgive her (since she was so generous to her accusers - at least in that section - and we want to reciprocate generosity). I think this is an effective persuasive writing technique, but is not relevant to the questions at issue (who did what).
Another related 'persuasive writing' technique I spotted was that, in general, Kat is keen to phrase the hypothesis where Nonlinear did bad things in an extreme way - effectively challenging skeptics "so, you saying we're completely evil moustache-twirling vagabonds from out of a children's fairytale?". That's a straw person, because what's at issue is the overall character of Nonlinear staff, not whether they're cartoon villains. The word 'witch' is used 7 times in this post, and 'evil' half a dozen times too. Quote:
> 2 EAs are Secretly Evil Hypothesis: 2 (of 21) Nonlinear employees felt bad because while Kat/Emerson seem like kind, uplifting charity workers publicly, behind closed doors they are ill-intentioned ne’er do wells.
Howie – I suspect you’d rather I don’t write anything, but it feels wrong not to thank you for everything you’ve given to this role and to the organisation over the past year. So I hope you’ll forgive a short (and perhaps biased) message of appreciation.
Over the past year, you have taken EV UK through one of the most challenging periods of its history with extraordinary dedication and leadership. I don’t think there are many people who would have taken on a role like yours in the days after FTX collapsed, and fewer still who could have done the job you did.
Throughout this time, I have continually been impressed with your intellect, inspired by your integrity, and in awe of your unceasing commitment to doing good. And I know for a fact that I’m not the only one.
It’s been a privilege to support you for the past year and I’m delighted that you’ll now have a chance to take a proper break, before throwing yourself into the next challenge.
Thank you for everything.
Just want to flag that I'm really happy to see this. I think that the funding space could really use more labor/diversity now.
Some quick/obvious thoughts:
- Website is pretty great, nice work there. I'm jealous of the speed/performance, kudos.
- I imagine some of this information should eventually be private to donors. Like, the medical expenses one.
- I'd want to eventually see Slack/Discord channels for each regrantor and their donors, or some similar setup. I think that communication between some regranters and their donors could be really good.
- I imagine some regranters would eventually work in teams. From being both on LTFF and seeing the FTX regrantor program, I did kind of like the LTFF policy of vote averaging. Personally, I think I do grantmaking best when working on a team. I think that the "regrantor" could be a "team leader", in the sense that they could oversee people under them.
- As money amounts increase, I'd like to see regranters getting paid. It's tough work. I think we could really use more part-time / full-time work here.
- I think if I were in charge of something like this, I'd have a back-office of coordinated investigations for everyone. Like,... (read more)
Another falsehood to add to the list of corrections the Bulletin needs to make to the article. In the article, Torres writes,
However, one of those scientists, Peter Watson, has recently tweeted that Torres did not contact him about the Bulletin article. Torres responds to this claim with an irrelevant question.
As you can see below, Peter Watson is indeed one of the climate scientists who was thanked. If Watson is correct, then the Bulletin needs to correct Torres's claim to have contacted all the climate scientists who were acknowledged in the book.
[edit: I originally wrote and highlighted "Andrew Watson" instead of Peter Watson. Peter Watson, as you can see below, is also acknowledged]
Tara left CEA to co-found Alameda with Sam. As is discussed elsewhere, she and many others split ways with Sam in early 2018. I'll leave it to them to share more if/when they want to, but I think it's fair to say they left at least in part due to concerns about Sam's business ethics. She's had nothing to do with Sam since early 2018. It would be deeply ironic if, given what actually happened, Sam's actions are used to tarnish Tara.
[Disclosure: Tara is my wife]
I think seeing it as "just putting two people in touch" is narrow. It's about judgement on whether to get involved in a highly controversial commercial deal which was expected to significantly influence discourse norms, and therefore polarisation, in years to come. As far as I can tell, EA overall and Will specifically do not have skills / knowhow in this domain.
Introducing Elon to Sam is not just like making a casual introduction; if everything SBF was doing was based on EA, then this feels like EA wading in on the future of Twitter via the influence of SBFs money.
Introducing Elon to Holden because he wanted to learn more about charity evaluation? Absolutely - that's EA's bread and butter and where we have skills and credibility. But on this commercial deal and subsequent running of Twitter? Not within anyone's toolbox from what I can tell.
I'd like to know the thinking behind this move by Will and anyone else involved. For my part, I think this was unwise and should have had more consultation around it.
I would consider disavowing the community if people start to get more involved in: 1) big potentially world-changing decisions which - to me - it looks like they don't have the wider knowledge or skillset to take on well, or 2) incredibly controversial projects like the Twitter acquisition, and doing so through covert back-channels with limited consultation.
The main assumption of this post seems to be that, not only are the true values of the parameters independent, but a given person's estimates of stages are independent. This is a judgment call I'm weakly against.
Suppose you put equal weight on the opinions of Aida and Bjorn. Aida gives 10% for each of the 6 stages, and Bjorn gives 99%, so that Aida has an overall x-risk probability of 10^-6 and Bjorn has around 94%.
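For concreteness, here is a minimal sketch of the two aggregation methods being compared (assuming the 0.4% figure refers to the geometric mean of odds, which is what the numbers match):

```python
import math

stages = 6
aida_per_stage, bjorn_per_stage = 0.10, 0.99

# Each person's overall x-risk estimate, multiplying six independent stages.
aida = aida_per_stage ** stages    # ~1e-6
bjorn = bjorn_per_stage ** stages  # ~0.94

# Arithmetic mean of the two overall probabilities.
arithmetic = (aida + bjorn) / 2    # ~0.47

# Geometric mean of odds, converted back to a probability.
odds = lambda p: p / (1 - p)
geo_odds = math.sqrt(odds(aida) * odds(bjorn))
geometric = geo_odds / (1 + geo_odds)  # ~0.004

print(f"arithmetic mean: {arithmetic:.1%}, geometric mean of odds: {geometric:.1%}")
```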
These give you vastly different results, 47% vs 0.4%. Which one is right? I think there are two related arguments to be made against the geometric mean, although they don't push me all the way towards using the arithmetic mean:
- Aida and Bjorn's wildly divergent estimates probably come from some underlying diff
... (read more)
I would prefer it quite a lot if this post didn't have me read multiple paragraphs (plus a title) that feel kind of clickbaity and don't give me any information besides "this one opportunity that Effective Altruists ignore that's worth billions of dollars". I prefer titles on the EA Forum to be descriptive and distinct, whereas this title could be written about probably hundreds of posts here.
A better title might be "Why aren't EAs spending more effort on influencing individual donations?" or "We should spend more effort on influencing individual donations".
I enjoyed the book and recommend it to others!
In case of interest to EA forum folks, I wrote a long tweet thread with more substance on what I learned from it and remaining questions I have here: https://twitter.com/albrgr/status/1559570635390562305
This post resonated a lot with me. I was actually thinking of the term 'disillusionment' to describe my own life a few days before reading this.
One cautionary tale I'd offer to readers is not to automatically assume your disillusionment is because of EA; consider the possibility that your disillusionment is a personal problem. Helen suggested leaning into feelings of doubt or assuming the movement is making mistakes. That is good if EA is the main cause, but potentially harmful if the person gets disillusioned in general.
I'm a case study for this. For the past decade, I've been attracted to demanding circles. First it was social justice groups and their infinitely long list of injustices. Then it was EA and its ongoing moral catastrophes. More recently, it's been academic econ debates and their ever growing standards for what counts as truth.
In each instance, I found ways to become disillusioned and to blame my disillusionment on an external cause. Sometimes it was virtue signaling. Sometimes it was elitism. Sometimes it was the people. Sometimes it was whether truth was knowable. Sometimes it was another thing entirely. All my reasons felt incredibly compelling at the time... (read more)
You seem to be jumping to the conclusion that if you don't understand something, it must be because you are dumb, and not because you lack familiarity with community jargon or norms.
For example, take the Yudkowsky doompost that's been much discussed recently. In the first couple of paragraphs, he namedrops people who would be completely unknown outside his specific subfield of work, and expects the reader to know who they are. Then there are a lot of paragraphs like the following:
It doesn't matter if you have an Oxford degree or not, this will be confusing to anyone who has not been steeped in the jargon and worldview of the rationalist subculture. (My PhD in physics is not helpful at all here)
This isn't necessarily bad writing, because the piece is deliberately targeted at people who have been talking with this jargon for years. It would be bad wri... (read more)
Comments on Jacy Reese Anthis' Some Early History of EA (archived version).
Summary: The piece could give the reader the impression that Jacy, Felicifia and THINK played a comparably important role to the Oxford community, Will, and Toby, which is not the case.
I'll follow the chronological structure of Jacy's post, focusing first on 2008-2012, then 2012-2021. Finally, I'll discuss "founders" of EA, and sum up.
2008-2012
Jacy says that EA started as the confluence of four proto-communities: 1) SingInst/rationality, 2) Givewell/OpenPhil, 3) Felicifia, and 4) GWWC/80k (or the broader Oxford community). He also gives honorable mentions to randomistas and other Peter Singer fans. Great - so far I agree.
What is important to note, however, is the contributions that these various groups made. For the first decade of EA, most key community institutions of EA came from (4) - the Oxford community, including GWWC, 80k, and CEA, and secondly from (2), although Givewell seems to me to have been more of a grantmaking entity than a community hub. Although the rationality community provided many key ideas and introduced many key individuals to EA, the institutions that it ran, such as CFAR, were mostl... (read more)
This post is great, thanks for writing it.
I'm not quite sure about the idea that we should have certain demanding norms because they are costly signals of altruism. It seems to me that the main reason to have demanding norms isn't that they are costly signals, but rather that they are directly impactful. For instance, I think that the norm that we should admit that we're wrong is a good one, but primarily because it's directly impactful. If we don't admit that we're wrong, then there's a risk we continue pursuing failed projects even as we get strong evidence that they have failed. So having a norm that counteracts our natural tendency not to want to admit when we're wrong seems good.
Relatedly, and in line with your reasoning, I think that effective altruism should be more demanding in terms of epistemics than in terms of material resources. Again, that's not because that's a better costly signal, but rather because better epistemics likely makes a greater impact difference than extreme material sacrifices do. I developed these ideas here; see also our paper on real-world virtues for utilitarians.
Like other commenters, to back up the tone of this piece, I'd want to see further evidence of these kinds of conversations (e.g., which online circles are you hearing this in?).
That said, it's pretty clear that the funding available is very large, and it'd be surprising if that news didn't get out. Even in wealthy countries, becoming a community builder in effective altruism might just be one of the most profitable jobs for students or early-career professionals. I'm not saying it shouldn't be, but I'd be surprised if there weren't (eventually) conversations like the ones you described. And even if I think "the vultures are circling" is a little alarmist right now, I appreciate the post pointing to this issue.
On that issue: I agree with your suggestions of "what not to do" -- I think these knee-jerk reactions could easily cause bigger problems than they solve. But what are we to do? What potential damage could there be if the kind of behaviour you described did become substantially more prevalent?
Here's one of my concerns: we might lose something that makes EA pretty special right now. I'm an early-career employee who just started working at an EA org. And something that's s... (read more)
This is a side-note, but I dislike the EA jargon terms hinge/hingey/hinginess and think we should use the terms "critical juncture" and "criticalness" instead. These are the common terms used in political science, international relations and other social sciences. They're better theorised and empirically backed than "hingey", don't sound silly, and are more legible to a wider community.
Critical Junctures - Oxford Handbooks Online
The Study of Critical Junctures - JSTOR
https://users.ox.ac.uk/~ssfc0073/Writings%20pdf/Critical%20Junctures%20Ox%20HB%20final.pdf
https://en.wikipedia.org/wiki/Critical_juncture_theory
Thank you for writing and sharing. I think I agree with most of the core claims in the paper, even if I disagree with the framing and some minor details.
One thing must be said: I am sorry you seem to have had a bad experience while writing criticism of the field. I agree that this is worrisome, and makes me more skeptical of the apparent matters of consensus in the community. I do think in this community we can and must do better to vet our own views.
Some highlights:
I am big on the proposal to have more scientific inquiry. Most of the work today on existential risk is married to a particular school of ethics, and I agree it need not be.
On the science side I would be enthusiastic about seeing more work on eg models of catastrophic biorisk infection, macroeconomic analysis on ways artificial intelligence might affect society and expansions of IPCC models that include permafrost methane release feedback loops.
On the humanities side I would want to see for example more work on historical, psychological and anthropological evidence for long term effects and successes and failures in managing global problems like the hole in the ozone layer. I would also like to see surveys ... (read more)
I asked my team about this, and Sky provided the following information. This quarter CEA did a small brand test, with Rethink’s help. We asked a sample of US college students if they had heard of “effective altruism.” Some respondents were also asked to give a brief definition of EA and a Likert scale rating of how negative/positive their first impression was of “effective altruism.”
Students who had never heard of “effective altruism” before the survey still had positive associations with it. Comments suggested that they thought it sounded good - effectiveness means doing things well; altruism means kindness and helping people. (IIRC, the average Likert scale score was 4+ out of 5). There were a small number of critiques too, but fewer than we expected. (Sorry that this is just a high-level summary - we don't have a full writeup ready yet.)
Caveats: We didn't test the name “effective altruism” against other possible names. Impressions will probably vary by audience. Maybe "EA" puts off a small-but-important subsection of the audience we tested on (e.g. unusually critical/free-thinking people).
I don't think this is dispositive - I think that testing other brands might still be a good idea. We're currently considering trying to hire someone to test and develop the EA brand, and help field media enquiries. I'm grateful for the work that Rethink and Sky Mayhew have been doing on this.
What happened was a terrible tragedy and my heart aches for those involved. That said, I'd prefer if there wasn't much content of this type on the Forum. 8 people died in that horrific shooting. If there was a Forum post about every event that killed 8 people, or even just every time 8 people were killed from acts of violence, that might (unfortunately, because there are ways in which the world is a terrible place) dominate the Forum, and make it harder to find and spend time on content relevant to our collective task of finding the levers that will help us help as many people as possible.
I agree that we should attend especially to members of our community who are in a particularly difficult place at a given time, and extend them support and compassion, but felt uneasy about it in this case because of the above, because of Dale's point that the shooting might not have been racially motivated, because Asian EAs I know don't seem bothered, and because I think we should have a high bar for asking everyone in the community to attend to something/asserting that they should (though I'm not sure whether you were doing that/intending to do that).
I don't have a fully-formed gestalt take yet, other than: thanks for writing this.
I do want to focus on 3.2.2 Communication about our work (it's a very Larissa thing to do to have 3 layers of nested headers 🙂). You explain why you didn't prioritize public communication, but not why you restricted access to existing work. Scrubbing yourself from archive.org seems to be an action taken not from a desire to save time communicating, but from a desire to avoid others learning. It seems like that's a pretty big factor that's going on here and would be worth mentioning.
[Speaking for myself, not my employer.]
Unpopular opinion (at least in EA): it not only looks bad, but it is bad that this is the case. Divest!
AI safety donors investing in AI capabilities companies is like climate change donors investing in oil companies or animal welfare donors investing in factory farming (sounds a bit ridiculous when put like that, right? Regardless of mission hedging arguments).
Why are April Fools jokes still on the front page? On April 1st, you expect to see April Fools' posts and know you have to be extra cautious when reading strange things online. However, April 1st was 13 days ago and there are still two April Fools posts on the front page. I think they should be clearly marked as April Fools jokes, so people can more easily tell EA weird stuff from EA weird stuff that's a joke. Sure, if you check the details you'll see that things don't add up, but we all know most people just read the title or first few paragraphs.
Thank you for putting so much effort into helping with this community issue.
What do you think community members should do in situations similar to what Ben and Oliver believed themselves to be in: where a community member believes that some group is causing a lot of harm to the community, and it is important to raise awareness?
Should they do a similar investigation, but better or more fairly? Should they hire a professional? Should we elect a group (e.g., the CEA community health team (or similar)) to do these sorts of investigation?
Insightful and well-argued post!
I found it hard to update throughout this story because the presentation of evidence from both parties was (understandably) biased. As you pointed out, "Sharing Information About Nonlinear" presented sometimes true claims in a way which makes the reader unsympathetic to Nonlinear. Nonlinear's response presented compelling rebuttals in a way which was calculated to increase the reader's sympathy for Nonlinear. Both articles intentionally mix the evidence and the vibes in a way which makes it difficult for readers to separate the two. (I don't blame Nonlinear's response for this as much, since it was tit for tat.)
Thanks again for putting so much time and effort into this, and I'm excited to see what you write next.
I'll just quickly say that my experience of this saga was more like this:
Before BP post: NL are a sort of atypical, low structure EA group, doing entrepreneurial and coordination focused work that I think is probably positive impact.
After BP post: NL are actually pretty exploitative and probably net negative overall. I'll wait to hear their response, but I doubt it will change my mind very much.
After NL post: NL are probably not exploitative. They made some big mistakes (and had bad luck) with some risks they took in hiring and working unconventionally. I think they are probably still likely to have a positive impact on expectation. I think that they have been treated harshly.
After this post: I update to be feeling more confident that this wasn't a fair way to judge NL and that these sorts of posts/investigations shouldn't be a community norm.
Places I think people messed up and where improvement is needed
The Nonlinear Team
Ben Pace
- I think it is pretty reasonable to assume that ~1000-10000 hours and possibly more were spent by the community due to his original post (I am including all the reading and all the
... (read more)
I want to try to paraphrase what I hear you saying in this comment thread, Holly. Please feel free to correct any mistakes or misframings in my paraphrase.
I hear you saying...
- Lightcone culture has a relatively specific morality around integrity and transparency. Those norms are consistent, and maybe good, but they're not necessarily shared by the EA community or the broader world.
- Under those norms, actions like threatening your ex-employees' career prospects to prevent them from sharing negative info about you are very bad, while in broader culture a "you don't badmouth me, I don't badmouth you" ceasefire is pretty normal.
- In this post, Ben is accusing Nonlinear of bad behavior. In particular, he's accusing them of acting particularly badly (compared to some baseline of EA orgs) according to the integrity norms of lightcone culture.
- My understanding is that the dynamic here that Ben considers particularly egregious is that Nonlinear allegedly took actions to silence their ex-employees, and prevent negative info from propagating. If all of the same events had occurred between Nonlinear, Alice, and Chloe, except for Nonlinear suppressing info about what happened after the
... (read more)
Thanks for sharing this - I really appreciate the transparency!
A quick question on the attendees: Are there any other (primarily) animal advocacy-focused folks among the 43 attendees, or is it just Lewis? I don't know the exact breakdown of meta EA efforts across various cause areas, but I would be somewhat surprised if meta animal work was below 2% of all meta EA spending (as is implied by your 1/43 ratio). There are several notable meta EA animal orgs doing work in this space (e.g. Animal Charity Evaluators, EA Animal Welfare Fund, Farmed Animal Funders, Focus Philanthropy and Animal Advocacy Careers), so I'm wondering if Lewis is meant to represent them all? If so, I think that's a pretty tough gig! I'd be curious to hear more about what determined the relative cause area focuses of the attendees, or whether there's some dataset that shows meta EA spending across various cause areas.
(Note: I'm aware there is some overlap between other attendees and animal work e.g. Joey and Charity Entrepreneurship, but it's not their primary focus hence me not including them in my count above).
Re: "In the weeks leading up to that April 2018 confrontation with Bankman-Fried and in the months that followed, Mac Aulay and others warned MacAskill, Beckstead and Karnofsky about her co-founder’s alleged duplicity and unscrupulous business ethics" -
I don't remember Tara reaching out about this, and I just searched my email for signs of this and didn’t see any. I'm not confident this didn't happen, just noting that I can't remember or easily find signs of it.
In terms of what I knew/learned in 2018 more generally, I discuss that here.
Edit: I want to make it clear that I am talking about “genetic” differences not “environmental” differences in this comment. Thanks to titotal for pointing out I wasn’t clear enough. The survey of experts finds that far more experts believe both genetic factors and environmental factors play a role than just environmental factors. I spend the rest of my comment arguing that even if genetic factors play a role, genetic factors are so heavily influenced by environmental factors that we shouldn’t view them as evidence of innate differences in intelligence between races.
I find the repeated use of the term "discredited" to refer to studies on race and IQ on the forum deeply troubling. Yes, some studies will have flaws, but that means you have conversations about the significance of these flaws and respect that reasonable people can disagree about how best to measure complicated issues. It doesn't mean you dismiss everyone who agrees with the standard perspectives of experts in an academic field as racist. My favorite thing about this community is the epistemic humility. We are supposed to be the people who judge studies on their merits, no matter how uncomfortable they... (read more)
Meta: I’m writing on behalf of the Community Health and Special Projects team (here: Community Health team) at CEA to explain how we’re thinking about next steps. For context, our team consists of:
In this comment I’ll sometimes be referring to Effective Ventures (EV) UK and Effective Ventures (EV) US together as the “EV entities” or as Effective Ventures or EV... (read more)
I think it's not quite right that low trust is costlier than high trust. Low trust is costly when things are going well. There's kind of a slow burn of additional cost.
But high trust is very costly when bad actors, corruption or mistakes arise that a low trust community would have preempted. So the cost is lumpier, cheap in the good times and expensive in the bad.
(I read fairly quickly so may have missed where you clarified this.)
I have to say that I don't find these reasons especially convincing. It might help if you clarified exactly who you were speaking for and what you mean by the short-term, i.e., days or weeks?
Legal risk. I am assuming that you are not suggesting that any of these figureheads have done anything illegal. In which case the risk here is a reputational one: they don't want their words dragged into legal proceedings. But that seems like a nebulous possibility, and legal cases like this can take years in any case. Surely you are not saying that they won't address the subject of FTX or SBF over that entire span lest a lawyer quote them? Or am I misreading you somehow?
Lack of information. I agree there's still uncertainty, but there is certainly enough information for the movement to assess its position and to take action. SBF and an inner circle at FTX/Alameda committed a fraud whose basic contours are now well-known, even if the exact timeline, motivations and particulars are not yet filled in. As this forum proves, that raises some blindingly obvious questions about the governance, accountability and culture of the movement.
People are busy. People are always busy, and saying 'I'm too busy' generally means 'I'm choosing not to prioritise this'. It's not an explanation so much as a restatement of an unwillingness to speak.
To be clear, I am not writing this because I think the leadership should try and set out a comprehensive position on the debacle as soon as possible. I don't think that.
Thank you for posting this. It very much speaks to how I’m feeling right now. I'm grateful you've expressed and explained it.
Those accusations seem dramatically more minor and unrelated, and don't update me much at all towards thinking that allegations of mistreatment of employees are more likely.
Excellent post. I hope everybody reads it and takes it onboard.
One failure mode for EA will be over-reacting to black swan events like this that might not carry as much information about our organizations and our culture as we think they do.
Sometimes a bad actor who fools people is just a bad actor who fools people, and they're not necessarily diagnostic of a more systemic organizational problem. They might be, but they might not be.
We should be open to all possibilities at this point, and if EA decides it needs to tweak, nudge, update, or overhaul its culture and ethos, we should do so intelligently, carefully, strategically, and wisely -- rather than in a reactive, guilty, depressed, or self-flagellating panic.
I strongly disagree -- first, because this is dishonest and dishonorable. And second, because I don't think EA should try to have an immaculate brand.
Indeed, I suspect that part of what went wrong in the FTX case is that EA was optimizing too hard for having an immaculate brand, at the expense of optimizing for honesty, integrity, open discussion of what we actually believe, etc. I don't think this is the only thing that was going on, but it would help explain why people with concerns about SBF/FTX kept quiet about those concerns. Because they either were worried about sullying EA's name, or they were worried about social punishment from others who didn't want EA's name sullied.
IMO, trying super hard to never have your brand's name sullied, at the expense of ordinary moral goals like "be honest", tends to sully one's brand far more than if you'd just ignored the brand and prioritized other concerns. Especially insofar as the people you're trying to appeal to are very smart, informed, careful thinkers; you might be able to trick the Median Voter that EA is cool via a shallow PR campaign and attempts to strategically manipulate the narrative, but you'll have a far harder time trickin... (read more)
A couple of hours ago, I tweeted:
Reimbursing people for the money s... (read more)
I generally think it'd be good to have a higher evidential bar for making these kinds of accusations on the forum. Partly, I think the downside of making an off-base sock-puppeting accusation (unfair reputation damage, distraction from object-level discussion, additional feeling of adversarialism) just tends to be larger than the upside of making a correct one.
Fwiw, in this case, I do trust that A.C. Skraeling isn't Zoe. One point on this: Since she has a track record of being willing to go on record with comparatively blunter criticisms, using her own name, I think it would be a confusing choice to create a new pseudonym to post that initial comment.
Hi Tae, thank you so much for writing this post! I’m coordinating WWOTF ads and this is really helpful feedback to get. We’ve thought a lot about the trade-off between reaching potentially interested audiences and not oversaturating those audiences in a way that’s off-putting, and have taken many steps to avoid doing so (most importantly, by not narrowing our target audience so greatly that the same people get bombarded). Ensuring we don’t oversaturate audiences is a key priority.
If it’s alright, I’d love to hear more details about exactly which ads your friend encountered — I’ll contact you via DM. If other people have other relevant experiences that they want to share, please email me at abie@forethought.org — it’s very helpful and very actionable to get feedback right now, since we can adapt and iterate ads in real-time.
I'm a journalist, and would second this as sound advice, especially the 'guide to responding to journalists'. It explains the pressures and incentives/deterrents we have to work with, without demonising the profession... which I was glad to see!
A couple of things I would emphasise (in the spirit of mutual understanding!):
It can help to look beyond the individual journalist to consider the audience we write for, and what our editors' demands might be higher up in the hierarchy. I know many good, thoughtful journalists who work for publications (eg politically partisan newspapers) where they have to present stories the way they do, because that's what their audience/editors demand... There's often so much about the article they, as the reporter, don't control after they file. (Early career journalists in particular have to make these trade-offs, which is worth bearing in mind.)
Often I would suggest it could be helpful to think of yourself as a guide not a gatekeeper. An obvious point... but this space here [waves arms] is all available to journalists, along with much else in the EA world, via podcasts, public google docs etc. There are vast swathes of material that ... (read more)
At the start you say you are going to argue that "the median EAG London attendee will be less COVID-cautious than they would be under ideal epistemic conditions". So, I was expecting you to discuss the health risks of getting covid for EAG attendees (who will predominantly be between 20 and 40 and will ~all have been triple vaccinated). Since you don't do that, your post shouldn't update us at all towards your conclusion.
The IFR for covid for all ages is now below seasonal flu. The risk of death for people attending EAG is extremely small given the likely age and vaccination status of attendees.
It is difficult to work out the effects of long covid, but the most reasonable estimates I have seen put the health cost of long covid as equivalent to 0.02 DALYs, or about a week. (I'm actually pretty sceptical that long covid is real (see eg here))
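As a rough sketch of where "about a week" comes from (assuming the standard conversion of one DALY to one year of healthy life):

```python
# Convert the cited long-covid burden into days of healthy life lost (rough sketch).
dalys_per_case = 0.02             # estimate quoted above
days_lost = dalys_per_case * 365  # ~7.3 days, i.e. about a week
print(f"{days_lost:.1f} days")
```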
For people aged 20-40 who are triple jabbed, the risks of attending EAG are extremely small, I think on the order of getting a cold. They do not justify "the usual spate of NPIs".
There's also the point that covid seems likely to be endemic, so there is little value in a "wait and see" approach.
I temporarily left the EA community in 2018 and that ended up well.
I took a time-out from EA to focus on a job search. I had a job that I wanted to leave, but needed a lot of time and energy to handle all the difficulties that come with a job search. My career path is outside of EA organizations.
How I did it practically:
- I had a clear starting point and wrapped up existing commitments. I stopped and handed over my involvement in local community building and told my peers about the time-out. I donated my entire year's donation budget in February.
- I set myself some rules for what I would and would not do. No events, no volunteering, no interaction with the community. I deleted social media accounts that I only used for EA. I blocked a few websites, most notably 80000hours.org. I would have donated if my time-out took longer, but without any research.
- I did not set an end point. The time-out would be as long as needed. I returned soon after I signed the new contract, 8 months after my starting point. It could have been much longer.
This helped a lot to get the job search done.
I could not, and did not want to, stop aiming for a positive impact on the world. I probably did more good overall than if I had stayed involved in EA during the job search.
I can recommend this to others and my future self in a similar situation.
Everything written in the post above strongly resonates with my own experiences, in particular the following lines:
I think criticism of EA orthodoxy is routinely dismissed. I would like to share a few more stories of being publicly critical of EA in the hope that doing so adds some useful evidence to the discussion:
- Consider systemic change. "Some critics of effective altruism allege that its proponents have failed to engage with systemic change" (source). I have always found the responses (eg here and here) to this critique to be dismissive and to miss the point. Why can we not just say: yes, we are a new community, this area feels difficult, and we are not there yet? Why do we have to pretend EA is perfect and does systemic change stu
... (read more)
It still seems like you have mischaracterised his view. You say "Take for example Bostrom’s “Vulnerable World Hypothesis”17, which argues for the need for extreme, ubiquitous surveillance and policing systems to mitigate existential threats, and which would run the risk of being co-opted by an authoritarian state." This is misleading imo. Wouldn't it have been better to note the clearly important hedging and nuance and then say that he is insufficiently cognisant of the risks of his solutions (which he discusses at length)?
I don’t have time to write a detailed and well-argued response, sorry. Here are some very rough quick thoughts on why I downvoted. Happy to expand on any points and have a discussion.
In general, I think criticisms of longtermism from people who 'get' longtermism are incredibly valuable to longtermists.
One reason is that, if the criticisms carry entirely, you'll save them from basically wasting their careers. Another reason is that you can point out weaknesses in longtermism or in their application of longtermism that they wouldn't have spotted themselves. And a third reason is that in the worlds where longtermism is true, this helps longtermists work out better ways to frame the ideas to not put off potential sympathisers.
Clarity
In general, I found it hard to work out the actual arguments of the book and how they interfaced with the case for longtermism.
Sometimes I found that there were some claims being implied but they were not explicit. So please point out any incorrect inferences I’ve made below!
I was unsure what was being critiqued: longtermism, Bostrom’s views, utilitarianism, consequentialism, or something else.
The thesis of the book (for people readin... (read more)
This post convinced me to sell $200,000 more OpenAI shares than I would otherwise have, in order to have more money available to donate rapidly. Thanks!
I'll ask the obvious awkward question:
Staff numbers are up ~35% this year but the only one of your key metrics that has shown significant movement is "Job Vacancy Clickthroughs".
What do you think explains this? Delayed impact, impact not caught by metrics, impact not scaling with staff - or something else?
When I read your scripts and Rob is interviewing, I like to read Rob’s questions at twice the speed of the interviewees’ responses. Can you accommodate that with your audio version?
I’m the woman who Julia asked on a hunch about her experiences with Owen, and one of the women who Owen refers to when he says there have been four other less egregious occasions where he expressed feelings of attraction that he regrets. I’m sharing my experience with Owen below, because I think it’s probably helpful for people reflecting on this situation (and by default, it would remain confidential indefinitely), but as an FYI, I’m probably unlikely to participate in substantive discussion about it in the comments section. (I’m posting this anonymously because I’d prefer to avoid being pulled into lots of discussions about this in a way that drains my time and emotional energy, not because I’m afraid of retribution from someone or negative consequences for my career.)
- Several years ago, I stayed at Owen’s house for a while while I was visiting Oxford. Owen and I were friends, I had been to his house several times before, and he had previously offered that I could stay there if I was in Oxford. I was working at an EA organization at the time that was not professionally connected to Owen.
- Towards the end of my stay, Owen and I went on a long walk around Oxford, where we ta
... (read more)
I wanted to push back on this because most commenters seem to agree with you. I disagree that the writing style on the EA Forum, on the whole, is bad. Of course, some people here are not the best writers and their writing isn't always that easy to parse. Some would definitely benefit from trying to make their writing easier to understand.
For context, I'm also a non-native English speaker and during high school, my performance in English (and other languages) was fairly mediocre.
But as a whole, I think there are few posts and comments that are overly complex. In fact, I personally really like the nuanced writing style of most content on the EA forum. Also, criticizing the tendency to "overly intellectualize" seems a bit dangerous to me. I'm afraid that if you go down this route you shut down discussions on complex issues and risk creating a more Twitter-like culture of shoehorning complex topics into simplistic tidbits. I'm sure this is not what you want but I worry that this will be an unintended side effect. (FWIW, in the example thread you give, no comment seemed overly complex to me.)
Of course, in the end, this is just my impression and different people have different preferences. It's probably not possible to satisfy everyone.
Hard disagree on Leverage. They've absorbed a tonne of philanthropic funding over the years to produce nothing but pseudoscience and multiple allegations of emotional abuse.
I'm not saying Kerry wouldn't know about this stuff - I think he likely does. I'm saying a) that he was one of the 'top leaders' he refers to, so had ample chance to do something about this himself, b) he has a track record of questionable integrity, and c) he has potential motive to undermine the people he's criticising.
On the topic of feedback... At Triplebyte, where I used to work as an interviewer, we would give feedback to every candidate who went through our technical phone screen. I wasn't directly involved in this, but I can share my observations -- I know some other EAs who worked at Triplebyte were more heavily involved, and maybe they can fill in details that I'm missing. My overall take is that offering feedback is a very good idea and EA orgs should at least experiment with it.
... (read more)
Offering feedback was a key selling point that allowed us to attract more applicants.
As an interviewer, I was supposed to be totally candid in my interview notes, and also completely avoid any feedback during the screening call itself. Someone else in the company (who wasn't necessarily a programmer) would lightly edit those notes before emailing them -- they wanted me to be 100% focused on making an accurate assessment, and leave the diplomacy to others. My takeaway is that giving feedback can likely be "outsourced" -- you can have a contractor / ops person / comms person / intern / junior employee take notes on hiring discussions, then formulate diplomatic but accurate feedback for candidates.
My bo
This seems like a very generous interpretation of her speech to me. I feel like you are seeing what you want to see.
For context, this was a speech given when she came to the UK for the AI Safety Summit, which was explicitly about existential safety. She didn't really have a choice but to mention them unless she wanted to give a major snub to an important US ally, so she did:
... and that's it. That's all she said about existential risks. She then immediately derails the conversation by offering a ser... (read more)
I don't know the full list of sub-areas, so I cannot speak with confidence, but the ones that I have seen defunded so far seem to me like the kind of things that attracted Jed, Vitalik and Jaan. I expect their absence will atrophy the degree to which the world's most ethical and smartest people want to be involved with things.
To be more concrete, I think funding organizations like the Future of Humanity Institute, LessWrong, MIRI, Joe Carlsmith, as well as open investigation of extremely neglected questions like wild animal suffering, invertebrate suffering, decision theory, mind uploads, and macrostrategy research (like the extremely influential Eternity in Six Hours paper) played major roles in people like Jaan, Jed and Vitalik directing resources towards things I consider extremely important, and Open Phil has at points in the past successfully supported those programs, to great positive effect.
In as much as the other resources you are hoping for are things like:
- more serious consideration from the world's smartest people,
- or more people respecting the integrity and honesty of the people working on AI Safety,
- or the ability for people to successfully coordin
... (read more)
You have defined socialism here quite broadly, which may be unhelpful to discussing it, as it can mean anything between
a. A market-based economy with a significant amount of redistribution from the wealthy to the poor and some business regulations for prosocial reasons.
b. A command economy where a centralized government has control over (or attempts to control) almost all aspects of the economy.
In my view, the former may very well be the ideal for developed countries at the moment but I am rather skeptical of the latter.
Epistemic status: not fleshed out
(This comment is not specifically directed to Rebecca's situation, although it does allude to her situation in one point as an example.)
I observe that the powers-that-be could make it less costly for knowledgeable people to come forward and speak out. For example, some people may have legal obligations, such as the duties a board member owes a corporation (extending in some cases to former board members).[1] Organizations may be able to waive those duties by granting consent. Likewise, people may have concerns[2] about libel-law exposure (especially to the extent they have exposure to the world's libel-tourism capital, the UK). Individuals and organizations can mitigate these concerns by, for instance, agreeing not to sue any community member for libel or any similar tort for FTX/SBF-related speech. (One could imagine an exception for suits brought in the United States in which the individual or organization concedes their status as a public figure, and does not present any other claims that would allow a finding of liability witho... (read more)
As the post says above, and as a manager on the team and the person who oversaw the internal review, I'd like to share updates the team has made on its policies based on the internal review we did following the Time article and Owen's statement. (My initial description of the internal review is here.) In general, these changes have been progressing prior to knowing the boards' determinations, though thinking from Zach and the EV legal team has been an important input throughout.
Changes
Overall we spent dozens of hours over multiple calendar months in discussions and doing writeups, both internally to our team and getting feedback from Interim CEA CEO Ben West and others. Several team members did retrospectives or analyses on the case, and we consulted with external people (two EAs with some experience thinking about these topics as well as seven professionals in HR, law, consulting and ombuds) for advice on our processes generally.
From this we created a list of practices to change and additional steps to add. The casework team also reflected on many past cases to check that these changes were robust and applicable across a wide variety of casework.
Our c... (read more)
(My personal views only, and like Nick I've been recused from a lot of board work since November.)
Thank you, Nick, for all your work on the Boards over the last eleven years. You helped steward the organisations into existence, and were central to helping them flourish and grow. I’ve always been impressed by your work ethic, your willingness to listen and learn, and your ability to provide feedback that was incisive, helpful, and kind.
Because you’ve been less in the limelight than me or Toby, I think many people don’t know just how crucial a role you played in EA’s early days. Though you joined shortly after launch, given all your work on it I think you were essentially a third cofounder of Giving What We Can; you led its research for many years, and helped build vital bridges with GiveWell and later Open Philanthropy. I remember that when you launched Giving What We Can: Rutgers, you organised a talk with I think over 500 people. It must still be one of the most well-attended talks that we’ve ever had within EA, and helped the idea of local groups get off the ground.
The EA movement wouldn’t have been the same without your service. It’s been an honour to have worked with you.
I strongly disagree with the claim that the connection to EA and doing good is unclear. The EA community's beliefs about AI have been, and continue to be, strongly influenced by Eliezer. It's very pertinent if Eliezer is systematically wrong and overconfident, because, insofar as there's some level of deferral to Eliezer on AI questions within the EA community (which I think there clearly is), it implies that most EAs should reduce their credence in Eliezer's AI views.
Some quick thoughts on AI consciousness work, I may write up something more rigorous later.
Normally when people have criticisms of the EA movement they talk about its culture or point at community health concerns.
One aspect of EA that makes me sadder is that there seem to be a few extremely important issues on an impartial welfarist view that don't get much attention at all, despite having been identified at some point by some EAs. I do think that EA has done a decent job of pointing at the most important issues relative to basically every other social movement that I'm aware of, but I'm going to complain about one of its shortcomings anyway.
It looks to me like we could build advanced AI systems in the next few years, and in most worlds we have little idea of what's actually going on inside them. The systems may tell us they are conscious, or say that they don't like the tasks we tell them to do, but right now we can't really trust their self-reports. There'll be a clear economic incentive to ignore self-reports that would create a moral obligation to use the systems in less useful/efficient ways. I expect the number of deployed systems to be very large and that it'll be ... (read more)
Thanks for taking the time to write thoughtful criticism. Wanted to add a few quick notes (though note that I'm not really impartial as I'm socially very close with Redwood)
- I personally found MLAB extremely valuable. It was very well-designed and well-taught and was the best teaching/learning experience I've had by a fairly wide margin
- Redwood's community building (MLAB, REMIX and people who applied to or worked at Redwood) has been a great pipeline for ARC Evals and our biggest single source for hiring (we currently have 3 employees and 2 work triallers who came via Redwood community building efforts).
- It was also very useful for ARC Evals to be able to use Constellation office space while we were getting started, rather than needing to figure this out by ourselves.
- As a female person I feel very comfortable in Constellation. I've never felt that I needed to defer or was viewed for my dating potential rather than my intellectual contributions. I do think I'm pretty happy to hold my ground and sometimes oblivious to things that bother other people, so that might not be very strong evidence that it isn't an issue for other people. However, I have been bothered in the pa... (read more)
Joel’s response
[Michael's response below provides a shorter, less-technical explanation.]
Summary
Alex’s post has two parts. First, what is the estimated impact of StrongMinds in terms of WELLBYs? Second, how cost-effective is StrongMinds compared to the Against Malaria Foundation (AMF)? I briefly present my conclusions to both in turn. More detail about each point is presented in Sections 1 and 2 of this comment.
The cost-effectiveness of StrongMinds
GiveWell estimates that StrongMinds generates 1.8 WELLBYs per treatment (17 WELLBYs per $1000, or 2.3x GiveDirectly[1]). Our most recent estimate[2] is 10.5 WELLBYs per treatment (62 WELLBYs per $1000, or 7.5x GiveDirectly). This represents an 83% discount (an 8.7 WELLBY gap)[3] to StrongMinds' effectiveness[4]. These discounts, while sometimes informed by empirical evidence, are primarily subjective in nature. Below I present the discounts, and our response to them, in more detail.
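As a quick check of the arithmetic behind those headline figures (a sketch using only the numbers quoted above):

```python
# WELLBYs per treatment of StrongMinds, as quoted above.
givewell_estimate = 1.8
our_estimate = 10.5

gap = our_estimate - givewell_estimate   # 8.7 WELLBYs
discount = gap / our_estimate            # ~0.83, i.e. an 83% discount
print(f"gap: {gap:.1f} WELLBYs, discount: {discount:.0%}")
```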
Figure 1: Description of GiveWell’s discounts on StrongMinds’ effect, and their source
Notes: The graph shows the factors that make up the 8.7 WELLBY discount.
Table 1: Disagreements on StrongMinds per tre... (read more)
What is the main issue in EA governance then, in your view? It strikes me [I'm speaking in a personal capacity, etc.] that the challenge for EA is a combination of the fact that the resources are quite centralised and that trustees of charities are (as you say) not accountable to anyone. One by itself might be fine. Both together is tricky. I'm not sure where this fits in with your framework, sorry.
There's one big funder (Open Philanthropy), many of the key organisations are really just one organisation wearing different hats (EVF), and these are accountable only to their trustees. What's more, as Buck notes here, all the dramatis personae are quite friendly ("lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies"). Obviously, some people will be in favour of centralised, unaccountable decision-making - those who think it gets the right results - but it's not the structure we expect to be conducive to good governance in general.
If power in effective altruism were decentralised, that is, there were lots of 'buyers' and 'sellers' in the 'EA marketplace', then you'd expect competitive pressure to improve go... (read more)
To try to group/summarize the discussion in the comments and offer some replies:
1. ‘Traders are not thinking about AGI, the inferential distance is too large’; or ‘a short can only profit if other people take the short position too’
(a) Anyone who thinks they have an edge in markets thinks they've noticed something which requires such a large inferential distance that no one else has seen it.
(b) Many financial market participants ARE thinking about these issues.
- Asset manager Cathie Wood has AGI timelines of 6-12 years and is betting the house on that (“AGI could accelerate growth in GDP to 30-50% per year”)
- Masayoshi Son raised $100 billion for Softbank’s Vision Fund on the basis that superintelligence will arrive by 20
... (read more)"Huh, this person definitely speaks fluent LessWrong. I wonder if they read Project Lawful? Who wrote this post, anyway? I may have heard of them.
...Okay, yeah, fair enough."
One thing I definitely believe, and have commented on before[1], is that median EAs (i.e., EAs without an unusual amount of influence) are over-optimising for the image of EA as a whole, which sometimes conflicts with actually trying to do effective altruism. Let the PR people and the intellectual leaders of EA handle that - people outside that group should be focusing on saying what we sincerely believe to be true, and worrying much less about whether someone, somewhere, might call us bad people for saying it. That ship has sailed - there are people out there, by now, who already have the conclusion of "And therefore, EAs are bad people" written down - refusing to post an opinion won't stop them filling in the middle bits with something else, and this was true even before the FTX debacle.
In short - "We should give the money back because it would help EA's image" is, imo, a bad take. "We should give the money back because it would be the right thing to do" is, imo, a much better take, which I won't take a stand on ... (read more)
To the extent that Kerry's allegation involves his own judgment of Sam's actions as bad or shady, I think it matters that there's reason not to trust Kerry's judgment or possibly motives in sharing the information. However we should definitely try to find out what actually happened and determine whether it was truly predictive of worse behavior down the line.
I haven't read the comments and this has probably been said many times already, but it doesn't hurt saying it again:
From what I understand, you've taken significant action to make the world a better place. You work in a job that does considerable good directly, and you donate your large income to help animals. That makes you a total hero in my book :-)
This forum has taken off over the past year. Thanks to all the post authors who have dedicated so much time to writing content for us to read!
At present, it is basically impossible to advance any drug to market without extensive animal testing – certainly in the US, and I think everywhere else as well. The same applies to many other classes of biomedical intervention. A norm of EAs not doing animal testing basically blocks them from biomedical science and biotechnology; among other things, this would largely prevent them from making progress across large swathes of technical biosecurity.
This seems bad – the moral costs of failing to avert biocatastrophe, in my view, hugely outweigh the moral costs of animal testing. At the same time, speaking as a biologist who has spent a lot of time around (and on occasion conducting) animal testing, I do think that mainstream scientific culture around animal testing is deeply problematic, leading to large amounts of unnecessary suffering and a cavalier disregard for the welfare of sentient beings (not to mention a lot of pretty blatantly motivated argumentation). I don't want EAs to fall into that mindset, and the reactions to this comment (and their karma totals) somewhat concern me.
I wouldn't support a norm of EAs not doing animal testing. But I think I would support a norm of EAs ap... (read more)
Thanks a lot for sharing this, Denise. Here are some thoughts on your points.
- On your point about moral realism, I'm not sure how that can be doing much work in an argument against longtermism specifically, as opposed to all other possible moral views. Moral anti-realism implies that longtermism isn't true, but then it also implies that near-termism isn't true. The thought seems to be that there could only be an argument that would give you reason to change your mind if moral realism were true, but if that were true, there would be no point in discussing arguments for and against longtermism because they wouldn't have justificatory force.
- Your argument suggests that you find a person-affecting form of utilitarianism most plausible. But it seems to me we should not reach conclusions about ethics on the basis of what we find intuitively appealing without considering the main arguments for and against these positions. Person-affecting views have lots of very counter-intuitive implications and are actually quite hard to define.
- I don't think it is true that the case for longtermism rests on the total view. As discussed in the Greaves and MacAskill paper, many theories imply longtermism.
- Your
... (read more)
I agree with the spirit of "I currently think we should approve if people bring up the energy to voice honest concerns even if they don’t completely follow the ideal playbook".
However, at first glance I don't find the specific "reasons to not contact an org before" that you state convincing:
- "Lacking time" - I think there are ways that require minimal time commitment. For instance, committing to not (or not substantially) revise the post based on an org's response. I struggle to imagine a situation where someone is able to spend several hours writing a post but then absolutely can't find the 10 minutes required to send an email to the org the post is about.
- "Predicting that private communication will not be productive enough to spend the little time we have at our disposal" - I think this misun
... (read more)
That flag is cool, but here's an alternative that uses some of the same ideas.
The black background represents the vastness of space, and its current emptiness. The blue dot represents our fragile home. The ratio of their sizes represents the importance of our cosmic potential (larger version here).
It's also a reference to Carl Sagan's Pale Blue Dot - a photo taken of Earth, from a spacecraft that is now further from Earth than any other human-made object, and that was the first to leave our solar system.
Sagan wrote this famous passage about the image:
... (read more)
I don't know if you need someone to say this, but:
You can often do more good outside of an EA organisation than inside one. For most people, the EA community is not the only good place to look for grantmaking or research jobs.
If I could be a grantmaker anywhere, I'd probably pick the Gates Foundation or the UK Government's Department for International Development. If I could be a researcher anywhere, I might choose Harvard's Kennedy School of Public Policy or the Institute for Government. None of these are "EA organisations" but they would all most likely allow me to do more good than working at GiveWell. (Although I do love GiveWell and encourage interested applicants to apply!)
Some people already know this and have particular reasons they want to work in an EA organisation, but some don't, so I thought it was worth saying.
In many ways this post leaves me feeling disappointed that 80,000 Hours has turned out the way it did and is so focused on long-term future career paths.
- -
Over the last 5 years I have spent a fair amount of time in conversation with staff at CEA and with other community builders about creating communities and events that are cause-impartial.
This approach is needed for making a community that is welcoming to and supportive of people with different backgrounds, interests and priorities; for making a cohesive community where people with varying cause areas feel they can work together; and where each individual is open-minded and willing to switch causes based on new evidence about what has the most impact.
I feel a lot of local community builders and CEA have put a lot of effort into this aspect of community building.
- -
Meanwhile it seems that 80000 Hours has taken a different tack. They have been more willing, as part of trying to do the most good, to focus on the causes that the staff at 80000 Hours think are most valuable.
Don’t get me wrong, I love 80,000 Hours; I am super impressed by their content and glad to see them doing well. And I think there is a good case to be made fo...
My sense of what is happening regarding discussions of EA and systemic change is:
- Actual EA is able to do assessments of systemic change interventions including electoral politics and policy change, and has done so a number of times
- Empirical data on the impact of votes and on the effectiveness of lobbying and campaign spending work out without any need for fancy decision theory or assumptions of increasing marginal returns
- E.g. Andrew Gelman's data on US Presidential elections shows that, given polling and forecasting uncertainty, a marginal vote in a swing state averages something like a 1 in 10 million chance of swinging an election over multiple elections (and one can save to make campaign contributions...)
- 80,000 Hours has a page (there have been a number of other such posts and discussions; note that 'worth voting' and 'worth buying a vote through campaign spending or GOTV' are two quite different thresholds) discussing this data and approaches to valuing differences in political outcomes between candidates; these suggest that a swing state vote might be worth tens of thousands of dollars of income to rich country citizens (a rough sketch of this arithmetic follows after this list)
- But if one thinks that charities like AMF do 100x or more g...
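A minimal sketch of the expected-value arithmetic being described, assuming illustrative numbers: the 1-in-10-million swing probability mentioned above and a made-up figure for how much better one candidate's win is than the other's. It is not a reproduction of the 80,000 Hours model.

```python
# Rough expected-value sketch for a swing-state vote, using assumed illustrative numbers.
p_swing = 1e-7           # ~1 in 10 million chance that a marginal swing-state vote decides the election
value_difference = 1e11  # assumed: dollar value of the better candidate winning (illustrative only)

expected_value_of_vote = p_swing * value_difference
print(f"Expected value of one swing-state vote: ${expected_value_of_vote:,.0f}")  # $10,000 with these inputs

# The comment's point: if donations to charities like AMF do 100x or more good per dollar
# than transfers to rich-country citizens, the bar for prioritizing electoral work rises accordingly.
```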
Relative to the base rate of how wannabe social movements go, I’m very happy with how EA is going. In particular: it doesn’t spend much of its time on internal fighting; the different groups in EA feel pretty well-coordinated; it hasn’t had any massive PR crises; it’s done a huge amount in a comparatively small amount of time, especially with respect to moving money to great organisations; it’s in a state of what seems like steady, sustainable growth. There’s a lot still to work on, but things are going pretty well.
What I could change historically: I wish we’d been a lot more thoughtful and proactive about EA’s culture in the early days. In a sense the ‘product’ of EA (as a community) is a particular culture and way of life. Then the culture and way of life we want is whatever will have the best long-run consequences. Ideally I’d want a culture where (i) 10% or so of people who interact with the EA community are like ‘oh wow these are my people, sign me up’; (ii) 90% of people are like ‘these are nice, pretty nerdy, people; it’s just not for me’; and (iii) almost no-one is like, ‘wow, these people are jerks’. (On (ii) and (iii): I feel like the Quakers is the sort of thing I’m think...
Hey Vasco, these are my personal thoughts and not FP’s (I have now left FP, and anything FP says should take precedence). I have pretty limited capacity to respond, but a few quick notes—
First, I think it’s totally true that there are some BOTEC errors, many/most of them mine (thank you, GWWC, for spotting them - it’s so crucial to a well-functioning ecosystem and, more selfishly, to improving my skills as a grantmaker. I really value this!).
At the same time, these are hugely rough BOTECs that were never meant to be rigorous CEAs: they were used as tools to enable quick decision-making under limited capacity (I don't take the exact numbers seriously; I expect they're wrong in both directions), with many factors beyond the BOTEC going into grantmaking decisions.
I don’t want to make judgments about whether or not the fund (while I was there) was surpassing GiveWell - I'm super happy to leave this to others. I was focused on funders who would not counterfactually give to GW, meaning that this was less decision-relevant for me.
I think it's helpful to look at the grant history from FP GHDF. Here are all the grants that I think have been made by FP ...
Thank you for spending time analyzing our methods. We appreciate those who are willing to engage with our work and help us improve the accuracy of our recommendations and reduce animal suffering as much as possible.
Based on previously received feedback and internal reflection, we have significantly updated our evaluation methods in the past year and will be publishing the details next Tuesday when we release our charity recommendations for 2024. From what we can tell from a quick skim, we think that our changes largely address Vetted Causes’ concerns here, as well as the detailed feedback we received last year from Giving What We Can (see also our response at the time) as part of their program that evaluates evaluators. Our cost-effectiveness analyses no longer use achievement or intervention scores, but rather directly calculate cost-effectiveness by dividing impact by cost, as you suggest. That being said, our work will never be perfect so we invite anyone reading this with the expertise to improve the rigor of our work to reach out, now or in the future.
Although your comments are related to methods that we no longer use, we’d like to spend more time understanding and e...
The obvious reason to not put too much weight on positive survey results from attendees: the selection effect.
There are surely people (e.g. Peter Wildeford, as he mentioned) who would have contributed to and benefited from Manifest but don't attend because of past and present speaker choices. As others have mentioned, being maximally inclusive will end up excluding people who (justifiably!) don't want to share space with racists. By including people like Hanania, you're making an implicit vote that you'd rather have people with racist views than people who wouldn't attend because of those people. Not a trade I would make.
Please, people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims he has moderated his views, he is still very racist as far as I can tell.
Hanania called for trying to get rid of all non-white immigrants in the US and for the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even after his reform was writing about the need for the state to harass and imprison Black people specifically ('a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people' https://en.wikipedia.org/wiki/Richard_Hanania). Yet in the face of this, and after he made an incredibly grudging apology about his most extreme stuff (after journalists dug it up), he's been invited to Manifold's events and put on Richard Yetter Chappell's blogroll.
DO NOT DO THIS. If you want people to distinguish benign transhumanism (which I agree is a real thing*) from the racist history of eugenics, do not fail to shun actual racists and Nazis. Likewise, if you want to promote "decoupling" factual beliefs from policy recommendations, which ...
I lead the team at GWWC and thought it might help for me to share some quick context, clarifications, and thoughts (sorry for the delay, I was on leave). I've kept this short and in bullet points.
- Firstly, thank you for writing this. I think that broadly you are correct in the view that FTX has done much more damage than is commonly recognised within the EA community; however, I think that this effect is overstated in your post for various reasons (some of which have been outlined by others already in the comments).
- Here is our Growth Dashboard (live metrics, unaudited, but mostly accurate) and a specific monthly graph for when pledges are created (as opposed to their start date which can be any date a pledger chooses, although it is often the day they pledge).
- When you get a bit more granular, you can see that GWWC pledge data can be quite spiky due to (a) large advocacy moments (e.g. the Sam Harris podcast, What We Owe The Future promotion, news articles, etc.) that then tend to cool down over the coming months after the spike; and (b) seasonality (e.g. giving season and New Year's Day) where people tend to pledge or donate at key moments (and we also focus our growth activit...
Yes, unfortunately I've also been hearing negatives about Conjecture, so much so that I was thinking of writing my own critical post (and for the record, I spoke to another non-Omega person who felt similarly). Now that your post is written, I won't need to, but for the record, my three main concerns were as follows:
1. The dimension of honesty, and the genuineness of their business plan. I won't repeat it here, because it was one of your main points, but I don't think it's an acceptable way to run a business to sell your investors on a product-oriented vision for the company while telling EAs that the focus is overwhelmingly on safety.
2. Turnover issues, including the interpretability team. I've encountered at least half a dozen stories of people working at or considering work at Conjecture, and I've yet to hear of any that were positive. This is about as negative a set of testimonials as I've heard about any EA organisation. Some prominent figures like Janus and Beren have left. In the last couple of months, turnover has been especially high - my understanding is that Connor told the interpretability team that they were to work instead on cognitive emulations, and most of them left. Much...
I believe that’s an oversimplification of what Alexander thinks but don’t want to put words in his mouth.
In any case, this is one of the few decisions the 4 of us (including Cari) have always made together, so we have done a lot of aligning already. My current view, which is mostly shared, is that we’re currently underfunding x-risk even without longtermism math, both because FTXF went away and because I’ve updated towards shorter AI timelines in the past ~5 years. And even aside from that, we weren’t at full theoretical budget last year anyway. So that all nets out to an expected increase, not a decrease.
I’d love to discover new large x-risk funders though and think recent history makes that more likely.
A major part of the premise of the OP is something like "the inflammatory nature is a feature, not a bug; sure, you can boil it down to a more sterile sounding claim, but most of the audience will not; they will instead follow the connotation and thus people will essentially 'get away' with the stronger claim that they merely implied."
I think it is a mistake to steelman things like the TIME piece, for precisely this reason, and it's also a mistake to think that most people are steelmanning as they consume it.
So pointing out that it could imply something reasonable is sort of beside the point—it doesn't, in practice.
Chiming in from the EV UK side of things: First, +1 to Nicole’s thanks :)
As you and Nicole noted, Nick and Will have been recused from all FTX-related decision-making. And, Nicole mentioned the independent investigation we commissioned into that.
Like the EV US board, the EV UK board is also looking into adding more board members (though I think we are slightly behind the US board), and plans to do so soon. The board has been somewhat underwater with all the things happening (speaking for myself, it’s particularly difficult because a lot of these things affect my main job at Open Phil too, so there’s more urgent action needed on multiple fronts simultaneously).
(The board was actually planning and hoping to add additional board members even before the fall of FTX, but unfortunately those initial plans had to be somewhat delayed while we’ve been trying to address the most time-sensitive and important issues, even though having more board capacity would indeed help in responding to issues that crop up; it's a bit of a chicken-and-egg dynamic we need to push through.)
Hope this is helpful!
FWIW, the term "talent search" has no connotation of this type to me. To me it just means like, finding top talent, wherever you can find them.
Leaving aside some object-level stuff about Bostrom's views, I still think the apology could be much better without any dishonesty on his part. This is somewhat subjective but things that I think could have been better:
In my opinion it just highlights some basic misunderstandings about communication and our society today, which (I think) were demonstrated by the fairly widespread negative backlash to this incident.
For me, unfortunately, the discourse surrounding Wytham Abbey seems like a sign of epistemic decline in the community, or at least on the EA Forum.
- The amount of attention spent on this seems to be a textbook example of bikeshedding.
- Repeatedly, the tone of the discussion is a bit like "I've read a twee...

Quoting Parkinson: "The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved." A reactor is so vastly expensive and complicated that an average person cannot understand it (see ambiguity aversion), so one assumes that those who work on it understand it. However, everyone can visualize a cheap, simple bicycle shed, so planning one can result in endless discussions because everyone involved wants to implement their own proposal and demonstrate personal contribution.
In the case of EAs, there are complicated, high-stakes things, for example which R&D efforts to support around AI. This has a scale of billions of dollars now, much higher stakes in the future, and there is a lot to understand.
In contrast, absolutely anyone can easily form opinions about the appropriateness of a manor house purchase, based on reading a few tweets.
I think this is a very helpful post.
I think some of the larger, systemically important organisations should have a balance of trustees and/or a board of advisors with relevant mission-critical experience such as risk management, legal, and compliance, depending on the nature of the organisation. I appreciate that senior executives and trustees in these organisations do seek such advice, but often it is too opaque who they consult and which areas the advice covers, and there could be a lack of accountability and a risk of the advisors themselves lacking sufficient knowledge.
I raised this directly a number of years ago, but perhaps it is still inadequate. As noted by others, this becomes more important as we get bigger.
PS: I don’t post much and am not as precise with my choice of words as other forum users.
The last paragraphs in the article itself point to the most glaring issue IMO: loose norms around boards of directors and conflicts of interest (COIs) between funding orgs and grantees. The author presents it as if it's self-evident that the boards were not constructed to be sufficiently independent/objective, and having substantial overlap between the foundation board and the boards of the largest grantees can lead to hazards. These are common industry issues in corporate oversight; I'm curious what policies there are among EA orgs to decrease COIs.
"A significant share of the grants went to groups focused on building the effective altruist movement rather than organizations working directly on its causes. Many of those groups had ties to Mr. Bankman-Fried’s own team of advisers. The largest single grant listed on the Future Fund website was $15 million to a group called Longview, which according to its website counts the philosopher Mr. MacAskill and the chief executive of the FTX Foundation, Nick Beckstead, among its own advisers.
The second-largest grant, in the amount of $13.9 million, went to the Center for Effective Altruism. Mr. MacAskill was a founder of the cent...
I think the point of most non-profit boards is to ensure that donor funds are used effectively to advance the organization's charitable mission. If that's the case, then having donor representation on the board seems appropriate. Why would this represent a conflict of interest? My impression is that this is quite common amongst non-profits and is not considered problematic. (Note that Holden is on ARC's board.)
I'm also not sure this what the NYT author is objecting to. I think they would be equally unhappy with SBF claiming to have donated a lot, but it secretly went to a DAF he controlled that he could potentially use to have influence later. The problem is more like trying to claim credit for good works despite not having actually given up the influence yet, not a COI issue.
(I don't think it's plausible to call "I gave my money to a foundation or DAF, and then I make 100% of the calls about how the foundation donates" a COI issue.)
This actually goes back further, to OpenPhil funding CEA in 2017, with Nick Beckstead as the grant investigator whilst simultaneously being a Trustee of CEA (note that the history of this is now somewhat obscured, given that he later stepped down, but then stepped back up in 2021). The CoI has never been acknowledged or addressed as far as I know. I was surprised that no one seemed to have noticed this (at least publicly), so I (eventually) raised it with Max Dalton (Executive Director of CEA) in March 2021 - at least I anonymously sent a message to his Admonymous. In hindsight, it might've been better to publicly post (e.g. to the EA Forum), but I was concerned about EA's reputation being damaged, and possibly lessening the chances of my own org getting funding (perhaps I was a victim of, or too in the sway of, Ra?). Even now part of me is recognising that this could be seen as "kicking people when they are down", or a betrayal, or might mark me out as a troublemaker, and is causing me to pause [I've sat with this comment for hours; if you're reading it, I must've finally pressed "submit"]. Then again, perhaps now is the right time to be airing concerns, lest they never be aired and improvements...
I don't mean to endorse Holden's actions - they were obviously ill-judged - but this reads as pretty lightweight stuff. He posted a few anonymous comments boosting GiveWell? That is so far away from what it increasingly looks like SBF is responsible for - multi-billion dollar fraud, funneling customer funds to a separate trading entity against trumped-up collateral, and then running an insolvent business, presumably waiting for imminent Series C funding to cover the holes.
Hi all -- Cate Hall from Alvea here. Just wanted to drop in to emphasize the "we're hiring" part at the end there. We are still rapidly expanding and well funded. If in doubt, send us a CV.
FWIW, I've had similar thoughts: I used to think being veg*n was, in some sense, really morally important and not doing it would be really letting the side down. But, after doing it for a few years, I felt much less certain about it.*
To press the point, though, what seems odd about the "the other things I do are so much more impactful, why should I even worry about this?" line is that it has an awkward whisper of self-importance and that it would license all sorts of other behaviours.
To draw this out with a slightly silly and not perfect analogy, imagine we hear a story about some medieval king who sometimes, but not always, kicked people and animals that got in his way. When asked by some brave lackey, "m'lord, but why do you kick them; surely there is no need?" The king replies (imagine a booming voice for best effect) "I am very important and do much good work. Given this, whether I kick or not kick is truly a rounding error, a trifle, on my efforts and I do not propose to pay attention to these consequences".
I think that we might grant that what the king says is true - kicking things is genuinely a very small negative compared to the large positive of his other actions. Howeve...
Hi Jason,
I think your blog and work are great, and I'm keen to see what comes out of Progress Studies.
I wanted to ask a question, and also to comment on your response to another question, which I think has been incorrect since about 2017:
More figures here.
The following is more accurate:
(Though even then, Open Philanthropy has allocated $100m+ to scientific research, which would make it a significant fraction of the portfolio. They've also funded several areas of US policy research aimed at growth.)
However, the reason for less emphasis on economic growth is that the community members who are not focused on global health are mostly focused on longtermism, and have argued it's not the top priority from that perspective. I'm going to try to give a (rather direct) summary of why, and would be interested in your response.
Those focused on longtermism have argued that influencing the trajectory of civilization is far higher value than speeding up progress (e.g. one example of that argument h...
As someone who's spent a lot of time on EA community-building and also on parenting, I'd caution against any strong weighting on "my children will turn out like me / will be especially altruistic." That seems like a recipe for strained relationships. I think the decision to parent should be made because it's important to you personally, not because you're hoping for impact. You can almost certainly have more impact by talking to existing young people about EA or supporting community-building or field-building in some other way than by breeding more people.
I'd also caution against treating adoption as less intensive in time and effort. The process of adopting internationally or from foster care is intensive and often full of uncertainty and disappointment as placements fall through, policies change, etc. And I think the ongoing task of shoring up attachment with an adopted child is significant. (For example, I have a friend who realized her ten-year-old, adopted before he could remember, had somehow developed the belief that his parents would "give him back" at some point and that he was not actually a permanent member of the family. I think this kind of thing is pretty common.) ...
I share your concerns about fandom culture / guru worship in EA, and am glad to see it raised as a troubling feature of the community. I don’t think these examples are convincing, though. They strike me as normal, nice things to say in the context of an AMA, and indicative of admiration and warmth, but not reverence.
I've been thinking a lot about this recently too. Unfortunately I didn't see this AMA until now but hopefully it's not too late to chime in. My biggest worry about SJ in relation to EA is that the political correctness / cancel culture / censorship that seems endemic in SJ (i.e., there are certain beliefs that you have to signal complete certainty in, or face accusations of various "isms" or "phobias", or worse, get demoted/fired/deplatformed) will come to affect EA as well.
I can see at least two ways of this happening to EA:
From your answ...

One thing I often see on the forum is a conflation of 'direct work' and 'working at EA orgs'. These strike me as two pretty different things, where I see 'working at EA orgs' as meaning 'working at an organisation that explicitly identifies itself as EA' and 'direct work' as being work that directly aims to improve lives as opposed to aiming to e.g. make money to donate. My view is that the vast majority of EAs should be doing direct work but not at EA orgs - working in government, at think tanks, in foundations and in influential companies. Conflating these two concepts seems really bad because it encourages people to focus on a very narrow subset of 'direct impact' jobs - those that are at the very few, small organisations which explicitly identify with the EA movement.
A trap I think a lot of us fall into at some time or other is thinking that in order to be a 'good EA' you have to do ALL THE THINGS: have a directly impactful job, donate money to a charity you deeply researched, live frugally, eat vegan etc. When, inevitably, you don't live up to a bunch of these standards, it's easy to assume othe...
FP Research Director here.
I think Aidan and the GWWC team did a very thorough job on their evaluation, and in some respects I think the report serves a valuable function in pushing us towards various kinds of process improvements.
I also understand why GWWC came to the decision they did: to not recommend GHDF as competitive with GiveWell. But I'm also skeptical that any organization other than GiveWell could pass this bar in GHD, since it seems that in the context of the evaluation GiveWell constitutes not just a benchmark for point-estimate CEAs but also a benchmark for various kinds of evaluation practices and levels of certainty.
I think this comes through in three key differences in perspective:
My claim is that, although I'm fairly sure GWWC would not explicitly say "yes" to each of these questions...
We would like to extend our gratitude to Giving What We Can (GWWC) for conducting the "Evaluating the Evaluators" exercise for a second consecutive year. We value the constructive dialogue with GWWC and their insights into our work. While we are disappointed that GWWC has decided not to defer to our charity recommendations this year, we are thrilled that they have recognized our Movement Grants program as an effective giving opportunity alongside the EA Animal Welfare Fund.
Movement Grants
After reflecting on GWWC’s 2023 evaluation of our Movement Grants (MG) program we made several adjustments, all of which are noted in GWWC’s 2024 report. We’re delighted to see that the refinements we made to our program this year have led to grantmaking decisions that meet GWWC’s bar for marginal cost-effectiveness and that they will recommend our MG program on their platform and allocate half of their Effective Animal Advocacy Fund to Movement Grants.
As noted by GWWC, ACE’s MG program is unique in its aims to fund underserved segments of the global animal advocacy movement and address two key limitations to effectiveness within the movement:
- Limited evidence about which interventions a...

As an earn-to-giver, I found contributing to funding diversification challenging
Jeff Kaufman posted a different version of the same argument before I did.
Some have argued that earning to give can contribute to funding diversification. Having a few dozen mid-sized donors, rather than one or two very large donors, would make the financial position of an organization more secure. It allows them to plan for the future and not worry about fundraising all the time.
As an earn-to-giver, I can be one of those mid-sized donors. I have tried. However, it is challenging.
First of all, I don't have expertise, and don't have much time to build the expertise. I spend most of my time on my day job, which has nothing to do with any cause I care about. Any research must be done in my free time. This is fine, but it has some cost. This is time I could have spent on career development, talking to others about effective giving, or living more frugally.
Motivation is not the issue, at least for me. I've found the research extremely rewarding and intellectually stimulating to do. Yet, fun doesn't necessarily translate to effectiveness.
I've seen peer earn-to-givers just defer to GiveWell or other charity eva...
Sean is one of the under-sung heroes who helped build FHI and kept it alive. He did this by--among other things--careful and difficult relationship management with the faculty. I had to engage in this work too and it was less like being between a rock and a hard place and more like being between a belt grinder and another bigger belt grinder.
One can disagree about apportioning the blame for this relationship--and in my mind, I divide it differently than Sean--but after his four years of first-hand experience, my response to Sean is to take his view seriously, listen, and consider it. (And to give it weight even against my 3.5 years of first-hand experience!)
As a tangent, respectfully listening to people's views and expressing gratitude--and avoiding unnecessary blame--was a core part of what allowed ops and admin staff to keep FHI alive for so long against hostile social dynamics. As per Anders' comment posted by Pablo here, it might be useful for extending EA's productive legacy as well.
Sean thank you so much for all you did for FHI.
Is there going to be a post-mortem including an explanation for the decision to sell?
I think it might be helpful to look at a simple case, one of the best cases for the claim that your altruistic options differ in expected impact by orders of magnitude, and see if we agree there? Consider two people, both in "the probably neutral role of someone working a 'bullshit job'". Both donate a portion of their income to GiveWell's top charities: one $100k/y and the other $1k/y. Would you agree that the altruistic impact of the first is, ex-ante, 100x that of the second?
One of the big disputes here is over whether Alice was running her own incubated organization (which she could reasonably expect to spin out) or just another project under Nonlinear. Since Kat cites this as significant evidence for Alice's unreliability, I wanted to do a spot-check.
(Because many of the claims in this response are loosely paraphrased from Ben's original post, I've included a lot of quotes and screenshots to be clear about exactly who said what. Sorry for the length in advance.)
Let's start with claims in Ben's original post:
and
...

I am happy to see that Nick and Will have resigned from the EV Board. I still respect them as individuals, but I think this was a really good call for the EV Board, given their conflicts of interest arising from the FTX situation. I am excited to see what happens next with the Board as well as governance for EV as a whole. Thanks to all those who have worked hard on this.
I'm concerned about EA falling into the standard "risk-averse bureaucracy" failure mode. Every time something visibly bad happens, the bureaucracy puts a bunch of safeguards in place. Over time the drag created by the safeguards does a lot of harm, but because the harm isn't as visible, the bureaucracy doesn't work as effectively to reduce it.
I would like to see Fermi estimates for some of these, including explicit estimates of less-visible downsides. For example, consider EA co-living, including for co-workers. If this was banned universally, my guess is that it would mean EAs paying many thousands of dollars extra in rent for housing and/or office space per month. It would probably lead to reduced motivation, increased loneliness, and wasted commute time among EAs. EA funding would become more scarce, likely triggering Goodharting for EAs who want to obtain funding, or people dying senselessly in the developing world.
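To illustrate the kind of Fermi estimate being asked for, here is a minimal sketch with assumed placeholder numbers (the number of affected EAs and the extra rent per person are my own illustrative assumptions, not figures from the comment or the post); it only shows the shape of the calculation.

```python
# Fermi sketch: rough annual direct cost of a universal ban on EA co-living, using assumed numbers.
affected_people = 300        # assumed: EAs currently sharing housing/office space with colleagues
extra_rent_per_month = 500   # assumed: extra rent in dollars each person would pay living separately
months_per_year = 12

direct_cost = affected_people * extra_rent_per_month * months_per_year
print(f"Assumed direct extra rent: ${direct_cost:,} per year")  # $1,800,000 with these inputs

# Less-visible downsides (commute time, motivation, loneliness) would need their own
# estimates and dollar conversions before weighing the ban against the harms it prevents.
```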
A ban on co-living doesn't seem very cost-effective to me. It seems to me that expanding initiatives like Basefund would achieve something similar, but be far more cost-effective.
I agree that it's best to think of GPT as a predictor, to expect it to think in ways very unlike humans, and to expect it to become much smarter than a human in the limit.
That said, there's an important further question that isn't determined by the loss function alone---does the model do its most useful cognition in order to predict what a human would say, or via predicting what a human would say?
To illustrate, we can imagine asking the model to either (i) predict the outcome of a news story, (ii) predict a human thinking step-by-step about what will happen next in a news story. To the extent that (ii) is smarter than (i), it indicates that some significant part of the model's cognitive ability is causally downstream of "predict what a human would say next," rather than being causally upstream of it. The model has learned to copy useful cognitive steps performed by humans, which produce correct conclusions when executed by the model for the same reasons they produce correct conclusions when executed by humans.
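A minimal sketch of the comparison being described, with illustrative prompts; `sample_continuation` is a hypothetical placeholder for whatever language-model sampling call is available, not a real API.

```python
# Two ways of asking a predictive model about the same news story.
story = "WASHINGTON - Lawmakers reached a tentative budget deal late Thursday, but..."

# (i) Predict the outcome directly: any useful cognition happens *in order to* predict the text.
prompt_direct = story + "\n\nThe follow-up story the next day reported that"

# (ii) Predict a human reasoning step-by-step: the model copies useful cognitive steps,
#      so much of its useful cognition happens *via* predicting what a human would say.
prompt_via_human = story + "\n\nAn analyst thinks step-by-step about what will happen next:\n1."

def sample_continuation(prompt: str) -> str:
    """Hypothetical placeholder for a language-model sampling call (not a real API)."""
    raise NotImplementedError

# Comparing the quality of the two continuations indicates how much of the model's useful
# cognition is downstream of imitating human reasoning versus upstream of it.
```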
(In fact (i) is smarter than (ii) in some ways, because the model has a lot of tacit knowledge about news stories that humans lack, but (ii) is smarter than (i) in other ways,...
Bostrom was essentially still a kid (age ~23) when he wrote the 1996 email. What effect does it have on kids' psychology to think that any dumb thing they've ever said online can and will be used against them in the court of public opinion for the rest of their lives? Given that Bostrom wasn't currently spreading racist views or trying to harm minorities, it's not as though it was important to stop him from doing ongoing harm. So the main justification for socially punishing him would be to create a chilling effect against people daring to spout off flippantly worded opinions going forward. There are some benefits to intimidating people away from saying dumb things, but there are also serious costs, which I think are probably underestimated by those expressing strong outrage.
Of course, there are also potentially huge costs to flippant and crass discussion of minorities. My point is that the stakes are high in both directions, and it's very non-obvious where the right balance to strike is. Personally I suspect the pendulum is quite a bit too far in the direction of trying to ruin people's lives for idiotic stuff they said as kids, but other smart people seem to disagree.
As some othe...
The following is my personal opinion, not CEA's.
If this is true it's absolutely horrifying. FLI needs to give a full explanation of what exactly happened here and I don't understand why they haven't. If FLI did knowingly agree to give money to a neo-Nazi group, that’s despicable. I don't think people who would do something like that ought to have any place in this community.
I don't plan to engage deeply with this post, but I wanted to leave a comment pushing back on the unsubtle currents of genetic determinism ("individuals from those families with sociological profiles amenable to movements like effective altruism, progressivism, or broad Western Civilisational values are being selected out of the gene pool"), homophobia ("cultures that accept gay people on average have lower birth rates and are ultimately outnumbered by neighboring homophobic cultures", in a piece that is all about how low birth rates are a key problem of our time), and ethnonationalism ("based in developed countries that will be badly hit by the results of these skewed demographics") running through this piece.
I believe that genetics influence individual personality, but am very skeptical of claims of strong genetic determinism, especially on a societal level. Moreover, it seems to me that one of the core values of effective altruism is that of impartiality: giving equal moral weight to people who are distant from me in space and/or time. The kind of essentialist and elitist rhetoric common among people who concern themselves with demographic collapse seems in direct opposition to...
Agree. I'd also add that this is a natural effect of the focus EA has put on outreach in universities and to young people. Not to say that the young people are the problem--they aren't, and we are happy to have them. But in prioritizing that, we did deprioritize outreach to mid and late-stage professionals. CEA and grantmakers only had so much bandwidth, and we only had so many people suited to CB/recruiting/outreach-style roles.
We have had glaring gaps for a while in ability to manage people, scale programs, manage and direct projects and orgs, and perform due diligence checks on and advising for EA organisations. In other words, we lack expertise.
I'd say 80K has been somewhat aware of this gap and touched on it lightly, and the community itself has dialled in on the problem by discussing EA recruiters. Yet CEA, funders, and others working on movement-building seem to repeatedly conflate community building with getting more young people to change careers, revealing their priorities, IMO, by what they actually work on.
Open Phil has done this as well. Looking at their Effective Altruism Community Growth focus area, 5 out of the 6 suggestions are focused on young people....
Hi Eli, thank you so much for writing this! I’m very overloaded at the moment, so I’m very sorry I’m not going to be able to engage fully with this. I just wanted to make the most important comment, though, which is a meta one: that I think this is an excellent example of constructive critical engagement — I’m glad that you’ve stated your disagreements so clearly, and I also appreciate that you reached out in advance to share a draft.
Thanks for the detailed update!
There was one expectation / takeaway that I was surprised about.
You mentioned the call was open for three weeks. Would that have been sufficient for people who are not already deeply embedded in EA networks to formulate a coherent and fundable idea (especially if they currently have full-time jobs)? It seems likely that this kind of "get people to launch new projects" effect would require more runway. If so, the data from this round shouldn't update one's priors very much on this question.
Thanks for this post. If true, it does describe a pretty serious concern.
One issue I've always had with the "highly engaged EA" metric is that it's only a measure of alignment,* but the people who are most impactful within EA have both high alignment and high competence. If your recruitment selects only on alignment, this suggests we're at best neutral on competence and at worst (as this post describes) actively selecting against competence.
(I do think the elite university setting mitigates this harm somewhat, e.g. 25th percentile MIT students still aren't stupid in absolute terms).
That said, I think the student group organizers I recently talked to are usually extremely aware of this distinction. (I've talked to a subset of student group organizers from Stanford, MIT, Harvard (though less granularity), UPenn (only one) and Columbia, in case this is helpful). And they tend to operationalize their targets more in terms of people who do good EA research, jobs, and exciting entrepreneurship projects, rather than in terms of just engagement/identification. Though I could be wrong about what they care about in general (as opposed to just when talking with me).
The pet t...
Inner Rings and EA
C. S. Lewis' The Inner Ring is, IMO, a banger. My rough summary: inner rings are the cool club / the important people. People spend a lot of energy trying to be part of the inner rings, and sacrifice things that are truly important.
There are lots of passages that jump out at me with respect to my experience as an EA. I found it pretty tough reading in a way... in how it makes me reflect on my own motivations and actions.
There's a perennial discussion of jargon in EA. I've typically thought of jargon as a trade-off between having more efficient discourse on the one hand, and lower barriers for new people to enter the conversation on the other. Reading this makes me think of jargon more as a mechanism to signal in-group membership.
...

I think that this totally misses the point. The point of this post isn't to inform ACE that some of the things they've done seem bad--they are totally aware that some people think this. It's to inform other people that ACE has behaved badly, in order to pressure ACE and other orgs not to behave similarly in future, and so that other people can (if they want) trust ACE less or be less inclined to support them.
I think this post is fairly uncharitable to ACE, and misrepresents the situations it is describing. My overall take is basically along the lines of "ACE did the right thing in response to a hard situation, and communicated that poorly." Your post really downplays both the comments that the people in question made and actions they took, and the fact that the people in question were senior leadership at a charity, not just random staff.
I also want to note that I've had conversations with several people offline who disagreed pretty strongly with this post, and yet no one has posted major disagreements here. I think the EA Forum is generally fairly anti-social justice, while EAA is generally fairly pro-social justice, so there are norms clashing between the communities.
The blog post
Your main issue seems to be the claim that these harms are linked, but you respond only by saying how you feel reading the quote, which isn't a particularly valuable approach. It seems like it would be much more productive...
To better understand your view, what are some cases where you think it would be right to either ...
but only just?
That is, cases where it's just slightly over the line of being justified.
This post spends a lot of time touting the travel involved in Alice’s and Chloe’s jobs, which seems a bit off to me. I guess some people deeply value living in beautiful and warm locations and doing touristy things year-round, but my impression is that this is not very common. “Tropical paradises” often lack much of the convenience people take for granted in high-income countries, such as quick and easy access to some products and services that make life more pleasant. I also think most people quickly get bored of doing touristy things when it goes beyond a few weeks per year, and value being close to their family, friends, and the rest of their local community. Constantly packing and traveling can also be tiring and stressful, especially when you’re doing it for others.
Putting those things together, it’s plausible that Alice and Chloe eventually started seeing the constant travel as a drawback of the job, rather than as a benefit.
This is the second CEA post to make claims like this without mentioning the FTX fraud.
Wish Swapcard was better?
Swapcard, the networking and scheduling app for EA Global and EAGx events, has published their product roadmap — where anyone can vote on features they want to see!
Two features currently in the "Researching (Vote)" stage have been requested by our attendees since we began using Swapcard for our events:
1) Reschedule a meeting
2) External Calendar Synchronization
If these sound like features you want, I encourage you to take a moment to vote for them! Every vote counts.
Swapcard product roadmap
One example of the evidence we’re gathering
We are working hard on a point-by-point response to Ben’s article, but wanted to provide a quick example of the sort of evidence we are preparing to share:
Her claim: “Alice claims she was sick with covid in a foreign country, with only the three Nonlinear cofounders around, but nobody in the house was willing to go out and get her vegan food, so she barely ate for 2 days.”
The truth (see screenshots below):
Months later, after our relationship deteriorated, she went around telling many people that we starved her. She included details that depict us in a maximally damaging light - what could be more abusive than refusing to care for a sick girl, alone in a foreign country? And if someone told you that, you’d probably believe them, because who would make something like that up?
Evidence
- The screenshots below show Kat offering Alice the vegan food in the house (oatmeal, quinoa, cereal, etc.) on the first day she was sick. Then, when she wasn’t...

It could be that I am misreading or misunderstanding these screenshots, but having read through them a couple of times trying to parse what happened, here's what I came away with:
On December 15, Alice states that she'd had very little to eat all day, that she'd repeatedly tried and failed to find a way to order takeout to their location, and tries to ask that people go to Burger King and get her an Impossible Burger which in the linked screenshots they decline to do because they don't want to get fast food. She asks again about Burger King and is told it's inconvenient to get there. Instead, they go to a different restaurant and offer to get her something from the restaurant they went to. Alice looks at the menu online and sees that there are no vegan options. Drew confirms that 'they have some salads' but nothing else for her. She assures him that it's fine to not get her anything.
It seems completely reasonable that Alice remembers this as 'she was barely eating, and no one in the house was willing to go out and get her vegan food' - after all, the end result of all of those message exchanges was no food being obtained for Alice and her requests for Burger King being rep...
I should also add that this (including the question of whether Alice is credible) is not very important to my overall evaluation of the situation, and I'd appreciate it if Nonlinear spent their limited resources on the claims that I think are most shocking and most important, such as the claim that Woods said "your career in EA would be over with a few DMs" to a former employee after the former employee was rumored to have complained about the company.
I agree that this is a way more important incident, but I downvoted this comment because:
- I don't want to discourage Nonlinear from nitpicking smaller claims. A lot of what worries people here is a gestalt impression that Nonlinear is callous and manipulative; if that impression is wrong, it will probably be because of systematic distortions in many claims, and it will probably be hard to un-convince people of the impression without weighing in on lots of the claims, both major and minor.
- I expect some correlation between "this concern is easier to properly and fully address" and "this concern is more minor", so I think it's normal and to be expected that Nonlinear would start with relatively-minor stuff.
- I do think it's good to state your cruxes, but people's cruxes will vary some; I'd rather that Nonlinear overshare and try to cover everything, and I don't want to locally punis...

I think it's telling that Kat thinks the texts speak in their favor. Reading them was quite triggering for me, because I see a scared person who asks for basic things from the only people she has around her, to help her in a really difficult situation, and who is made to feel like she is asking for too much, has to repeatedly advocate for herself (while sick), and still doesn't get her needs met. On one hand, she is encouraged by Kat to ask for help, but practically it's not happening. Especially Emerson and Drew in that second thread made it sound like she was being difficult, and she was constantly pushed to ask for less or for something else than what she asked for. Seriously, it took 2.5 hours the first day to get a salad, which she didn't want in the first place?! And the second day it's a vegetarian, not vegan, burger.
The way Alice constantly mentions that she doesn't want to bother them and says that things are fine when they clearly are not is very upsetting. I can't speak to how Alice felt, but it's no wonder she reports this as not being helped/fed when she was sick. To me, this is accurate, whether or not she got a salad and a vegetarian burger the next day.
Honestly, the burger...
Thank you! And a few reflections on recognition.
A few days ago, while I sat at the desk in my summer cabin, an unexpected storm swept in. It was a really bad storm, and when it subsided, a big tree had fallen, blocking the road to the little neighborhood where the cabin lies. Some of my neighbors, who are quite senior, needed to get past the tree and could not move it, so I decided to help. I went out with a chainsaw and quad bike, and soon the road was clear.
The entire exercise took me about two hours, and it was an overall pretty pleasurable experience, getting a break from work and being out in nature working with my body. However, afterward I was showered with gratitude, as if I had done something truly praiseworthy. Several neighbors came to thank me, telling me what a very nice young man I was; some even brought small gifts, and I heard people talking about what I had done for days afterward.
This got me thinking.
My first thought: These are very nice people, and it is obviously kind of them to come and thank me. But it seems a little off - when I tell them what I do every day, what I dedicate my life to, most of them nod politely and move on to talk about the weather. It seems...
Thanks for sharing. I think it was brave and I appreciated getting to read this. I'm sorry you've had to go through this and am glad to hear you're feeling optimistic.
I think the title is misleading. Africa is a large continent, and this was just one fellowship of ~15 people (of which I was one). There are some promising things going on in EA communities in Africa. At the same time, and I speak for several people when I say this, EA community building seems quite neglected in Africa, especially given how far purchasing power goes. And many community building efforts to date have been off the mark in one way or another.
I expect this to improve with time. But I think a better barometer of the health of EA in Africa is the communities that have developed around African metropolises (e.g. EA Abuja, EA Nairobi).
I also dislike Fumba being framed to the broader EA community as the perfect compromise. Fumba town was arguably the thing that the residents most disliked. There are a lot of valid reasons as to why the residency took place in Fumba, but this general rosy framing of the residency overlooks the issues it had and, more importantly, the lessons learned from them.
Strong disagree.
Seems kinda strong given this paragraph from Ben: "Perhaps surprisingly, recent polling data from Rethink Priorities indicates that most people still don’t know what EA is, those that do are positive towards it as a brand, overall affect scores haven't noticeably changed post FTX collapse, and only a few percent of respondents mentioned FTX when asked about EA open-ended. It seems like these results hold both in the general US population and amongst students at “elite universities”."
Seems kinda strong given that it was one EA and two(?) other EAs who went along with it.
Seems kinda strong given that I can only think of one leading figure and I'm not even sure I'd call him that.
Right?? Many of us have been depressed for months, but that's just not a sustainable reaction. EA has reached a size and level of visibility now that is sure to keep it continuously embroiled in various controversies and scandals from now on. We can't just mourn and hang our heads in shame for...
One animal welfare advocate told me something like "You EAs are such babies. There are entire organizations devoted to making animal advocacy look bad, sending 'undercover investigators' into organizations to destroy trust, filing frivolous claims and lawsuits to waste time, placing stories in the media which paint us in the worst light possible, etc. Yet EA has a couple of bad months in the press and you all want to give up?"
I found that a helpful reframe.
Isn't that a bit self-aggrandising? I prefer "aspiring EA-adjacent"
To add one more person's impression: I agree with ofer that the apology was "reasonable," I disagree with him that your post "reads as if it was optimized to cause as much drama as possible, rather than for pro-social goals," and I agree with Amber Dawn that the original email is somewhat worse than something I'd have expected most people to have in their past. (That doesn't necessarily mean it deserves any punishment, decades later and with the apology - non-neurotypical people can definitely make a lot of progress between, say, their early twenties and later in life, in understanding how their words affect others and how edginess isn't the same as being sophisticated.)
I think this is one of those "struggles of norms" where you can't have more than one sacred principle, and ofer's and my position is something like "it should be okay to say 'I don't know what's true' on a topic where the truth seems unclear (but not, e.g., something like Holocaust denial)." Because a community that doesn't prioritize truth-seeking will run into massive trouble, so even if there's a sense in which kindness is ultimately more important than truth-seeking (I definitely think so!), it just doesn't make sens...
In the most respectful way possible, I strongly disagree with the overarching direction put forth here. A very strong predictor of engaged participation and retention in advocacy, work, education and many other things in life is the establishment of strong, close social ties within that community.
I think this direction will greatly reduce participation and engagement with EA, and I'm not even sure it will address the valid concerns you mentioned.
I say this despite the fact that I didn't have super close EA friends in the first 3-4 years, and still managed to motivate myself to work on EA stuff as well as successful policy advocacy in other areas. When it comes to getting new people to partake in self-motivated, voluntary social causes/projects, one of the first things I do is make sure they find a friend to keep them engaged, and the likelihood of this is greatly increased if they simply meet more people.
I am also of the opinion that long-term engagement relying on unpaid, ad-hoc community organising is much more unreliable than paid work. I think other organisers will agree when I say: organising a community around EA for the purpose of deeply engaging EAs is time-consuming, and great...
Hey Richard, thanks for starting the discussion! I'd suggest making it easier to submit answers to these questions anonymously e.g. via an anonymous Google Form. I think that will help with opening up the discussion and making the brainstorming more fruitful.
- Specifically, ...

I think this is a good guide, and thank you for writing it. I found the bit on how to phrase event advertising particularly helpful.
One thing I would like to elaborate on is the 'rent-seekers' bit. I'm going to say something that disagrees with a lot of the other comments here. I think we need to be careful about how we approach such 'rent-seeking' conversations. This isn't a criticism of what you wrote, as you explained it really well, but more of a trend I've noticed recently in EA discourse and this is a good opportunity to mention it.
It's important to highlight that not all groups are equal, demographically. I co-lead a group in a city where the child poverty rate has gone from 24% to a whopping 42% in 5 years, and which remains one of the poorest cities in the UK. I volunteer my time at a food bank and can tell you that it's never been under stronger demand. Simply put, things are tough here. One of the things I am proudest of in our EA group is that we've done a load of outreach to people who face extra barriers to participating in academia and research, and as a result we have a group with a great range of life backgrounds. I'm sure it's not the only EA group to achie...
I don’t think (or, you have not convinced me that) it’s appropriate to use CEA’s actions as strong evidence against Jacy. There are many obvious pragmatic justifications to do so that are only slightly related to the factual basis of the allegations—I.e., even if the allegations are unsubstantiated, the safest option for a large organization like CEA would be to cut ties with him regardless. Furthermore, saying someone has “incentives to lie” about their own defense also feels inappropriate (with some exceptions/caveats), since that basically applies to almost every situation where someone has been accused. The main thing that you mentioned which seems relevant is his “documented history of lying,” which (I say this in a neutral rather than accusatory way) I haven’t yet seen documentation of.
Ultimately, these accusations are concerning, but I’m also quite concerned by the idea of throwing around seemingly dubious arguments in service of vilifying someone.
I think there's a lot of truth to the points made in this post.
I also think it's worth flagging that several of them: networking with a certain subset of EAs, asking for 1:1 meetings with them, being in certain office spaces - are at least somewhat zero sum, such that the more people take this advice, the less available these things will actually be to each person, and possibly on net if it starts to overwhelm. (I can also imagine increasingly unhealthy or competitive dynamics forming, but I'm hoping that doesn't happen!)
Second flag is that I don't know how many people reading this can expect to have an experience similar to yours. They may, but they may not end up being connected in all the same ways, and I want people to go in knowing that they are taking that risk and to decide whether it's worth it for them.
On the other side, people taking this advice can do a lot of great networking and creating a common culture of ambition and taking ideas seriously with each other, without the same set of expectations around what connections they'll end up making.
Third flag is that I have a not-fully-fleshed-out worry that this advice funges against doing things outside Berkeley/SF that are more valuable c...
A few thoughts on the democracy criticism. Don't a lot of the criticisms here apply to the IPCC? "A homogenous group of experts attempting to directly influence powerful decision-makers is not a fair or safe way of traversing the precipice." IPCC contributors are disproportionately white very well-educated males in the West who are much more environmentalist than the global median voter, i.e. "unrepresentative of humanity at large and variably homogenous in respect to income, class, ideology, age, ethnicity, gender, nationality, religion, and professional background." So, would you propose replacing the IPCC with something like a citizen's assembly of people with no expertise in climate science or climate economics, that is representative wrt some of the demographic features you mention?
You say that decisions about which risks to take should be made democratically. The implication of this seems to be that everyone, and not just EAs, who is aiming to do good with their resources should donate only to their own government. Their govt could then decide how to spend the money democratically. Is that implication embraced? This would eg include all climate philanthropy, which is now at $5-9bn per year.
I note the rider says it's not directed at regular forum users/people necessarily familiar with longtermism.
The Torres critiques are getting attention in non-longtermist contexts, especially with people not very familiar with the source material being critiqued. I expect to find myself linking to this post regularly when discussing with academic colleagues who have come across the Torres critiques; several sections (the "missing context/selective quotations" section in particular) effectively demonstrate places where the critiques do not represent the source material entirely fairly.
I'll kick things off!
This month, I finished in second place at the Magic: the Gathering Grand Finals (sort of like the world championship). I earned $20,000 in prize money and declared that I would donate half of it to GiveWell, which gave me an excuse to talk up EA on camera for thousands of live viewers and post about it on Twitter.
This has been a whirlwind journey for me; I did unexpectedly well in a series of qualifying tournaments. Lots of luck was involved. But I think I played well, and I've been thrilled to see how interested my non-EA followers are in hearing about charity stuff (especially when I use Magic-related metaphors to explain cause prioritization).
Thank you for looking into the numbers! While I don't have a strong view on how representative the EA Leaders forum is, taking the survey results about engagement at face value doesn't seem right to me.
On the issue of long-termism, I would expect people who don't identify as long-termists to now report being less engaged with the EA Community (especially with the 'core') and to identify as EA less. Long-termism has become a dominant orientation in the EA Community, which might put people off it, even if their personal views and actions related to doing good haven't changed, e.g. their donation amounts and career plans. The same goes for looking at how long people have been involved with EA - people who aren't compelled by long-termism might have dropped out of identifying as EA without actually changing their actions.
I don't really agree with your second and third point. Seeing this problem and responding by trying to create more 'capital letter EA jobs' strikes me as continuing to pursue a failing strategy.
What (in my opinion) the EA Community needs is to get away from this idea of channelling all committed people to a few organisations - the community is growing faster* than the organisations, and those numbers are unlikely to add up in the mid term.
Committing all our people to a few organisations seriously limits our impact in the long run. There are plenty of opportunities to have a large impact out there - we just need to appreciate them and pursue them. One thing I would like to see is stronger profession-specific networks in EA.
It's catastrophic that new and long-term EAs now consider their main EA activity to be to apply for the same few jobs instead of trying to increase their donations or investing in non-'capital letter EA' promising careers.
But this is hardly surprising given past messaging. The only reason EA organisations can get away with having very expensive hiring rounds for the applicants is because there are a lot of strongly committed people out there willing to take on that cost. Organisations cannot get away with this in most of the for-profit sector.
*Though this might be slowing down somewhat, perhaps because of this 'being an EA is applying unsuccessfully for the same few jobs' phenomenon.
Hi Alexey,
I appreciate that you’ve taken the time to consider what I’ve said in the book at such length. However, I do think that there’s quite a lot that’s wrong in your post, and I’ll describe some of that below. Though I think you have noticed a couple of mistakes in the book, I think that most of the alleged errors are not errors.
I’ll just focus on what I take to be the main issues you highlight, and I won’t address the ‘dishonesty’ allegations, as I anticipate it wouldn’t be productive to do so; I’ll leave that charge for others to assess.
tl;dr:
- Of the main issues you refer to, I think you've identified two mistakes in the book: I left out a caveat in my summary of the Baird et al (2016) paper, and I conflated overhead costs and CEO pay in a way that, on the latter aspect, was unfair to Charity Navigator.
- In neither case are these errors egregious in the way you suggest. I think that: (i) claiming that Baird et al (2016) should cause us to believe that there is ‘no effect’ on wages is a misrepresentation of that paper; (ii) my core argument against Charity Navigator, regarding their focus on ‘financial efficiency’ metrics like overhead costs, is both successful and accurat... (read more)
Yarvin didn't attend.
Also, my sense for Chau is that one of the top reasons he was invited was that he was up for doing a debate with Holly. I personally think one should extend something like "diplomatic immunity" to people from opposing communities if they are participating in a kind of diplomatic role. Facilitating any kind of high-bandwidth negotiation between e/acc people and AI-x-risk-concerned people seems quite valuable to me, and I e.g. think Manifest should probably invite Sam Altman to debate others on safety if he is up for it, for similar reasons, despite me finding him otherwise quite despicable.
(I don't have a strong take on Hanania. It seems pretty plausible to me based on things other people have said that he should be excluded, but I have learned to take things like that with a grain of salt without checking myself)
Re: attack surface in my earlier comment, I actually meant attacks from EAs. People want to debate the borders, quite understandably. I have folks in my DMs as well as in the comments. Q: "Why did we not communicate more thoroughly on the forum?"
A: "Because we've communicated on the forum before"
I don't think endorse vs. not endorse describes everything here, but it describes some of it. I do think I spend some energy on ~every cause area, and if I am lacking conviction, that is a harder expenditure from a resource I consider finite.
An example of a non-monetary cost where I have conviction: anxiety about potential retribution from our national political work. This is arguably not even EA (and not new), but it is a stressful side hustle we have this year. I had hoped it wouldn't be a recurring thing, but here we are.
An example of a non-monetary cost where I have less conviction: the opportunity cost of funding insect welfare instead of chicken, cow, or pig welfare. I think I could be convinced, but I haven't been yet and I've been thinking about it a long time! I'd much prefer to just see someone who actually feels strongly about that take the wheel. It is not a lot of $s in itself, bu... (read more)
Hi, last organizer here, wanted to give my take.
Overall, I’m sympathetic to the point this post is making.
This is tricky because I think I could defend the choice to have any of the individual controversial speakers. Some of them, e.g. Simone and Malcolm Collins, simply do not hold racist views. Sure, they can be edgy and inflammatory — they act this way on the internet strategically as far as I can tell, and it’s not my style. But they’re not scientific racists. Embryo selection has nothing to do with race or reproductive coercion and oppression. Plus they are particularly generous, friendly, and engaging in person, which means they are particularly value-adding as attendees. Others of them, e.g. Brian Chau, I don’t like the style or opinions of about basically anything (though I admit I've hardly engaged with his stuff). I've seen him write about race and gender in a way I perceive to be unnecessarily inflammatory, and like, mean? And I think he’s wrong and doing a lot of harm with the AI stuff. But he came to do a debate with Holly Elmore about acceleration vs. pause. It was a very popular session, and I heard from an AI safety friend I respect a lot that he learned a lot about ... (read more)
Yeah, I don't necessarily mind an informal tone. But the reality is, I read [edit: a bit of] the appendix doc and I'm thinking, "I would really not want to be managed by this team and would be very stressed if my friends were being managed by them. For an organisation, this is really dysfunctional." And not in an, "understandably risky experiment gone wrong" kind of way, which some people are thinking about this as, but in a, "systematically questionable judgement as a manager" way. Although there may be good spin-off convos around, "how risky orgs should be" and stuff. And maybe the point of this post isn't to say, "nonlinear did a reasonably sufficient job managing employees and can expect to do so in the future" but rather, "I feel slandered and lied about and I want to share my perspective."
Just want to signal my agreement with this.
My personal guess is that Kat and Emerson acted in ways that were significantly bad for the wellbeing of others. My guess is also that they did so in a manner that calls for them to take responsibility: to apologise, reflect on their behaviour, and work on changing both their environment and their approach to others to ensure this doesn't happen again. I'd guess that they have committed a genuine wrongdoing.
I also think that Kat and Emerson are humans, and this must have been a deeply distressing experience for them. I think it's possible to have an element of sympathy and understanding towards them, without this undermining our capacity to also be supportive of people who may have been hurt as a result of Kat and Emerson's actions.
Showing this sort of support might require that we think about how to relate with Nonlinear in the future. It might require expressing support for those who suffered and recognising how horrible it must have been. It might require that we think less well of Kat and Emerson. But I don't think it requires that we entirely forget that Kat and Emerson are humans with human emotions and that this must be pretty diffi... (read more)
I’m Chana, a manager on the Community Health team. This comment is meant to address some of the things Ben says in the post above as well as things other commenters have mentioned, though very likely I won’t have answered all the questions or concerns.
High level
I agree with some of those commenters that our role is not always clear, and I’m sorry for the difficulties that this causes. Some of this ambiguity is intrinsic to our work, but some is not, and I would like people to have a better sense of what to expect from us, especially as our strategy develops. I'd like to give some thoughts here that hopefully give some clarity, and we might communicate more about how we see our role in the future.
For a high level description of our work: We aim to address problems that could prevent the effective altruism community from fulfilling its potential for impact. That looks like: taking seriously problems with the culture, and problems from individuals or organizations; hearing and addressing concerns about interpersonal or organizational issues (primarily done by our community liaisons); thinking about community-wide problems and gaps and occasionally trying to fill those;... (read more)
I'm struggling to see how releasing information already provided to the investigation would obstruct it. A self-initiated investigation is not a criminal, or even a civil, legal process -- I am much less inclined to accept it as an adequate justification for a significant delay, especially where potentially implicated people have not been put on full leaves of absence.
Given the TIME article, I thought I should give you all an update. Even though I have major issues with the piece, I don’t plan to respond to it right now.
Since my last shortform post, I’ve done a bunch of thinking, updating and planning in light of the FTX collapse. I had hoped to be able to publish a first post with some thoughts and clarifications by now; I really want to get it out as soon as I can, but I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 months, and I’m still sufficiently unsure on timing that I don’t want to make any promises on that front. I’m sorry about that: I’m aware that this will be very frustrating for you; it’s frustrating for me, too.
And possibly with those numbers humans shouldn't be dating in general, ignoring EA?
There is absolutely a point to "ritually chanting you did wrong at Owen." It's the same point underlying why a lot of EA leaders issued statements condemning FTX and it's the reason I'm commenting on this post at all: There are a lot of people, particularly women, who are viewing the comment section of this post to see how we as a community respond to allegations like these and deciding whether this is a safe and welcoming space for them. I know because I spent most of my workday yesterday speaking to at least 6 of them, 1 of whom was in floods of tears. For most of yesterday, the 2nd to top comment thanked Owen (and essentially told him to take a break before coming back and running boards again) and the top comment thanked him for doing the right thing. I have to say that undermined my ability to emphasize that the community doesn't condone this type of behavior. I'm not into retributive justice (I think it's pretty gross actually) but there are very good reasons to send a solid signal here and people are watching to see if we do.
This is not a fair description. The way people get such statistics is by assuming all accusations are true unless there is strong evidence against them, but there are a large number of cases with no strong evidence either way, and researchers should not just assume they are all true.
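To make the arithmetic concrete, here is a minimal sketch with invented numbers (purely illustrative, not drawn from any study): the headline rate depends almost entirely on how the unresolved middle is classified.

```python
# Purely illustrative numbers - not from any real dataset.
# Suppose a study sorts reports into three buckets:
confirmed_false = 30       # strong evidence the report was false
confirmed_true = 350       # strong evidence the report was true
no_strong_evidence = 620   # unresolved either way
total = confirmed_false + confirmed_true + no_strong_evidence

# Counting only confirmed-false cases (i.e. treating every unresolved
# case as true) yields the low-end figure:
low_end = confirmed_false / total
# Treating every unresolved case as false yields the high end:
high_end = (confirmed_false + no_strong_evidence) / total

print(f"{low_end:.1%} to {high_end:.1%}")  # 3.0% to 65.0%
```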
A good first place to start is the Wikipedia article on the subject, which features a wide range of estimates, almost all of which are higher than the 2-3% you cite, some of which are dramatically higher.
https://en.wikipedia.org/wiki/False_accusation_of_rape
Alexander also has a good blog post on this:
https://slatestarcodex.com/2014/02/17/lies-damned-lies-and-social-media-part-5-of-%E2%88%9E/
Your own website lists a slightly higher range, 2-4%
[redacted]
If we look at the source you supply for the 2% we see a different story:
... (read more)
Your post does not actually say this, but when I read it I thought you were saying that these are all the organizations that have received major funding in technical alignment. I think it would have been clearer if you had said "include the following organizations based in the San Francisco Bay Area:" to signal that you're discussing a subset.
Anyway, here are the public numbers, for those curious, of $1 million+ grants in technical AI safety in 2021 and 2022 (ordered by total size) made by Open Philanthropy:
The Alignment Research Center received much less: $265,000.
There isn't actually any public grant saying that Open Phil funded Anthropic. However, that isn't to say that they couldn't have made a non-public grant. It was p... (read more)
I found this clear and reassuring. Thank you for sharing
I don't really like this thing where you speak on behalf of black EAs.
I think you should let black EAs speak for themselves or not comment on it.
In my experience, there seem to be distortionary epistemic effects when someone speaks on behalf of a minority group. Often, the person speaking assigns the group harms, injustices or offenses that its members may not actually endorse.
When it's done on my behalf, I find it pretty patronising, and it's annoying/icky?
I don't want to speak for black EAs but it's not clear to me that the "hurt" you mention is actually real.
Earlier this year ARC received a grant for $1.25M from the FTX foundation. We now believe that this money morally (if not legally) belongs to FTX customers or creditors, so we intend to return $1.25M to them.
It may not be clear how to do this responsibly for some time depending on how bankruptcy proceedings evolve, and if unexpected revelations change the situation (e.g. if customers and creditors are unexpectedly made whole) then we may change our decision. We'll post an update here when we have a more concrete picture; in the meantime we will set aside the money and not spend it.
We feel this is a particularly straightforward decision for ARC because we haven't spent most of the money and have other supporters happy to fill our funding gap. I think the moral question is more complex for organizations that have already spent the money, especially on projects that they wouldn't have done if not for FTX, and who have less clear prospects for fundraising.
(Also posted on our website.)
This article from The Wall Street Journal suggests that what happened was more like "taking funds from customers with full knowledge" than like a mistake:
(See also this article by The New York Times, which describes the same video meeting.[1])
There are other signs of fraud. For example:
- Reuters reports that FTX had a "backdoor" which "allowed Bankman-Fried to execute commands that could alter the company's financial records without alerting other people, including external auditors," according to their sources.
- On November 10, the official FTX account on Twitter announced that FTX was ordered to facilitate Bahamian withdrawals by Bahamian regulators. Days later, the Securities Commission of the Bahamas claimed that that was a lie. As Scott Alexander put it, "this might have been a ruse to let insiders withdraw first without provoking suspicion."
- FTX's legal and compliance team resigned very... (read more)
To be clear, this is an account that joined from Twitter to post this comment (link).
I have a similar-ish story. I became an EA (and a longtermist, though I think that word did not exist back then) as a high school junior, after debating a lot of people online about ethics and binge-reading works from Nick Bostrom, Eliezer Yudkowsky and Brian Tomasik. At the time, being an EA felt so philosophically right and exhilaratingly consistent with my ethical intuitions. Since then I have almost only had friends that considered themselves EAs.
For three years (2017, 2018 and 2019) my friends recommended I apply to EA Global. I didn’t apply in 2017 because I was underage and my parents didn’t let me go, and didn’t apply in the next two years because I didn’t feel psychologically ready for a lot of social interaction (I’m extremely introverted).
Then I excitedly applied for EAG SF 2020, and got promptly rejected. And that was extremely, extremely discouraging, and played an important role in the major depressive episode I was in for two and a half years after the rejection. (Other EA-related rejections also played a role.)
I started recovering from depression after I decided to distance myself from EA. I think that was the only correct choice for me. I still care a lot about making the future go well, but have resigned myself to the fact that the only thing I can realistically do to achieve that goal is donate to longtermist charities.
Thank you for writing this - a lot of what you say here resonates strongly with me, and captures well my experience of going from very involved in EA back in 2012-14 or so, to much more actively distancing myself from the community for the last few years. I've tried to write about my perspective on this multiple times (I have so many half written Google docs) but never felt quite able to get to the point where I had the energy/clarity to post something and actually engage with EA responses to it. I appreciate this post and expect to point people to it sometimes when trying to explain why I'm not that involved in or positive about EA anymore.
I think it's especially confusing when longtermists working on AI risk think there is a non-negligible chance total doom may befall us in 15 years or less, whereas so-called neartermists working on deworming or charter cities are seeking payoffs that only get realized on a 20-50 year time horizon.
That can't be right. I think what may have happened is that when you do a search, the results page initially shows you only 6 each of posts and comments, and you have to click on "next" to see the rest. If I keep clicking next until I get to the last pages of posts and comments, I can count 86 blog posts and 158 comments that mention "social justice", as of now.
BTW I find it interesting that you used the phrase "even question its value", since "even" is "used to emphasize something surprising or extreme". I would consider questioning the values of things to be pretty much the core of the EA philosophy...
I want to open by saying that there are many things about this post I appreciate, and accordingly I upvoted it despite disagreeing with many particulars. Things I appreciate include, but are not limited to:
- The detailed block-by-block approach to making the case for both cancel culture's prevalence and its potential harm to the movement.
- An attempt to offer a concrete alternative pathway to CEA and local groups that face similar decisions in future.
- Many attempts throughout the post to imagine the viewpoint of someone who might disagree, and preempt the most obvious responses.
But there's still a piece I think is missing. I don't fault Larks for this directly, since the post is already very long and covers a lot of ground, but it's the area that I always find myself wanting to hear more about in these discussions, and so would like to hear more about from either Larks or others in reply to this comment. It relates to both of these quotes.
... (read more)
Yes, it's a tradeoff, but Hanson is so close to one extreme of the spectrum that it starts to be implausible that anyone could be that bad at communicating carefully just by accident. I don't think he's even trying, and maybe he's deliberately trying to walk as close to the line as possible. What's the point in that? If I'm right, I wouldn't want to gratify that. I think it's lacking nuance to blanket-object to the "misstep" framing, especially since that's still a relatively weak negative judgment. We probably want to be able to commend some people on their careful communication of sensitive topics, so we also have to be willing to call it out if someone is doing an absolutely atrocious job of it.
For reference, I have listened to a bunch of politically controversial podcasts by Sam Harris, and even though I think there's a bit of room to communicate even better, there were no remarks I'd label as 'missteps.' By contrast, several of Hanson's tweets are borderline at best, and at least one now-deleted tweet I saw was utterly insane. I don't think it'... (read more)
Lots! Treat all of the following as ‘things Will casually said in conversation’ rather than ‘Will is dying on this hill’ (I'm worried about how messages travel and transmogrify, and I wouldn't be surprised if I changed lots of these views again in the near future!). But some things include:
- I think existential risk this century is much lower than I used to think — I used to put total risk this century at something like 20%; now I’d put it at less than 1%.
- I find ‘takeoff’ scenarios from AI over the next century much less likely than I used to. (Fast takeoff in particular, but even the idea of any sort of ‘takeoff’, understood in terms of moving to a higher growth mode, rather than progress in AI just continuing existing two-century-long trends in automation.) I’m not sure what numbers I’d have put on this previously, but I’d now put medium and fast takeoff (e.g. that in the next century we have a doubling of global GDP in a 6 month period because of progress in AI) at less than 10%.
- In general, I think it’s much less likely that we’re at a super-influential time in history; my next blog post will be about this idea
- I'm much more worried about a great power war in my lifeti... (read more)
[comment I'm likely to regret writing; still seems right]
It seems a lot of people are reacting by voting, but the karma of the post is 0. It seems to me up-votes and down-votes are really not expressive enough, so I want to add a more complex reaction.
So it seems really a pity the post was not framed as a question s... (read more)
[My views only]
Although few materials remain from the early days of Leverage (I am confident they acted to remove themselves from wayback, as other sites link to wayback versions of their old documents which now 404), there are some interesting remnants:
I think this material (and the surprising absence of material since) speaks for itself - although I might write more later anyway.
Per other comments, I'm also excited by the plan of greater transparency from Leverage. I'm particularly eager to find out whether they still work on Connection Theory (and what the current theory is), whether they addressed any of the criticism (e.g. 1, 2) levelled at CT years ago, whether the further evidence and argument mentioned as forthcoming in early documents and comment threads will materialise, and generally what research (on CT or anything else) have they done in the last several years, and when this will be made public.
There are also reasons why this might be the most animal-friendly US administration ever:
Basically, this is a great example of the importance of working in a bipartisan way. If the animal movement had been more focused on building political allies on both sides of the aisle, this could actually be one of the best opportunities to pass anti-factory-farming legislation.
A question I genuinely don’t know the answer to, for the anti-donation-match people: why wasn’t any of this criticism directed at Open Phil or EA funds when they did a large donation match?
I have mixed feelings on donation matching. But I feel strongly that it is not tenable to have a general community norm against something your most influential actors are doing without pushback, and checking the comments on the linked post I’m not seeing that pushback.
Relatedly, I didn’t like the assertion that the increased number of matches comes from the ‘fundraising’ people not the ‘community-building and epistemics’ people. I really don’t know who the latter refers to if not Open Phil / EAF.
https://forum.effectivealtruism.org/posts/zt6MsCCDStm74HFwo/ea-funds-organisational-update-open-philanthropy-matching
I've been contemplating writing a post about my side of the issue. I wasn't particularly close, but did get a chance to talk to some of the people involved.
Here's my rough take, at this point:
1. I don't think any EA group outside of FTX would take responsibility for having done a lot ($60k+ worth) of due diligence and investigation of FTX. My impression is that OP considered this not to be their job, and CEA was not at all in a position to do this (too biased, as it was getting funded by FTX). In general, I think that our community doesn't have strong measures in place to investigate funders. For example, I doubt that EA orgs have allocated $60k+ to investigate Dustin Moskovitz (and I imagine he might complain if others did!).
My overall impression was that this was just a large gap that the EA bureaucracy failed at. I similarly think that the "EA bureaucracy" is much weaker / less powerful than I think many imagine it being, and expect that there are several gaps like this. Note that OP/CEA/80k/etc are fairly limited organizations with specific agendas and areas of ownership.
2. I think there were some orange/red flags around, but that it would have taken some real investigation to figu... (read more)
Certainly the Guardian article had a lot of mistakes and issues, but I don't at all buy that there's nothing meaningfully different between someone like Hanania and most interesting thinkers, just because forcing consistency of philosophical views will inevitably lead to some upsetting conclusions somewhere. If I were to "corner someone in a dark alleyway" about population ethics until I caught them in a gotcha that implied they would prefer the world was destroyed, this would update me ~0 on the likelihood of this person actually going out and trying to destroy the world or causing harm to people. If I see someone consistently tweet and write in racist ways despite a lot of criticism and push-back, this shows me important things about what they value on reflection, and provides fairly strong evidence that this person will act in exclusionary and hateful ways. Trying to say that such racist comments are fine because of impossibility theorems showing everyone has to be committed to some weird views doesn't at all engage with the empirical track record of how people who write like Hanania tend to act.
This sounds plausible. Such evaluation involves time costs, but could yield valuable info about the reliability of the fund's grantmaking decisions, whether the "hits" sufficiently compensate for the "duds", and whether there are patterns among the duds that might helpfully inform future grantmaking.
I'd be a bit surprised if there wasn't already a process in place for retrospective analysis of this sort. Is there any public info available about if/how EA Funds do this?
I'm a bit wary of picking out weird-sounding proposals as "obviously" ex ante duds. Presumably a lot of the "digital content" grants were aimed at raising public awareness of key longtermist issu... (read more)
EA organizations frequently ask for people to run criticism by them ahead of time. I’ve been wary of the push for this norm. My big concerns were that orgs wouldn’t comment until a post was nearly done, and that it would take a lot of time. My recent post mentioned a lot of people and organizations, so it seemed like useful data.
I reached out to 12 email addresses, plus one person in FB DMs and one open call for information on a particular topic. This doesn’t quite match what you see in the post because some people/orgs were used more than once, and other mentions were cut. The post was in a fairly crude state when I sent it out.
Of those 14, 10 had replied by the start of the next day. More than half of those replied within a few hours. I expect this was faster than usual because no one had more than a few paragraphs relevant to them or their org, but it's still impressive.
It’s hard to say how sending an early draft changed things. One person got some extra anxiety because their paragraph was full of TODOs (because it was positive and I hadn’t worked as hard fleshing out the positive mentions ahead of time). I could maybe have saved myself one stressful interaction if I’d rea... (read more)
This is a cold take that’s probably been said before, but I thought it bears repeating occasionally, if only for the reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of "real people," alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual aid and political activism.
My go-to reaction to this critique has become something like “well you don’t need to prioritize vast abstract future generations to care about pandemics or nuclear war, those are very real things that could, with non-trivial probability, face us in our lifetimes.” I think this response has taken hold in general among people who talk about X-risk. This probably makes sense for pragmatic reasons. It’s a very good rebuttal to the “cold and heartless utilitarianism/pascal's mugging” critique.
But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the... (read more)
Broadly I think that both Torres and Gebru engage in bullying. They have big accounts and lots of time and will quote tweet anyone who disagrees with them, making the other person seem bad to their huge followings.
The great majority of my post focuses on process concerns. The primary sources introduced by Nonlinear are strong evidence of why those process concerns matter, but the process concerns stand independent. I agree that Nonlinear often paraphrased its subjects before responding to those paraphrases; that's why I explicitly pulled specific lines from the original post that the primary sources introduced by Nonlinear stand as evidence against.
My ultimate conclusion was and is explicitly not that Nonlinear is vindicated on every point of criticism. It is that the process was fundamentally unfair and fundamentally out of line with journalistic standards and a duty of care that are important to uphold. Not everyone who is put in a position of needing to reply to a slanted article about them is going to be capable of a perfectly rigorous, even-keeled, precise response that defuses every point of realistically defusable criticism, which is one reason people should not be put in the position of needing to respond to those articles.
Thanks for asking Yadav. I can confirm that:
Since then we have not invited or permitted Kat or Emerson to run any type of session.
(I was previously a fund manager on the LTFF)
Agree with a lot of what Asya said here, and very appreciative of her taking the time to write it up.
One complimentary point I want to emphasize: I think hiring a full-time chair is great, and that LTFF / EA Funds should in general be more willing to hire fund managers who have more time and less expertise. In my experience fund managers have very little time for the work (they’re both in part-time roles, and often extremely busy people), and little support (there’s relatively little training / infrastructure / guidance), but a fair amount of power. This has a few downsides:
- Insular funding: Fund managers lean heavily on personal networks and defer to other people’s impressions of applicants, which means known applicants are much more likely to be funded. This meant LTFF had an easy time funding EAs/rationalists, but was much less likely to catch promising, non-EA candidates who weren't already known to us. (This is already a common dynamic in EA orgs, but felt particularly severe because of our time constraints.)
- Less ambitious funding: Similarly, it's particularly time-intensive to evaluate new organizations with substantial... (read more)
Reading this post is very uncomfortable in an uncanny valley sort of way. A lot of what it says is true and needs to be said, but the overall feeling of the post is off.
I think most of the problem comes from blurring the line between how EA functions in practice for people who are close to the money and how it functions for the rest of us.
Like, sure, EA is a do-ocracy, and I can do whatever I want, and no one is stopping me. But also, every local community organiser I talk to talks about how CEA is controlling and that their funding comes with lots of strings attached. Which I guess is ok, since it's their money. No one is stopping anyone from getting their own funding, and doing their own thing.
Except for the fact that 80k (and other thought leaders? I'm not sure who works where) have told the community for years that funding is solved and no one else should worry about giving to EA, which has stifled all alternative funding in the community.
Great post!
As comments by Max and Vasco hint at, I think it might still be the case that considering effects on wild animals is essential when evaluating any short-termist intervention (including those for farmed animals and human welfare). For example, I remain uncertain whether vegetarianism increases or decreases total suffering because of wild-animal side effects, mainly because beef may (or may not!) reduce a lot of suffering even if all other meat types increase it. (I still hope people avoid eating chicken and other small farmed animals.)
In my opinion, the most important type of WAW research is getting more clarity on big questions, like what the net impact is of cattle grazing, climate change, and crop cultivation on total invertebrate populations. These are some of the biggest impacts that humanity has on wild animals, and the answers would inform analysis of the side effects of various other interventions like meat reduction or family planning.
I haven't followed a lot of the recent WAW work, but my experience is that many other people working on WAW are less focused on these questions about how humans change total population sizes. Researchers more often think about ways ... (read more)
I have to be honest that I’m disappointed in this message. I’m not so much disappointed that you wrote a message along these lines, but in the adoption of perfect PR speak when communicating with the community. I would prefer a much more authentic message that reads like it was written by an actual human (not the PR speak formula) even if that risks subjecting the EA movement to additional criticism and I suspect that this will also be more impactful long term. It is much more important to maintain trust with your community than to worry about what outsiders think, especially since many of our critics will be opposed to us no matter what we do.
Thanks for writing this post!
I feel a little bad linking to a comment I wrote, but the thread is relevant to this post, so I'm sharing in case it's useful for other readers, though there's definitely a decent amount of overlap here.
TL;DR
I personally default to being highly skeptical of any mental health intervention that claims to have a ~95% success rate + a PHQ-9 reduction of 12 points over 12 weeks, as this is a clear outlier among treatments for depression. The effectiveness figures from StrongMinds are also based on studies that are non-randomised and poorly controlled. There are other questionable methodological issues, e.g. surrounding adjusting for social desirability bias. The topline figure of $170 per head for cost-effectiveness is also possibly an underestimate, because while ~48% of clients were treated through SM partners in 2021, and Q2 results (pg 2) suggest StrongMinds is on track for ~79% of clients treated through partners in 2022, the expenses and operating costs of partners responsible for these clients were not included in the methodology.
(This mainly came from a cursory review of StrongMinds documents, and not from examining HLI analyses, though I do think "we’re... (read more)
This break-even analysis would be more appropriate if the £15m had been ~burned, rather than invested in an asset which can be sold.
If I buy a house for £100k cash and it saves me £10k/year in rent (net costs), then after 10 years I've broken even in the sense of [cash out]=[cash in], but I also now have an asset worth £100k (+10y price change), so I'm doing much better than 'even'.
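A minimal sketch of that arithmetic, using the hypothetical £100k example above rather than the actual £15m figures:

```python
# Hypothetical numbers from the example above (GBP), ignoring price changes.
purchase_price = 100_000    # cash out at year 0
annual_rent_saved = 10_000  # net rent avoided per year
years = 10

cash_recovered = annual_rent_saved * years             # 100_000: cash in equals cash out
net_cash_position = cash_recovered - purchase_price    # 0: the naive "break even"
asset_still_held = purchase_price                      # the house is still owned
true_position = net_cash_position + asset_still_held   # 100_000 better than "even"

print(net_cash_position, true_position)  # 0 100000
```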
[on phone] Thank you so much for all of your hard work managing the fund. I really appreciated it and I think that it did a lot of good. I doubt that you could ever have reasonably expected this outcome, so I don't hold you responsible for it.
Reading this announcement was surprisingly emotional for me. It made me realise how many exceptionally good people who I really admire are going to be deeply impacted by all of this. That's really sad in addition to all the other stuff to be sad about. I probably don't have much to offer other than my thoughts and sympathy but please let me know if I can help.
I suppose that I should disclose that I recently received a regrant from FTX which I will abstain from drawing on for the moment. I don't think that this has much, if any, relevance to my sentiments however.
"It is also similarly the case that EA's should not support policy groups without clear rationale, express aims and an understanding that sponsorship can come with the reasonable assumption from general public, journalists, or future or current members, that EA is endorsing particular political views."
"Other mission statements are politically motivated to a degree which is simply unacceptable for a group receiving major funds from an EA org."
- This seems to imply that EA funde... (read more)
It's very much not obvious to me that EAs should prefer progressive Democratic candidates in general, or Salinas in particular.
Speaking personally, I am generally not excited about Democratic progressives gaining more power in the party relative to centrists, and I'm pretty confident I'm not alone here in that[1].
I also think it's false to claim that Salinas's platform as linked gives much reason to think she will be a force for good on global poverty, animal welfare, or meaningful voting reform. (I'd obviously change my mind on this if there are other Salinas quotes that pertain more directly to these issues.)
There are also various parts of her platform that make me think there's a decent chance that her time in office will turn out to be bad for the world by my lights (not just relative to Carrick). I obviously don't expect everyone here to agree with me on that, and I'm certainly not confident about it, but I also don't want broad claims that progressives are better by EA values to stand uncontested, because I personally don't think that's true.
- ^ To be clear, I think this is very contestable within an EA framework, and am not trying to claim that my political pref... (read more)
One option here could be to lend books instead. Some advantages:
- Implies that when you're done reading the book you don't need it anymore, as opposed to a religious text which you keep and reference.
- While the distributors won't get all the books back (and that's fine), the books they do get back they can lend out again.
- Less lavish, both in appearance and in reality.
This is what we do at our meetups in Boston.
Agree that X-risk is a better initial framing than longtermism - it matches what the community is actually doing a lot better. For this reason, I'm totally on board with "x-risk" replacing "longtermism" in outreach and intro materials. However, I don't think the idea of longtermism is totally obsolete, for a few reasons:
- Longtermism produces a strategic focus on "the last person" that this "near-term x-risk" view doesn't. This isn't super relevant for AI, but it makes more sense in the context of biosecurity. Pandemics with the potential to wipe out everyone are way worse than pandemics which merely kill 99% of people, and the ways we prepare for them seem likely to differ. On the near-term view, bunkers and civilizational recovery plans don't make much sense.
- S-risks seem like they could very well be a big part of the overall strategy picture (even when not given normative priority and just considered as part of the total picture), and they aren't captured by the short-term x-risk view.
- The numbers you give for why x-risk might be the most important cause area even if we ignore the long-term future, $30 million for a 0.0001% reduction in x-risk, don't seem totally implausible. The world is b... (read more)
See also Neel Nanda's recent Simplify EA Pitches to "Holy Shit, X-Risk".
This seems overly quick to rule out a large class of potential responses. Assuming there are (or will be) more "vultures," it's not clear to me that the arguments against these "things not to do" are solid. I have these hesitations (among others) [edited for clarity and to add the last two]:
- "The rationale for giving out high risk grants stands and hasn’t changed."
- Sure, but the average level of risk has increased. So accepting the same level of risk means being more selective.
- "decreasing the riskiness of the grants just means we backslide into becoming like any other risk averse institution."
- Even if we put aside the previous point, riskiness can go down without becoming as low as that of typical risk-averse institutions.
- "Increasing purity tests. [...] As a community that values good epidemics, having a purity test on whether or not this person agrees with the EA consensus on [insert topic here] is a death blow to the current very good MO."
- There are other costly signals the community could use.
- "So not funding young people means this t... (read more)
I don't find the racism critique of longtermism compelling. Human extinction would be bad for lots of currently existing non-white people. Human extinction would also be bad for lots of possible future non-white people. If future people count equally, then not protecting them would be a great loss for future non-white people. So, working to reduce extinction risks is very good for non-white people.
I also have not had this experience, though that doesn't mean it didn't happen, and I'd want to take this seriously if it did happen.
However, Phil Torres has demonstrated that he isn't above bending the truth in service of his goals, so I'm inclined not to believe him. See previous discussion here. Example from the new article:
My understanding (sorry that the link is probably private) is that Torres is very aware that Häggström generally agrees with longtermism and provides the example as a way not to do longtermism, but that doesn't stop Torres from using it to argue that this is what longtermism implies and therefore all longtermists are horrible.
I should note that even if this were written by someone else, I probably wouldn't have investigated the supposed intimidation, silencing, or canc... (read more)
Many thanks for this, Rohin. Indeed, your understanding is correct. Here is my own screenshot of my private announcement on this matter.
This is far from the first time that Phil Torres references my work in a way that is set up to give the misleading impression that I share his anti-longtermism view. He and I had extensive communication about this in 2020, but he showed no sympathy for my complaints.
Thanks a lot for writing this up and sharing this. I have little context beyond following the story around CARE and reading this post, but based on the information I have, these seem like highly concerning allegations, and ones I would like to see more discussion around. And I think writing up plausible concerns like this clearly is a valuable public service.
Out of all these, I feel most concerned about the aspects that reflect on ACE as an organisation, rather than that which reflect the views of ACE employees. If ACE employees didn't feel comfortable going to CARE, I think it is correct for ACE to let them withdraw. But I feel concerned about ACE as an organisation making a public statement against the conference. And I feel incredibly concerned if ACE really did downgrade the rating of Anima International as a result.
That said, I feel like I have fairly limited information about all this, and have an existing bias towards your position. I'm sad that a draft of this wasn't run by ACE beforehand, and I'd be keen to hear their perspective. Though, given the content and your desire to remain anonymous, I can imagine it being unusually difficult to hear ACE's thoughts before pu... (read more)
I'm quite concerned about your cost-effectiveness analysis. It seems to have been done in a quite naive way that massively biases the conclusions.
When we do cost-benefit analysis, we need to consider both the costs and the benefits. Yet while your analysis and spreadsheet describe at length the costs of new people (financial, environmental etc.), it does not seem to analyse the benefits at all.
This would not be a big deal if these benefits were small. But they are actually very large!
Firstly, there are a lot of benefits to existing people from larger population sizes:
- Existing people get the benefit of building relationships with these new people. The experience of being a parent, or a grandparent, is one of the biggest sources of meaning in most people's lives, and this is true even for accidental pregnancies. And certainly when people grow old their grandchildren and great-grandchildren seem to provide a source of both joy and support long after they have ceased participating in much of society.
- Many things have increasing returns to scale, and so are more efficient with larger populations - e.g. mass transit, factory size, power plant size.
- Division of Labour - whereby people... (read more)
My reasons for being vegan have little to do with the direct negative effects of factory farming. They are, in roughly descending order of importance:
- A constant reminder to myself that non-human animals matter. My current day-to-day activities give nearly no reason to think about the fact that non-human animals have moral worth. This is my 2-5 times per day reminder of this fact.
- Reduction of cognitive dissonance. It took about a year of being vegan to begin to appreciate, viscerally, that animals had moral worth. It's hard to quantify this but it is tough to think that animals have moral worth when you eat them a few times a day. This has flow-through effects on donations, cause prioritization, etc.
- The effect it has on others. I'm not a pushy vegan at all. I hardly tell people but every now and then people notice and ask questions about it.
- Solidarity with non-EAA animal welfare people. For better or worse, outside of EA, this seems to be a ticket to entry to be considered taking the issue seriously. I want to be able to convince them to donate to THL over a pet shelter, to SWP over dog rescue charities, and to the EA AWF over Pets for Vets. They are more likely to listen to me... (read more)
AI Safety Needs To Get Serious About Chinese Political Culture
I worry that Leopold Aschenbrenner's "China will use AI to install a global dystopia" take is based on crudely analogising the CCP to the USSR, or perhaps even to American cultural imperialism / expansionism, and isn't based on an even superficially informed analysis of either how China is currently actually thinking about AI, or what China's long term political goals or values are.
I'm no more of an expert myself, but my impression is that China is much more interested in its own national security interests and its own ideological notions of the ethnic Chinese people and Chinese territory, so that beyond e.g. Taiwan there isn't an interest in global domination except to the extent that it prevents them being threatened by other expansionist powers.
This or a number of other heuristics / judgements / perspectives could change substantially how we think about whether China would race for AGI, and/or be receptive to an argument that AGI development is dangerous and should be suppressed. China clearly has a lot to gain from harnessing AGI, but they have a lot to lose too, just like the West.
Currently, this is a pretty superfi... (read more)
Marcus Daniell appreciation note
@Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!
I'm curious why there hasn't been more work exploring a pro-AI or pro-AI-acceleration position from an effective altruist perspective. Some points:
- Unlike existential risk from other sources (e.g. an asteroid) AI x-risk is unique because humans would be replaced by other beings, rather than completely dying out. This means you can't simply apply a naive argument that AI threatens total extinction of value to make the case that AI safety is astronomically important, in the sense that you can for other x-risks. You generally need additional assumptions.
- Total utilitarianism is generally seen as non-speciesist, and therefore has no intrinsic preference for human values over unaligned AI values. If AIs are conscious, there don't appear to be strong prima facie reasons for preferring humans to AIs under hedonistic utilitarianism. Under preference utilitarianism, it doesn't necessarily matter whether AIs are conscious.
- Total utilitarianism generally recommends large population sizes. Accelerating AI can be modeled as a kind of "population accelerationism". Extremely large AI populations could be preferable under utilitarianism compared to small human populations, even those with high per-ca... (read more)
I mostly want to +1 Jonas' comment and share my general sentiment here, which overall is that this whole situation makes me feel very sad. I feel sad for the distress and pain this has caused to everyone involved.
I’d also feel sad if people viewed Owen here as having anything like a stereotypical sexual predator personality.
My sense is that Owen cares extraordinarily about not hurting others.
It seems to me like this problematic behavior came from a very different source – basically problems with poor theory of mind and underestimating power dynamics. Owen can speak for himself on this; I’m just noting as someone who knows him that I hope people can read his reflections genuinely and with an open mind of trying to understand him.
That doesn't make Owen's actions ok – they're definitely not – but it does make me hopeful and optimistic that Owen has learnt from his mistakes and will be able to tread cautiously and not cause problems of this sort again.
Personally, I hope Owen can be involved in the community again soon.
[Edited to add: I’m not at all confident here and just sharing my perspective based on my (limited) experience. I don’t think people should give my opinion/judgment much weight. I haven’t engaged at all deeply in understanding this, and don’t plan to engage more]