This is a linkpost for https://www.bloomberg.com/news/features/2023-03-07/effective-altruism-s-problems-go-beyond-sam-bankman-fried#xj4y7vzkg
Try non-paywalled link here.
More damning allegations:
A few quotes:
At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.
Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.
Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.
Even leading EAs have doubts about the shift toward AI. Larissa Hesketh-Rowe, chief operating officer at Leverage Research and the former CEO of the Centre for Effective Altruism, says she was never clear how someone could tell their work was making AI safer. When high-status people in the community said AI risk was a vital research area, others deferred, she says. “No one thinks it explicitly, but you’ll be drawn to agree with the people who, if you agree with them, you’ll be in the cool kids group,” she says. “If you didn’t get it, you weren’t smart enough, or you weren’t good enough.” Hesketh-Rowe, who left her job in 2019, has since become disillusioned with EA and believes the community is engaged in a kind of herd mentality.
In extreme pockets of the rationality community, AI researchers believed their apocalypse-related stress was contributing to psychotic breaks. MIRI employee Jessica Taylor had a job that sometimes involved “imagining extreme AI torture scenarios,” as she described it in a post on LessWrong—the worst possible suffering AI might be able to inflict on people. At work, she says, she and a small team of researchers believed “we might make God, but we might mess up and destroy everything.” In 2017 she was hospitalized for three weeks with delusions that she was “intrinsically evil” and “had destroyed significant parts of the world with my demonic powers,” she wrote in her post. Although she acknowledged taking psychedelics for therapeutic reasons, she also attributed the delusions to her job’s blurring of nightmare scenarios and real life. “In an ordinary patient, having fantasies about being the devil is considered megalomania,” she wrote. “Here the idea naturally followed from my day-to-day social environment and was central to my psychotic breakdown.”
Taylor’s experience wasn’t an isolated incident. It encapsulates the cultural motifs of some rationalists, who often gathered around MIRI or CFAR employees, lived together, and obsessively pushed the edges of social norms, truth and even conscious thought. They referred to outsiders as normies and NPCs, or non-player characters, as in the tertiary townsfolk in a video game who have only a couple things to say and don’t feature in the plot. At house parties, they spent time “debugging” each other, engaging in a confrontational style of interrogation that would supposedly yield more rational thoughts. Sometimes, to probe further, they experimented with psychedelics and tried “jailbreaking” their minds, to crack open their consciousness and make them more influential, or “agentic.” Several people in Taylor’s sphere had similar psychotic episodes. One died by suicide in 2018 and another in 2021.
Within the group, there was an unspoken sense of being the chosen people smart enough to see the truth and save the world, of being “cosmically significant,” says Qiaochu Yuan, a former rationalist.
Yuan started hanging out with the rationalists in 2013 as a math Ph.D. candidate at the University of California at Berkeley. Once he started sincerely entertaining the idea that AI could wipe out humanity in 20 years, he dropped out of school, abandoned the idea of retirement planning, and drifted away from old friends who weren’t dedicating their every waking moment to averting global annihilation. “You can really manipulate people into doing all sorts of crazy stuff if you can convince them that this is how you can help prevent the end of the world,” he says. “Once you get into that frame, it really distorts your ability to care about anything else.”
That inability to care was most apparent when it came to the alleged mistreatment of women in the community, as opportunists used the prospect of impending doom to excuse vile acts of abuse. Within the subculture of rationalists, EAs and AI safety researchers, sexual harassment and abuse are distressingly common, according to interviews with eight women at all levels of the community. Many young, ambitious women described a similar trajectory: They were initially drawn in by the ideas, then became immersed in the social scene. Often that meant attending parties at EA or rationalist group houses or getting added to jargon-filled Facebook Messenger chat groups with hundreds of like-minded people.
The eight women say casual misogyny threaded through the scene. On the low end, Bryk, the rationalist-adjacent writer, says a prominent rationalist once told her condescendingly that she was a “5-year-old in a hot 20-year-old’s body.” Relationships with much older men were common, as was polyamory. Neither is inherently harmful, but several women say those norms became tools to help influential older men get more partners. Keerthana Gopalakrishnan, an AI researcher at Google Brain in her late 20s, attended EA meetups where she was hit on by partnered men who lectured her on how monogamy was outdated and nonmonogamy more evolved. “If you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men” who are sometimes in positions of influence or are directly funding the movement, she wrote on an EA forum about her experiences. Her post was strongly downvoted, and she eventually removed it.
The community’s guiding precepts could be used to justify this kind of behavior. Many within it argued that rationality led to superior conclusions about the world and rendered the moral codes of NPCs obsolete. Sonia Joseph, the woman who moved to the Bay Area to pursue a career in AI, was encouraged when she was 22 to have dinner with a 40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. Joseph says he also argued that it was normal for a 12-year-old girl to have sexual relationships with adult men and that such relationships were a noble way of transferring knowledge to a younger generation. Then, she says, he followed her home and insisted on staying over. She says he slept on the floor of her living room and that she felt unsafe until he left in the morning.
On the extreme end, five women, some of whom spoke on condition of anonymity because they fear retribution, say men in the community committed sexual assault or misconduct against them. In the aftermath, they say, they often had to deal with professional repercussions along with the emotional and social ones. The social scene overlapped heavily with the AI industry in the Bay Area, including founders, executives, investors and researchers. Women who reported sexual abuse, either to the police or community mediators, say they were branded as trouble and ostracized while the men were protected.
In 2018 two people accused Brent Dill, a rationalist who volunteered and worked for CFAR, of abusing them while they were in relationships with him. They were both 19, and he was about twice their age. Both partners said he used drugs and emotional manipulation to pressure them into extreme BDSM scenarios that went far beyond their comfort level. In response to the allegations, a CFAR committee circulated a summary of an investigation it conducted into earlier claims against Dill, which largely exculpated him. “He is aligned with CFAR’s goals and strategy and should be seen as an ally,” the committee wrote, calling him “an important community hub and driver” who “embodies a rare kind of agency and a sense of heroic responsibility.” (After an outcry, CFAR apologized for its “terribly inadequate” response, disbanded the committee and banned Dill from its events. Dill didn’t respond to requests for comment.)
Rochelle Shen, a startup founder who used to run a rationalist-adjacent group house, heard the same justification from a woman in the community who mediated a sexual misconduct allegation. The mediator repeatedly told Shen to keep the possible repercussions for the man in mind. “You don’t want to ruin his career,” Shen recalls her saying. “You want to think about the consequences for the community.”
One woman in the community, who asked not to be identified for fear of reprisals, says she was sexually abused by a prominent AI researcher. After she confronted him, she says, she had job offers rescinded and conference speaking gigs canceled and was disinvited from AI events. She says others in the community told her allegations of misconduct harmed the advancement of AI safety, and one person suggested an agentic option would be to kill herself.
For some of the women who allege abuse within the community, the most devastating part is the disillusionment. Angela Pang, a 28-year-old who got to know rationalists through posts on Quora, remembers the joy she felt when she discovered a community that thought about the world the same way she did. She’d been experimenting with a vegan diet to reduce animal suffering, and she quickly connected with effective altruism’s ideas about optimization. She says she was assaulted by someone in the community who at first acknowledged having done wrong but later denied it. That backpedaling left her feeling doubly violated. “Everyone believed me, but them believing it wasn’t enough,” she says. “You need people who care a lot about abuse.” Pang grew up in a violent household; she says she once witnessed an incident of domestic violence involving her family in the grocery store. Onlookers stared but continued their shopping. This, she says, felt much the same.
The paper clip maximizer, as it’s called, is a potent meme about the pitfalls of maniacal fixation.
Every AI safety researcher knows about the paper clip maximizer. Few seem to grasp the ways this subculture is mimicking that tunnel vision. As AI becomes more powerful, the stakes will only feel higher to those obsessed with their self-assigned quest to keep it under rein. The collateral damage that’s already occurred won’t matter. They’ll be thinking only of their own kind of paper clip: saving the world.
Do you have details of his college expulsion and the accusations? I honestly couldn't find them. After going through the whole discussion of his apology, I could only find his own letter from 10 years prior saying the expulsion was wrongful, plus links someone posted to other cases of Brown doing a poor job on sexual misconduct: IIRC, courts deemed that the Brown committee mishandled cases of students accused of sexual misconduct. In at least one case (not necessarily Jacy's, but I've seen this happen elsewhere myself, so I'd bet more likely than not that if it was allowed to happen once it happened in Jacy's case too), students banded together and wrote letters of unsubstantiated rumors to the Brown committee (e.g., assuming what they'd heard in the gossip mill was true and making sure the committee "knew" it, perhaps stating it as fact without relaying how they had heard it), and the Brown committee actually used those letters as evidence in the university tribunal. The US court said that Brown, in doing this, violated due process. To reiterate, that was another Brown case, not Jacy's, but I'd like to hear what actually happened in Jacy's case if we are going to count an offense from 10 years ago (which I now think CEA also mostly did not).
I'm really not trying to defend Jacy here. After reading more (someone even DMed me to have a conversation), I now expect he did worse than what's mentioned in his apology, but also that no victim will go public, so those of us on the outside will never know for sure. Still, I want to explain why I didn't dwell on the college expulsion, and why I won't jump to the conclusion that whatever he did necessarily deserved expulsion: it looks like Brown at that time may have been both incredibly bad at handling such cases and incredibly rife with rumors.
Plus it was still 10 years ago, and as I said elsewhere, he has been punished (possibly over-punished) for it. I know that punishment might not assuage concerns about safety. (I've been repeatedly surprised that questions of rehabilitation and self-improvement have been so absent from the discussion of him: no one seems to care that he also sent apologies directly to the women, and no one has asked whether there is a way he could make it up to the community through self-improvement efforts, although I don't think he has focused on this.) To me, safety is the important thing, and I'm still unsure what level of safety to assign Jacy in my mind today, even as I become more sure he did some troubling things that were left out of his apology.
In pushing back on bringing up the college thing, I see myself not as defending Jacy but as pushing back on an instinct to simply trust other people's decisions. That instinct can lead us into unwarranted disgust reactions and typecasting, which gets in the way of figuring out what actually matters about his presence: how safe he is to have around today, 10 years after the expulsion.
I know that some people don't find his work the most worth doing or funding, but some people do, and if it is worth doing, his actual safety would be worth figuring out and making transparent.
(That said, as I conclude here, I'm now more interested in what is going on upstream: why this is so hard to figure out.)
[Additional reflection: I wish potential granters or collaborators of Jacy would speak to the women (maybe CEA could put them in contact?) and see what they think. I don't think their perspectives should be the be-all and end-all, but I find myself really wishing I could defer to their current thoughts about concrete actions like grantmaking, given the passage of time. There are cases in my own life, regarding men I've had complaints about, where I would continue to have concerns about safety and would want others to treat the man as still a risk (forever, or for a very long time). But there are other cases, from my own experience as a victim, where, depending on the person's evident growth, I might say, "I think it's been long enough and it's probably okay now."
If I were a potential collaborator with Jacy, I'd personally be very reluctant to assume that victims and people in the know feel the former or the latter, which in my case would mean digging deeper with the EA Community Health Team. I'd also feel frustrated and concerned if I couldn't find out more, and I'd probably not grant while feeling that some informational injustice was occurring. I hope CEA's processes allow for thorough understanding by well-meaning parties who need the information, and even for respectful requests to be put in contact with the victims. If SFF did not go looking for opportunities to thoroughly check things, I do find that troubling/risky/bad of SFF.
But if systems are not in place for that, I'm not sure we can expect potential collaborators such as SFF to just trust non-transparent decisions for the rest of time. Exactly how long will depend on the case, but after some amount of time without further complaints, we should expect the scales of actors who would otherwise collaborate with the past-accused to tip against trusting the old non-transparent decision. At some point they will put much higher probability on its not being relevant to the decisions they face today. Simultaneously, there will be a period in which people who weigh the old decision differently get upset at those whose scales tipped toward disregarding it sooner than their own scales led them to. That means division and some predictable social unrest, until enough time has passed that basically everyone is ready to make peace with, or disregard, the non-transparent case (which may take 50 years, I don't know). This is a bug of the world that will occur even within communities of good people, because communities of good people still put different credences on things. It is not fully mitigated by people trying to be "better", so it has to be fixed at the system level.
Since I started this topic of checking in about Jacy, I've become more sure that Jacy did some serious things, but also less sure that we can judge actors like SFF for attempting to collaborate anyway in cases of non-transparency. Jeff K just wrote a good, short piece about this a couple of days ago. I see four possible cases here:
I'm pretty sure several of these possibilities can be ruled out by people in the know, or even by random people who do a little digging, but I'm burned out on it for now.