Beautiful and inspiring. Thanks for sharing this.
I hope more EAs think about turning abstract longtermist ideas into more emotionally compelling media!
mikbp: good question.
Finding meaningful roles for ordinary folks ('mediocrities') is a big challenge for almost every human organization, movement, and subculture. It's not unique to EA -- although EA does tend to be quite elitist (which is reasonable, given that many of its core insights and values require a very high level of intelligence and openness to understand).
The usual strategy for finding roles for ordinary people in organizations is to create hierarchical structures in which the ordinary people are bossed around/influenced/deployed b...
Counterpoints:
My more general worry is that this kind of narrative th...
A brief meta-comment on critics of EAs, and how to react to them:
We're so used to interacting with each other in good faith, rationally and empirically, constructively and sympathetically, according to high ethical and epistemic standards, that we EAs have real trouble remembering a crucial fact of life:
This seems to me to be a self-serving, Manichean, and psychologically implausible account of why people write criticisms of EA.
I think there's a huge difference in potential reach between a major TV series and a LessWrong post.
According to this summary from the Financial Times, as of March 27, '3 Body Problem' had received about 82 million view-hours, equivalent to about 10 million people worldwide watching the whole 8-part series. It was a top 10 Netflix series in over 90 countries.
Whereas a good LessWrong post might get 100 likes.
We should be more scope-sensitive about public impact!
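The view-hours figure above can be sanity-checked with some back-of-the-envelope arithmetic. A minimal sketch, assuming the 8-part series runs roughly one hour per episode (an assumption, since episode lengths vary):

```python
# Rough sanity check: convert reported total view-hours into
# an equivalent number of full-series viewers.
view_hours = 82_000_000   # total view-hours reported as of March 27
episodes = 8
hours_per_episode = 1     # assumption: ~1-hour episodes

series_hours = episodes * hours_per_episode
full_series_viewers = view_hours / series_hours
print(f"{full_series_viewers:,.0f}")  # → 10,250,000 (≈ 10 million)
```

Even if the per-episode runtime assumption is off by 20-30%, the audience estimate stays in the same order of magnitude -- several orders above a typical forum post.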
PS: Fun fact: after my coauthor Peter Todd (Indiana U.) and I read the '3 Body Problem' novel in 2015, we were invited to a conference on 'active Messaging to Extraterrestrial Intelligence' ('active METI') at the Arecibo radio telescope in Puerto Rico. Inspired by Liu Cixin's book, we gave a talk about the extreme risks of active METI, which we then wrote up as this journal paper, published in 2017:
PDF here
Journal link here
Title: The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling?
Ab...
'3 Body Problem' is a new 8-episode Netflix TV series that's extremely popular, highly rated (7.8/10 on IMDb), and based on the bestselling 2008 science fiction book by Chinese author Liu Cixin.
It raises a lot of EA themes, e.g. extinction risk (for both humans & the San-Ti aliens), longtermism (planning 400 years ahead against alien invasion), utilitarianism (e.g. sacrificing a few innocents to save many), cross-species empathy (e.g. between humans & aliens), global governance to coordinate against threats (e.g. Thomas Wade, the UN, the Wallfacers), etc.
Curious what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students?
Well Leif Wenar seems to have written a hatchet job that's deliberately misleading about EA values, priorities, and culture.
The usual anti-EA ideologues are celebrating Wired magazine taking such a negative view of EA.
For example, leader of the 'effective accelerationist' movement 'Beff Jezos' (aka Guillaume Verdon) wrote this post on X, linking to the Wenar piece, saying simply 'It's over. We won'. Which is presumably a reference to EA people working on AI safety being a bunch of Luddite 'decels' who want to stop the glorious progress towards ...
David - this is a helpful and reasonable comment.
I suspect that many EAs tactically and temporarily suppressed their use of EA language after the FTX debacle, when they knew that EA had suffered a (hopefully transient) setback.
This may actually be quite analogous to the cyclical patterns of outreach and enthusiasm that we see in crypto investing itself. The post-FTX 2022-2023 bear market in crypto was reflected in a lot of 'crypto influencers' just not talking very much about crypto for a year or two, when investor sentiment was very low. Then, as the pric...
Nicholas - thanks for posting this helpful summary of these empirical studies.
I do find it somewhat sad and alarming that so many EAs seem to be delaying or avoiding having kids, out of fear that this will 'impair productivity'.
Productivity-maxxing can be a false god - and this is something that's hard to understand until one becomes a parent.
Just as money sent to charities can vary 100x in terms of actual effectiveness, 'productivity' can vary hugely in terms of actual impact in the world.
Lots of academic parents I know (including me) realized...
Jason - fair point.
Except that all psychological traits are heritable, so offspring of smart, conscientious, virtuous EAs are likely to be somewhat smarter, more conscientious, and more virtuous than average offspring.
I think it's important for EA to avoid partisan political fights like this - they're not neglected cause areas, and they're often not tractable.
It's easy for the Left to portray the 'far right' as a 'threat to democracy' in the form of 'fascist authoritarians'.
It's also easy for the Right to portray the 'far left' as a 'threat to democracy' in the form of 'socialist authoritarians'.
The issue of immigration (e.g. as considered by AfD) is especially tricky and controversial, in terms of whether increased immigration into Western democracies of people w...
I think it's important that EA analysis not start with its bottom line already written. In some situations, the most effective altruistic interventions (with a given set of resources) will have partisan political valence, and we need to remain open to those possibilities: they're usually not particularly neglected or tractable, but occasional high-leverage opportunities can arise. I'm very skeptical of Effektiv-Spenden's new fund because it arbitrarily limits its possible conclusions to such a narrow space, but limiting one's conclusions to exclude that space would be the same sort of mistake.
Agreed. Being able to identify effective interventions that support or protect democracy in certain contexts doesn't necessarily seem like a bad idea.
The challenge with the AfD is that they seem to be the victims of behaviour that could be considered antidemocratic: lawmakers are considering banning the party, and the state has put the party under surveillance. This would be unconstitutional in many countries. I think there could be legitimate arguments that "protecting democracy" could sometimes involve defending groups like the AfD, as well as defe...
Kyle - I just completed the survey yesterday. I did find it very long and grueling. I worry that you might get lower-quality data in the last half of the survey, due to participant fatigue and frustration.
My suggestion -- speaking as a psych professor who's run many surveys over the last three decades -- is to develop a shorter survey (no more than 25 minutes) that focuses on your key empirical questions, and try to get a good large sample for that.
I just reposted your X/Twitter recruitment message, FWIW:
https://twitter.com/law_fiore/status/1706806416931987758
Good luck! I might suggest doing a shorter follow-up survey in due course -- 90 minutes is a big time commitment for $15 payment!
Johanna - thanks very much for sharing this fascinating, important, and useful research! Hope lots of EAs pay attention to it.
Hayven - there's a huge, huge middle ground between reckless e/acc ASI accelerationism on the one hand, and stagnation on the other hand.
I can imagine a moratorium on further AGI research that still allows awesome progress on all kinds of wonderful technologies such as longevity, (local) space colonization, geoengineering, etc -- none of which require AGI.
Isaac -- good, persuasive post.
I agree that p(doom) is rhetorically ineffective -- to normal people, it just looks weird, off-putting, pretentious, and depressing. Most folks out there have never taken a probability and statistics course, and don't know what p(X) means in general, much less p(doom).
I also agree that p(doom) is way too ambiguous, in all the ways you mentioned, plus another crucial way: it isn't conditioned on anything we actually do about AI risk. Our p(doom) given an effective global AI regulation regime might be a lot lower th...
Caleb - thanks for this helpful introduction to Zach's talents, qualifications, and background -- very useful for those of us who don't know him!
I agree that EA organizations should try very hard to avoid entanglements with AI companies such as Anthropic - however well-intentioned they seem. We need to be able to raise genuine concerns about AI risks without feeling beholden to AI corporate interests.
Malo - bravo on this pivot in MIRI's strategy and priorities. Honestly it's what I've hoped MIRI would do for a while. It seems rational, timely, humble, and very useful! I'm excited about this.
I agree that we're very unlikely to solve 'technical alignment' challenges fast enough to keep AI safe, given the breakneck rate of progress in AI capabilities. If we can't speed up alignment work, we have to slow down capabilities work.
I guess the big organizational challenge for MIRI will be whether its current staff, who may have been recruited largely for ...
Will - we seem to be many decades away from being able to do 'mind uploading' or serious levels of cognitive enhancement, but we're probably only a few years away from extremely dangerous AI.
I don't think that betting on mind uploading or cognitive enhancement is a winning strategy, compared to pausing, heavily regulating, and morally stigmatizing AI development.
(Yes, given a few generations of iterated embryo selection for cognitive ability, we could probably breed much smarter people within a century or two. But they'd still run a million times slo...
Remmelt - I agree. I think EA funders have been way too naive in thinking that, if they just support the right sort of AI development, with due concern for 'alignment' issues, they could steer the AI industry away from catastrophe.
In hindsight, this seems to have been a huge strategic blunder -- and the big mistake was under-estimating the corporate incentives and individual hubris that drives unsafe AI development despite any good intentions of funders and founders.
Chris - this is all quite reasonable.
However, one could dispute 'Premise 2: AGI has a reasonable chance of arriving in the next 30 or 40 years.'
Yes, without any organized resistance to the AI industry, the AI industry will develop AGI (if AGI is possible) -- probably fairly quickly.
But, if enough people accept Premise 5 (likely catastrophe) and Premise 6 (we can make a difference), then we can prevent AGI from arriving.
In other words, the best way to make 'AI go well' may be to prevent AGI (or ASI) from happening at all.
Thanks for this provocative and timely post.
I agree that EAs have been far too friendly to AI companies, too eager to get hired within these companies as internal AI safety experts, too willing to give money to support their in-house safety work, and too wary about upsetting AI leaders and developers.
This has diluted our warnings about extinction risks from AI. I've noticed that on social media like X, ordinary folks get very confused about EA attitudes towards AI. If we really think AI is extraordinarily dangerous, why would we be working with...
Alix - Thanks for writing this. I think it is a serious issue in terms of spreading EA from being a mostly Anglosphere movement (in the UK, US, Australia, etc.) to becoming a global movement.
There seem to be about 400 million native English speakers in the world, plus around another 1.5 billion people who have English as their second language (e.g. many in India and China), with varying degrees of fluency. From my experience of teaching college classes in China, people there often have much higher English fluency in writing and reading than in speaking.
So, roug...
I think the concern about jargon is misplaced in this context. Jargon is learned by native and non-native speakers alike as they engage with the community: it's specifically the stuff that already knowing the language doesn't help you with, which means not knowing the language doesn't disadvantage you. That's not to say jargon doesn't have its own problems, but I think that someone who attempts to reduce jargon specifically as a way to reach non-native speakers better has probably misdirected their focus.
Jeff - thanks very much for sharing the link to that post. I encourage others to read it - it's fairly short. It nicely sets out some of the difficulties around anonymity, doxxing, accusations, counter-accusations, etc.
I can't offer any brilliant solutions to these issues, but I am glad to see that the risks of false or exaggerated allegations are getting some serious attention.
Will - thanks very much for sharing your views, and some of the discussion amongst the EA Forum moderators.
These are tricky issues, and I'm glad to see that they're getting some serious attention, in terms of the relative costs, benefits, and risks of different possible policies.
I'm also concerned about 'setting a precedent of first-mover advantage'. A blanket policy of first-mover (or first-accuser) anonymity would incentivize EAs to make lots of allegations before the people they're accusing could make counter-allegations. That seems likely to create massive problems, conflicts, toxicity, and schisms within EA.
Victor - thanks for elaborating on your views, and developing this sort of 'career longtermist' thought experiment. I did it, and did take it seriously.
However.
I've known many, many academics, researchers, writers, etc. who have been 'cancelled' by online mobs that made mountains out of molehills. In many cases, the reputations, careers, and prospects of the cancelled people are ruined. Which is, of course, the whole point of cancelling them -- to silence them, to ostracize them, and to keep them from having any public influence.
In some cases, the canc...
Jeff -- actual 'whistleblowers' make true and important allegations that withstand scrutiny and fact-checking. I agree that legit whistleblowers need the protection of anonymity.
But not all disgruntled ex-employees with a beef against their former bosses are whistleblowers in this sense. Many are pursuing their own retaliation strategies, often turning trivial or imagined slights into huge subjective moral outrages -- and often getting credulous friends, family, journalists, or activists to support their cause and amplify their narrative.
It's true th...
So, arguably, we have a case here of two disgruntled ex-employees retaliating against a former employer. Why should their retaliation be protected by anonymity?
Highlighting that is an important crux (and one on which I have mixed feelings). Not all allegations of incorrect conduct rise to the level of "whistleblowing." A whistleblower brings alleged misconduct on a matter of public importance to light. We grant lots of protections in furtherance of that public interest, not out of regard for the whistleblower's private interests.
Is this a garden-variety di...
Victor - this is total victim-blaming. Good people trying to hire good workers for their organizations can be exploited and ruined by bad employees, just as much as good employees can be exploited and ruined by bad employers.
You said 'If making a bad hire doesn't get in the way of success and doing good, does it even make sense to fixate on it?'
Well, we've just seen an example of two very bad hires ('Alice' and 'Chloe') almost ruin an organization permanently. They very much got in the way of success and doing good. I would not wish their personaliti...
Timon - the whole point of EA was to get away from the kind of vacuous, feel-good empathy-signaling that animated most charitable giving before EA.
EA focuses on causes that have large scope, but that are tractable and neglected. These three criteria are the exact opposite of what one would focus on if one simply wanted to signal being 'warm' and 'empathic' -- which works best when focusing on specific identifiable lives (small scope), facing problems that are commonly talked about (not neglected), and that are intractable (so the charity can keep run...
I agree with all this, and I also think the OP might be speaking to some experiences in EA you might not have had which could result in you talking past each other.
Well all three key figures at Nonlinear are also real people, and they got deanonymized by Ben Pace's highly critical post, which had the likely effect (unless challenged) of stopping Nonlinear from doing its work, and of stigmatizing its leaders.
So, I don't understand the double standard, where those subject to false allegations don't enjoy anonymity, and those making the false allegations do get to enjoy anonymity.
I don't think all people in the replies were arguing that Ben's initial post was okay and deanonymizing Alice and/or Chloe would be bad (which I think you would call a double standard, which I'm not commenting on right now). Some probably do, but some probably think that Ben's initial post was bad, that deanonymizing Alice and/or Chloe would also be bad, and that we shouldn't try to correct one wrong with another, which doesn't look like a double standard to me.
Lorenzo - yes, I'm complying with that request.
I'm just puzzled about the apparent double standard where the first people to make allegations enjoy privacy & anonymity (even if their allegations seem to be largely false or exaggerated), but the people they're accusing don't enjoy the same privilege.
Writing in a personal capacity.
Hi Geoffrey, I think you raise a very reasonable point.
There’s some unfortunate timing at play here: 3/7 of the active mod team—Lizka, Toby, and JP—have been away at a CEA retreat for the past ~week, and have thus mostly been offline. In my view, we would have ideally issued a proper update by now on the earlier notice: “For the time being, please do not post personal information that would deanonymize Alice or Chloe.”
In lieu of that, I’ll instead publish one of my comments from the moderators’ Slack thread, along w...
I agree that the Forum's rules and norms on privacy protection are confused. A few observations:
(1) Suppose a universe in which the first post on this topic had been from Nonlinear, and had accused Alice and Chloe (by their real names) of a pattern of mendaciously spreading lies about Nonlinear. Would that post have been allowed to stay up? If yes, it is hard to come up with a principled reason why Alice and Chloe can't be named now.
If no, we would need to think about why this hypothetical post would have been disallowed. The best argument I cam...
PS For the people downvoting and disagree-voting on my comment here:
I raised some awkward questions, without offering any answers, conclusions, or recommendations.
Are you disagreeing that it's even legitimate to raise any issues about the ethics of 'whistleblower' anonymity in cases of potential false allegations?
I'd really like to understand what you're disagreeing about here.
I raised some awkward questions, without offering any answers, conclusions, or recommendations.
I don't feel like you raised this discussion with no preference for what the community decides. When I gave my answer, which many people seem to agree with, your response was to question whether that's REALLY what the EA community wants. I think it's a bit disingenuous to suggest that you're just asking a question when you clearly have a preference for how people answer!
I think the questions you're raising are important. I got kind of triggered by the issue I pointed out (and the fact that it's something that has already been discussed in the comments of the other post), so I downvoted the comment overall. (Also, just because Chloe is currently anonymous doesn't mean it's risk-free to imply misleading and damaging things about her – anonymity can be fragile.)
There were many parts of your comment that I agree with. I agree that we probably shouldn't have a norm that guarantees anonymity unconditionally. (But the anonymity ...
Ivy - I really appreciate your long, thoughtful comment here. It's exactly the sort of discussion I was hoping to spark.
I resonate to many of your conflicted feelings about these ethically complicated situations, given the many 'stakeholders' involved, and the many ways we can get our policies wrong in all kinds of ways.
Lukas - I guess one disadvantage of pseudonyms like 'Alice' and 'Chloe' is that it's quite difficult for outsiders who don't know their real identities to distinguish between them very clearly -- especially if their stories get very intertwined.
If we can't attach real faces and names to the allegations, and we can't connect their pseudonyms to any other real-world information about them, such as LinkedIn profiles, web pages, EA Forum posts, etc., then it's much harder to remember who's who, and to assess their relative degrees of reliability or culpabili...
Even if the whistleblowers seem to be making serial false allegations against former employers?
Does EA really want to be a community where people can make false allegations with total impunity and no accountability?
Doesn't that incentivize false allegations?
Has there been a suggestion that Chloe has made serial false allegations against former employers? I thought that was only Alice.
TracingWoodgrains - thanks for an excellent post. I think it should lead many EAs to develop a new and more balanced perspective on this controversy.
And thanks for mentioning my EA Forum comments about Ben Pace doing amateur investigative reporting -- reporting that doesn't seem, arguably, to have lived up to the standards of basic journalistic integrity (regardless of how much time he and the Lightcone team may have put into it).
This leaves us with a very awkward question about the ongoing anonymity of 'Alice' and 'Chloe', and I don't know what the ...
A quick reminder that moderators have asked, at least for the time being, to please not post personal information that would deanonymize Alice or Chloe.
So, what do you all think?
I continue to think that something went wrong for people to come away with takes that lump together Alice and Chloe in these ways.
Not because I'm convinced that Alice is as bad as Nonlinear makes it sound, but because, even based on Nonlinear's portrayal, Chloe is portrayed as having had a poor reaction to the specific employment situation, and (unlike Alice) not as having a general pattern/history of making false/misleading claims. That difference matters immensely regarding whether it's appropriate to warn future potential...
Short answer: I think Ben should defer to the community health team as to whether to reveal identities to them or not (I'm guessing they know). And probably the community health team should take their names and add it to their list where orgs can ask CH about any potential hires and learn of red flags in their past. I think Alice should def be included on that list, and Chloe should maybe be included (that's the part I'd let the CH team decide, if it was bad enough). It's possible Alice should be revealed publicly, or maybe just revealed to community org...
Whistleblower anonymity should remain protected in the vast majority of situations, including this one, imo
Jason - yes, fair points.
Hopefully any donations from individual donors who have benefitted from actually taking profits (into fiat currency) from volatile assets (such as crypto) would be less subject to corporate collapses, scandals, and clawbacks than donations from crypto companies such as FTX. But it's well worth thinking about these kinds of financial and legal risks.
People who are disagree-voting with me on this:
Please explain how a 120-fold difference in population sizes between groups wouldn't yield any bias in the global influence those groups would tend to have at the United Nations?
I didn't vote on your post, but I could imagine disagree voting to indicate disagreement with the implication that Muslims are fundamentally 'enemies of Israel'.
Jason - this is a reasonable concern. The 4-year crypto asset cycle could indeed lead to cycles of windfalls and dry spells for donations. But I guess the burden would be on EA organizations to smooth this out by saving up some of the windfall money to cover the dry spells -- rather than the donors trying to avoid high-volatility assets that show such cycles?
David - I mention the gender bias in moral typecasting in this context because (1) moral typecasting seems especially relevant in these kinds of organizational disputes, (2) I've noticed some moral typecasting in this specific discussion on EA Forum, and (3) many EAs are already familiar with the classical cognitive biases, many of which have been studied since the early 1970s, but may not be familiar with this newly researched bias.
To be honest, I'm facing a difficult trade-off between whether I should donate more money now to the organizations I traditionally support (e.g. Vegan Outreach), versus investing in the crypto market before the next expected bull run in 2024-2025, after the bitcoin ETF approvals, the bitcoin halving, the next hype cycle, etc -- in hopes that an investment now could yield 10x more money to give later.
I'm curious if any other EAs are thinking about this tradeoff. I know a lot of us got stung, both financially and emotionally, by the FTX disaster. But IMHO, crypto is here to stay -- at least as a hyper-volatile risk asset that can be used to leverage wealth, by those with the knowledge and risk-tolerance to buy low and sell high.
Copying from the Facebook Group
Fellow crypto investor here (since 2015/16); I now run a crypto fund. A few points to make.
1. In the short term, way more money is made by getting in on trends before they are big than on fundamentals, at least historically. You wanted to get in on NFTs early. You wanted to get in on "AI tokens" early. You wanted to get in on yield farming early. You wanted to get in on ICOs early. You wanted to get in on dog tokens early. You wanted to get in on anything trending early. Etc. It didn't matter what project fundamenta...
Zachary - thanks very much for this update. I imagine a lot of us were pretty worried about how the FTX debacle would affect the core EA organizations and their budgets.
Is it fair to say that the organizations under the EV umbrella are still doing OK in terms of their budgets and funding situation, at least in the short term (e.g. being able to pay salaries, rents, and funding grants they're committed to)? Or is there any urgent need for fund-raising that EAs could help with?
Back in the 1990s, some of us were working on using genetic algorithms (simulated evolutionary methods) to evolve neural network architectures. This was during one of the AI winters, between the late 1980s flurry of neural network research based on back-propagation, and the early 2000s rise of deep learning in much larger networks.
Some examples of this work are here (designing neural networks with genetic algorithms, 1989), here (genetic algorithms for autonomous robot control systems, 1994), here (artificial evolution as a path towards AI, 1997), an...
There's a human cognitive bias that may be relevant to this whole discussion, but that may not be widely appreciated in EA yet: gender bias in 'moral typecasting'.
In a 2020 paper, my UNM colleague Tania Reynolds and coauthors found a systematic bias for women to be more easily categorized as victims, and men as perpetrators, in situations where harm seems to have been done. They ran six studies in four countries (total N = 3,317).
(Ever since a seminal paper by Gray & Wegner (2009), there's been a fast-growing literature on moral typecasting. Beyond t...
Where is the evidence people are seeing this as primarily E vs A&C rather than K vs A&C? The post is written by Kat, and the comments on this and other recent posts are from Kat…
I don't think it's productive to name just one or two of the very many biases one could bring up. I would need some reason to think this bias is more worth mentioning than other biases (such as Ben's payment to Alice and Chloe, or commenters' friendships, etc.).
My key point about investigative journalist expertise is that amateurs can invest a huge amount of time, money, and effort into investigations that are not actually very effective, fair, constructive, or epistemically sound.
As EAs know, charities vary hugely in their cost effectiveness. Investigative activities can also vary hugely in their time-effectiveness.
Yarrow - I'm curious which bits of what I wrote you found 'psychologically implausible'?