Holly --
I think the frustrating thing here, for you and me, is that, compared to its AI safety fiascos, EA did so much soul-searching after the Sam Bankman-Fried/FTX fraud in 2022. We took the SBF/FTX debacle seriously as a failure of EA people, principles, judgment, mentorship, etc. We acknowledged that it hurt EA's public reputation, and we tried to identify ways to avoid making the same catastrophic mistakes again.
But as far as I've seen, EA has done very little soul-searching about its complicity in helping to launch OpenAI, and the...
Holly --
Thanks for this assertive, candid, blunt, challenging post.
You and I have, I think, reached similar views on some of the critical weaknesses of EA as it's currently led, run, funded, and defended.
All too often, 'EA discourse norms' have been overly influenced by LessWrong discourse norms, where an ivory-tower fetishization of 'rational discourse', 'finding cruxes', 'updating priors', 'avoiding ad hominems', 'steel-manning arguments', etc. becomes a substitute for effective social or political action in the world as it is, given human nat...
Matt - thanks for the quick and helpful reply.
I think the main benefit of explicitly modeling ASI as being a 'new player' in the geopolitical game is that it highlights precisely the idea that the ASI will NOT just automatically be a tool used by China or the US -- but rather that it will have its own distinctive payoffs, interests, strategies, and agendas. That's the key issue that many current political leaders (e.g. AI Czar David Sacks) do not seem to understand -- if America builds an ASI, it won't be 'America's ASI', it will be the ASI's ASI, so to sp...
Matt -- thanks for an insightful post. Mostly agree.
However, on your point 2 about 'technological determinism': I worry that way too many EAs have adopted this view that building ASI is 'inevitable', and that the only leverage we have over the future of AI X-risk is to join AI companies explicitly trying to build ASI, and try to steer them in benign directions that increase control and alignment.
That seems to be the strategy that 80k Hours has actively pushed for years. It certainly helps EAs find lucrative, high-prestige jobs in the Bay Area, and gives th...
Thanks for this analysis. I think your post deserves more attention, so I upvoted it.
We need more game-theory analyses like this, of geopolitical arms race scenarios.
Way too often, people just assume that the US-China rivalry can be modelled simply as a one-shot Prisoner's Dilemma, in which the only equilibrium is mutual defection (from humanity's general interests) through both sides trying to build ASI as soon as possible.
As your post indicates, the relevant game theory must include incomplete and asymmetric information, possible mixed-strategy equ...
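To make that baseline concrete, here's a minimal sketch of the one-shot Prisoner's Dilemma framing mentioned above. The payoff numbers are purely illustrative assumptions of mine (not taken from the post), just enough to show why mutual defection ('both sides race to ASI') is the only Nash equilibrium under that overly simple model:

```python
# A minimal sketch (illustrative payoffs only) of the one-shot Prisoner's
# Dilemma framing of a US-China ASI race. 'Restrain' = cooperate on a pause
# or treaty; 'Race' = defect by racing to build ASI. Payoffs are (US, China).
payoffs = {
    ("Restrain", "Restrain"): (3, 3),  # coordinated restraint
    ("Restrain", "Race"):     (0, 5),  # unilateral restraint
    ("Race",     "Restrain"): (5, 0),
    ("Race",     "Race"):     (1, 1),  # mutual defection
}
strategies = ("Restrain", "Race")

def is_nash(us, china):
    """True if neither side gains by unilaterally switching strategies."""
    u_us, u_cn = payoffs[(us, china)]
    return (all(payoffs[(alt, china)][0] <= u_us for alt in strategies)
            and all(payoffs[(us, alt)][1] <= u_cn for alt in strategies))

print([cell for cell in payoffs if is_nash(*cell)])  # -> [('Race', 'Race')]
```

Of course, once you add repeated interaction, incomplete information, or mixed strategies, that bleak conclusion need not hold -- which is exactly why richer analyses like yours matter.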
Tobias -- I take your point. Sort of.
Just as they say 'There are no atheists in foxholes' [when facing risk of imminent death during combat], I feel that it's OK to pray (literally and/or figuratively) when facing AI extinction risk -- even if one's an atheist or agnostic. (I'd currently identify as an 'agnostic', insofar as the Simulation Hypothesis might be true).
My X handle 'primalpoly' is polysemic, and refers partly to polyamory, but partly to polygenic traits (which I've studied extensively), and partly to some of the hundreds of other wo...
My new interview (48 mins) on AI risks for Bannon's War Room: https://rumble.com/v6z707g-full-battleground-91925.html
This was my attempt to try out a few new arguments, metaphors, and talking points to raise awareness about AI risks among MAGA conservatives. I'd appreciate any feedback, especially from EAs who lean to the Right politically, about which points were most or least compelling.
PS the full video of my 15-minute talk was just posted today on the NatCon YouTube channel; here's the link
David -- I considered myself an atheist for several decades (partly in alignment with my work in evolutionary psychology), and would identify now as an agnostic (insofar as the Simulation Hypothesis has some slight chance of being true, and insofar as 'Simulation-Coders' aren't functionally any different from 'Gods', from our point of view).
And I'm not opposed to various kinds of reproductive tech, regenerative medicine research, polygenic screening, etc.
However, IMHO, too many atheists in the EA/Rationalist/AI Safety subculture have been too hostile or di...
Arepo - thanks for your comment.
To be strictly accurate, perhaps I should have said 'the more you know about AI risks and AI safety, the higher your p(doom)'. I do think that's an empirically defensible claim. Especially insofar as most of the billions of people who know nothing about AI risks have a p(doom) of zero.
And I might have added that thousands of AI devs employed by AI companies to build AGI/ASI have very strong incentives not to learn too much about AI risks and AI safety of the sort that EAs have talked about for years, because such knowl...
Remmelt - thanks for posting this.
Senator Josh Hawley is a big deal, with a lot of influence. I think building alliances with people like him could help slow down reckless AGI development. He may not be as tuned into AI X-risk as your typical EA is, but he is, at least, resisting the power of the pro-AI lobbyists.
Thanks for sharing this.
IMHO, if EAs really want effective AI regulation & treaties, and a reduction in ASI extinction risk, we need to engage more with conservatives, including those currently in power in Washington. And we need to do so using the language and values that appeal to conservatives.
Joel -- have you actually read the Bruce Gilley book?
If you haven't, maybe give it a try before dismissing it as something that's 'extremely useful to avoid associating ourselves with'.
To me, EA involves a moral obligation to seek the truth about contentious political topics, especially those that concern the origins and functioning of successful institutions -- which is what the whole colonialism debate is centrally about -- and not to ignore these topics just to stay inside the Overton window.
Jason -- your reply cuts to the heart of the matter.
Is it ethical to try to do good by taking a job within an evil and reckless industry? To 'steer it' in a better direction? To nudge it towards minimally-bad outcomes? To soften the extinction risk?
I think not. I think the AI industry is evil and reckless, and EAs would do best to denounce it clearly by warning talented young people not to work inside it.
JackM - these alleged 'tremendous' benefits are all hypothetical and speculative.
Whereas the likely X-risks from ASI have been examined in detail by thousands of serious people, and polls show that most people, both inside and outside the AI industry, are deeply concerned by them.
This is why I think it's deeply unethical for 80k Hours to post jobs to work on ASI within AI companies.
Conor -- yes, I understand that you're making judgment calls about what's likely to be net harmful versus helpful.
But your judgment calls seem to assume -- implicitly or explicitly -- that ASI alignment and control are possible, eventually, at least in principle.
Why do you assume that it's possible, at all, to achieve reliable long-term alignment of ASI agents? I see no serious reason to think that it is possible. And I've never seen a single serious thinker make a principled argument that long-term ASI alignment with human values is, in fact, ...
This is a good video; thanks for sharing.
But I have to ask: why is 80k Hours still including job listings for AGI development companies that are imposing extinction risks on humanity?
I see dozens of jobs on the 80k Hours job board for positions at OpenAI, Anthropic, xAI, etc -- and not just in AI safety roles, but in capabilities development, lobbying, propaganda, etc. And even the 'AI safety jobs' seem to be there for safety-washing/PR purposes, with no real influence on slowing down AI capabilities development.
If 80k Hours wants to take a principled stand against reckless AGI development, then please don't advertise jobs where EAs are enticed by $300,000+ salaries to push AGI development.
Good post. Thank you.
But, I fear that you're overlooking a couple of crucial issues:
First, ageism. Lots of young people are simply biased against older people -- assuming that we're closed-minded, incapable of learning, ornery, hard to collaborate with, etc. I've encountered this often in EA.
Second, political bias. In my experience, 'signaling value-alignment' in EA organizations and AI safety groups isn't just a matter of showing familiarity with EA and AI concepts, people, strategies, etc. It's also a matter of signaling left-leaning political valu...
I trust my kids and grandkids to solve their own problems in the future.
I don't trust our generation to make sure our kids and grandkids survive.
Avoiding extinction is the urgent priority; all else can wait. (And, life is already getting better at a rapid rate for the vast majority of the world's people. We don't face any urgent or likely extinction risks other than technologies of our own making.)
I generally support the idea of 80k Hours putting more emphasis on AI risk as a central issue facing our species.
However, I think it's catastrophically naive to frame the issue as 'helping the transition to AGI go well'. This presupposes that there is a plausible path for (1) AGI alignment to be solved, for (2) global AGI safety treaties to be achieved and enforced in time, and for (3) our kids to survive and flourish in a post-AGI world.
I've seen no principled arguments to believe that any of these three things can be achieved. At all. And certainly not i...
Hey Geoffrey,
Niel gave a response to a similar comment below -- I'll just add a few things from my POV:
Strongly endorsing Greg Colbourn's reply here.
When ordinary folks think seriously about AGI risks, they don't need any consequentialism, or utilitarianism, or EA thinking, or the Sequences, or long-termism, or anything fancy like that.
They simply come to understand that AGI could kill all of their kids, and everyone they ever loved, and could ruin everything they and their ancestors ever tried to achieve.
Alex - thanks for the helpful summary of this exciting new book.
It looks like a useful required textbook for my 'Psychology of Effective Altruism' course (syllabus here), next time I teach it!
Well, the main asymmetry here is that the Left-leaning 'mainstream' press doesn't understand or report the Right's concerns about Leftist authoritarianism, whereas it generates and amplifies the Left's concerns about 'far Right authoritarianism'.
So, any EAs who follow 'mainstream' journalism (e.g. CNN, MSNBC, NY Times, WaPo) will tend to repeat their talking points, their analyses, and their biases.
Most reasonable observers, IMHO, understand that the US 'mainstream' press has become very left-leaning and highly biased over the last few decades, especially sinc...
It's unclear what your specific disagreements with my comment are.
Take what I think is the most crucial point I made: that there doesn't seem to be a democratic country in which a major candidate refused to accept defeat in a national election.
Which of these 4 best represents your position?
Yelnats - thanks for this long, well-researched, and thoughtful piece.
I agree that political polarization, destabilization, and potential civil war in the US (and elsewhere) are worthy of more serious consideration within EA, since they amplify many potential catastrophic risks and extinction risks.
However, I would urge you to try much harder to develop a less partisan analysis of these issues. This essay comes across (to me, as a libertarian centrist with some traditionalist tendencies) as a very elaborate rationalization for 'Stop Trump at all costs!', b...
While I agree that the post suffers from an unfortunate left-wing bias, I don't think this bias weakens its conclusions. Most of the discussed anti-polarization interventions are applicable to both right-wing and left-wing autocracy and extremism, so, for the sake of depolarization efforts, it matters relatively little how much authoritarianism is coming from each side of the aisle. The fact that you can also identify anti-democratic tendencies on the left strengthens the case for depolarization.
I would urge you to try much harder to develop a less partisan analysis of these issues. This essay comes across (to me, as a libertarian centrist with some traditionalist tendencies) as a very elaborate rationalization for 'Stop Trump at all costs!', based on the commonly-repeated claim that 'Trump is an existential threat to democracy'.
Threats to democracy aren’t always distributed evenly across party lines. It’s unclear why that should be your prior.
Let’s see what Manifold markets think about this.
Raemon -- I strongly agree, and I don't think EAs should be overthinking this as much as we seem to be in the comments here. Some ethical issues are, actually, fairly simple.
OpenAI, DeepMind, Meta, and even Anthropic are pushing recklessly ahead with AGI capabilities development. We all understand the extinction risks and global catastrophic risks that this imposes on humanity. These companies are not aligned with EA values of preserving human life, civilization, and sentient well-being.
Therefore, instead of 80k Hours advertising jobs at such compani...
Michael -- I agree with your assessment here, both that the CEARCH report is very helpful and informative, and that their estimated likelihood of nuclear war (only 10% per century) seems much lower than is reasonable, and much lower than other expert estimates that I've seen.
Just as a lot can happen in a century of AI development, a lot can happen over the next century that could increase the likelihood of nuclear war.
sammyboiz - I strongly agree. Thanks for writing this.
There seems to be no realistic prospect of solving AGI alignment or superalignment before the AI companies develop AGI or ASI. And they don't care. There are no realistic circumstances under which OpenAI, or DeepMind, or Meta, would say 'Oh no, capabilities research is far outpacing alignment; we need to hire 10x more alignment researchers, put all the capabilities researchers on paid leave, and pause AGI research until we fix this'. It will not happen.
Alternative strategies include formal governance wo...
I think it's still good for some people to work on alignment research. The future is hard to predict, and we can't totally rule out a string of technical breakthroughs, and the overall option space looks gloomy enough (at least from my perspective) that we should be pursuing multiple options in parallel rather than putting all our eggs in one basket.
That said, I think "alignment research pans out to the level of letting us safely wield vastly superhuman AGI in the near future" is sufficiently unlikely that we definitely shouldn't be predicating our plans o...
Richard - this is an important point, nicely articulated.
My impression is that a lot of anti-EA critics actually see scope-sensitivity as actively evil, rather than just a neutral corollary of impartial beneficence or goal-directed altruism. One could psychoanalyze why they think this -- I suspect it's usually more of an emotional defense than a thoughtful application of deontology. But I think EAs need to contend with the fact that to many non-EAs, scope-sensitive reasoning about moral issues comes across as somewhat sociopathic. Which is bizarre, and tragic, but often seems true.
Linch - I agree with your first and last paragraphs.
I have my own doubts about our political institutions, political leaders, and regulators. They have many and obvious flaws. But they're one of the few tools we have to hold corporate power accountable to the general public. We might as well use them, as best we can.
Neel - am I incorrect that Anthropic and DeepMind are still pursuing AGI, despite AI safety and alignment research still lagging far behind AI capabilities research? If they are still pursuing AGI, rather than pausing AGI research, they are no more ethical than OpenAI, in my opinion.
The OpenAI debacles and scandals help illuminate some of the commercial incentives, personal egos, and systemic hubris that sacrifice safety for speed in the AI industry. But there's no reason to think those issues are unique to OpenAI.
If Anthropic came out tomorrow and ...
Manuel - thanks for your thoughts on this. It is important to be politically and socially savvy about this issue.
But, sometimes, a full-on war mode is appropriate, and trying to play nice with an industry just won't buy us anything. Trying to convince OpenAI to pause AGI development until they solve AGI alignment, and sort out other key safety issues, seems about as likely to work as nicely asking Cargill Meat Solutions (which produces 22% of chicken meat in the US) to slow down their chicken production, until they find more humane ways to raise and slaugh...
Ulrik - I understand your point, sort of, but feel free to reverse any of these human-human alignment examples in whatever ways seem more politically palatable.
Personally, I'm fairly worried about agentic, open-source AGIs being used by Jihadist terrorists. But very few of the e/accs and AI devs advocating open-source AGI seem worried by such things.
Scott - thanks for the thoughtful reply; much appreciated.
I think a key strategic difference here is that I'm willing to morally stigmatize the entire AI industry in order to reduce extinction risk, along the lines of this essay I published on EA Forum a year ago.
Moral stigmatization is a powerful but blunt instrument. It doesn't do nuance well. It isn't 'epistemically responsible' in the way that Rationalists and EAs prefer to act. It does require dividing the world into Bad Actors and Non-Bad Actors. It requires, well, stigmatization. And most peop...
I respect you and your opinions a lot, Geoffrey Miller, but I feel Scott is really in the right on this one. I fear that EA is right now giving too much of an impression of being in full-blown war mode against Sam Altman, and I can see this backfiring in a spectacular way, as in him (and the industry) burning all the bridges with EA- and Rationalist-adjacent AI safety. It looks too much like Classical Greek Tragedy - actions to avoid a certain outcome actually making it come to pass. I do understand this is a risk you might consider worth taking if you are completely convinced of the need to dynamite and stop the whole AI industry.
Benjamin - thanks for a thoughtful and original post. Much of your reasoning makes sense from a strictly financial, ROI-maximizing perspective.
But I don't follow your logic in terms of public sentiment regarding AI safety.
You wrote 'Second, an AI crash could cause a shift in public sentiment. People who’ve been loudly sounding caution about AI systems could get branded as alarmists, or people who fell for another “bubble”, and look pretty dumb for a while.'
I don't see why an AI crash would turn people against AI safety concerns.
Indeed, a logical implicati...
adekcz - thanks for writing this. I'm also horrified by OpenAI turning from well-intentioned to apparently reckless and sociopathic, in pushing forward towards AGI capabilities without any serious commitment to AI safety.
The question is whether withholding a bit of money from OpenAI will really change their behavior, or whether a 'ChatGPT boycott' based on safety concerns could be more effective if our money-withholding is accompanied by some noisier public signaling of our moral outrage. I'm not sure what this would look like, exactly, but I imagine it co...
Yes, and also:
(1) PR benefits: Being fit makes people doing public outreach to raise awareness of X-risks more credible, persuasive, charismatic, & energetic, and better able to handle the physical & mental stresses of public engagement.
(2) Survival benefits given global catastrophic (if not X-risk) scenarios: As every serious prepper & survivalist knows, physical fitness is a crucial element of surviving in case of 'SHTF' or 'TEOTWAWKI' scenarios. If infrastructure fails (e.g. internet fails, electricity/water/gas fails, or supply chains...