After Sam Bankman-Fried proved to be a sociopathic fraudster and a massive embarrassment to EA, we did much soul-searching about what EAs did wrong in failing to detect and denounce his sociopathic traits. We spent, collectively, thousands of hours ruminating about what we could do better the next time we encounter an unprincipled leader who acts like it's OK to abuse and betray people to pursue their grandiose vision, who gets caught up in runaway greed for wealth and power, who violates core EA values, and who threatens the long-term flourishing of sentient beings.

Well, that time is now.

Sam Altman at OpenAI has been proving himself, again and again, across many different domains and issues, to be a manipulative, deceptive, unwise, and arrogant leader, driven by hubris to build AGI as fast as possible, with no serious concern about the extinction risks he's imposing on us all.

We are all familiar with the recent controversies and scandals at OpenAI, from the boardroom coup, to the mass violations of intellectual property in training LLMs, to the collapse of the Superalignment Team, to the draconian Non-Disparagement Agreements, to the new Scarlett Johansson voice emulation scandal this week.

The evidence for Sam Altman being a Bad Actor seems, IMHO, at least as compelling as the evidence for Sam Bankman-Fried being a Bad Actor before the FTX collapse in Nov 2022.  And the stakes are much, much higher for humanity (if not for EA's reputation). 

So what are we going to do about it? 

Should we keep encouraging young talented EAs to go work in the AI industry, in the hopes that they can nudge the AI companies from the inside towards safe AGI alignment -- despite the fact that many of them end up quitting, disillusioned and frustrated?

Should we keep making excuses for OpenAI, and Anthropic, and DeepMind, pursuing AGI at recklessly high speed, despite the fact that AI capabilities research is far outpacing AI safety and alignment research?

Should we keep offering the public the hope that 'AI alignment' is a solvable problem, when we have no evidence that aligning AGIs with 'human values' would be any easier than aligning Palestinians with Israeli values, or aligning libertarian atheists with Russian Orthodox values -- or even aligning Gen Z with Gen X values?

I don't know. But if we feel any culpability or embarrassment about the SBF/FTX debacle, I think we should do some hard thinking about how to deal with the OpenAI debacle. 

Many of us work on AI safety, and are concerned about extinction risks. I worry that all of our efforts in these directions could be derailed by a failure to call out the second rich, influential, pseudo-EA, sociopathic Sam that we've learned about in the last two years. If OpenAI 'succeeds' in developing AGI within a few years, long before we have any idea how to control AGI, that could be game over for our species. Especially if Sam Altman and his supporters and sycophants are still running OpenAI.

[Epistemic note: I've written this hastily, bluntly, with emotion, because I think there's some urgency to EA addressing these issues.]

Comments

I think this is more over-learning and institutional scar tissue from FTX. The world isn't divided into Bad Actors and Non-Bad-Actors such that the Bad Actors are toxic and will destroy everything they touch.

There's increasing evidence that Sam Altman is a cut-throat businessman who engages in shady practices. This also describes, for example, Bill Gates and Elon Musk, both of whom also have other good qualities. I wouldn't trust either of them to single-handedly determine the fate of the world, but they both seem like people who can be worked with in the normal paradigm of different interests making deals with each other while appreciating a risk of backstabbing.

I think "Sam Altman does shady business practices, therefore all AI companies are bad actors and alignment is impossible" is a wild leap. We're still in the early (maybe early middle) stages of whatever is going to happen. I don't think this is the time to pick winners and put all eggs in a single strategy. Besides, what's the alternative? Policy? Do you think politicians aren't shady cut-throat bad actors? That the other activists we would have to work alongside aren't? Every strategy involves shifting semi-coalitions with shady cut-throat bad actors of some sort of another, you just try to do a good job navigating them and keep your own integrity intact.

If your point is "don't trust Sam Altman absolutely to pursue our interests above his own", point taken. But there are vast gulfs between "don't trust him absolutely" and "abandon all strategies that come into contact with him in any way". I think the middle ground here is to treat him approximately how I think most people here treat Elon Musk. He's a brilliant but cut-throat businessman who does lots of shady practices. He seems to genuinely have some kind of positive vision for the world, or want for PR reasons to seem like he has a positive vision for the world, or have a mental makeup incapable of distinguishing those two things. He's willing to throw the AI safety community the occasional bone when it doesn't interfere with business too much. We don't turn ourselves into the We Hate Elon Musk movement or avoid ever working with tech companies because they contain people like Elon Musk. We distance ourselves from him enough that his PR problems aren't our PR problems (already done in Sam's case; thanks to the board the average person probably thinks of us as weird anti-Sam-Altman fanatics) describe his positive and negative qualities honestly if asked, try to vaguely get him to take whatever good advice we have that doesn't conflict with his business too much, and continue having a diverse portfolio of strategies at any given time. Or, I mean, part of the shifting semi-coalitions is that if some great opportunity to get rid of him comes, we compare him to the alternatives and maybe take it. But we're so far away from having that alternative that pining after it is a distraction from the real world.

Scott - thanks for the thoughtful reply; much appreciated.

I think a key strategic difference here is that I'm willing to morally stigmatize the entire AI industry in order to reduce extinction risk, along the lines of this essay I published on EA Forum a year ago. 

Moral stigmatization is a powerful but blunt instrument. It doesn't do nuance well. It isn't 'epistemically responsible' in the way that Rationalists and EAs prefer to act. It does require dividing the world into Bad Actors and Non-Bad Actors. It requires, well, stigmatization. And most people aren't comfortable stigmatizing people who 'seem like us' -- e.g. AI devs who share with most EAs traits such as high intelligence, high openness, technophilia, liberal values, and 'good intentions', broadly construed.

But, I don't see any practical way of slowing AI capabilities development without increasing the moral stigmatization of the AI industry. And Sam Altman has rendered himself highly, highly stigmatizable. So, IMHO, we might as well capitalize on that, to help save humanity from his hubris, and the hubris of other AI leaders.

(And, as you point out, formal regulation and gov't policy also come with their own weaknesses, vested interests, and bad actors. So, although EAs tend to act as if formal gov't regulation is somehow morally superior to the stigmatization strategy, it's not at all clear to me that it really is.)

I respect you and your opinions a lot, Geoffrey Miller, but I feel Scott is really in the right on this one. I fear that EA is right now giving too strong an impression of being in full-blown war mode against Sam Altman, and I can see this backfiring in a spectacular way, as in him (and the industry) burning all bridges with EA- and Rationalist-adjacent AI safety. It looks too much like Classical Greek Tragedy - actions taken to avoid a certain outcome actually making it come to pass. I do understand this is a risk you might consider worth taking if you are completely convinced of the need to dynamite and stop the whole AI industry.

Manuel - thanks for your thoughts on this. It is important to be politically and socially savvy about this issue.

But, sometimes, a full-on war mode is appropriate, and trying to play nice with an industry just won't buy us anything. Trying to convince OpenAI to pause AGI development until they solve AGI alignment, and sort out other key safety issues, seems about as likely to work as nicely asking Cargill Meat Solutions (which produces 22% of chicken meat in the US) to slow down their chicken production, until they find more humane ways to raise and slaughter chickens. 

I don't really care much if the AI industry severs ties with EAs and Rationalists. Instead, I care whether we can raise awareness of the AI safety issues with the general public, and politicians, quickly and effectively enough to morally stigmatize the AI industry. 

Sometimes, when it comes to moral issues, the battle lines have already been drawn, and we have to choose sides. So far, I think EAs have been far too gullible and naive about AI safety and the AI industry, and have chosen too often to take the side of the AI industry, rather than the side of humanity.

But we’re so far away from having that alternative that pining after it is a distraction from the real world.

For one thing, we could try to make OpenAI/SamA toxic to invest in or do business with, and hope that other AI labs either already have better governance / safety cultures, or are greatly incentivized to improve on those fronts. If we (EA as well as the public in general) give him a pass (treat him as a typical/acceptable businessman), what lesson does that convey to others?

Yeah, I also don't think we are that far away. OpenAI seems like it's just a few more scandals like the past week's away from implosion. Or at least, Sam's position as CEO seems to be on shaky ground again, and this time he won't have unanimous support from the rank-and-file employees.

I think there's ample public evidence that Sam Altman is substantially less trustworthy than average for tech CEOs. Hopefully more private evidence will come out later that mostly exonerates him and puts him closer in character to "typical tech CEO", but I don't think that will happen. My guess right now is that the private evidence that slowly filters out will make him look worse than what the public currently thinks, not better.

That said, I agree that "abandon all strategies that come into contact with him in any way" is probably unrealistic. Churchill worked with Stalin, there was a post-Cuban missile crisis hotline between the US and USSR, etc. 

I also agree that OP was vastly overreaching when he said the public will identify EA with Sam Altman. I think that's pretty unlikely as of the board exodus, if not earlier. 

We're still in the early (maybe early middle) stages of whatever is going to happen. I don't think this is the time to pick winners and put all eggs in a single strategy. Besides, what's the alternative? Policy? Do you think politicians aren't shady cut-throat bad actors? That the other activists we would have to work alongside aren't? Every strategy involves shifting semi-coalitions with shady cut-throat bad actors of some sort or another; you just try to do a good job navigating them and keep your own integrity intact.

I sort of agree, but I also think policy has more natural checks and balances. Part of the hard work of doing good as a society is that you try to shape institutions and incentives to produce good behavior, rather than relying primarily on heroic acts of personal integrity. My own guess is that thinking of "AI company" as an institution and set of incentives would make it clear that it's worse for safety than other plausible structures, though I understand that some within EA disagree.

Linch - I agree with your first and last paragraphs. 

I have my own doubts about our political institutions, political leaders, and regulators. They have many and obvious flaws. But they're one of the few tools we have to hold corporate power accountable to the general public. We might as well use them, as best we can.

Agreed with the general thrust of this post. I'm trying to do my part, despite a feeling of "PR/social/political skills is so far from what I think of as my comparative advantage. What kind of a world am I living in, that I'm compelled to do these things?"

I should add that there may be a risk of over-correcting (focusing too much on OpenAI and Sam Altman), and we shouldn't forget about other major AI labs and how to improve their transparency, governance, safety cultures, etc. This project (Zach Stein-Perlman's AI Lab Watch) seems like a good start, if anyone is interested in a project to support or contribute ideas to.

Should we keep making excuses for OpenAI, and Anthropic, and DeepMind, pursuing AGI at recklessly high speed, despite the fact that AI capabilities research is far outpacing AI safety and alignment research?

I don't at all follow your jump from "OpenAI is wracked by scandals" to "other AGI labs bad" - Anthropic and GDM had nothing to do with Sam's behaviour, and Anthropic's co-founders actively chose to leave OpenAI. I know you already held this position, but it feels like you're arguing that Sam's scandals should change other people's positions here. I don't see how it gives much evidence either way for how the EA community should engage with Anthropic or DeepMind.

I definitely agree that this gives meaningful evidence on whether eg 80K should still recommend working at OpenAI (or even working on alignment at OpenAI, though that's far less clear cut IMO)

Neel - am I incorrect that Anthropic and DeepMind are still pursuing AGI, despite AI safety and alignment research still lagging far behind AI capabilities research? If they are still pursuing AGI, rather than pausing AGI research, they are no more ethical than OpenAI, in my opinion. 

The OpenAI debacles and scandals help illuminate some of the commercial incentives, personal egos, and systemic hubris that sacrifice safety for speed in the AI industry. But there's no reason to think those issues are unique to OpenAI.

If Anthropic came out tomorrow and said, 'OK, everyone, this AGI stuff is way too dangerous to pursue at the moment; we're shutting down capabilities research for a decade until AI safety can start to catch up', then they would have my respect. 

when we have no evidence that aligning AGIs with 'human values' would be any easier than aligning Palestinians with Israeli values, or aligning libertarian atheists with Russian Orthodox values -- or even aligning Gen Z with Gen X values?

When I ask an LLM to do something, it usually outputs its best attempt at being helpful. How is this not some evidence that alignment is easier than inter-human alignment?

LLMs are not AGIs in the sense being discussed, they are at best proto-AGI. That means the logic fails at exactly the point where it matters.

When I ask a friend to give me a dollar when I'm short, they often do so. Is this evidence that I can borrow a billion dollars?  Should I go on a spending spree on the basis that I'll be able to get the money to pay for it from those friends?

When I lift, catch, or throw a 10 pound weight, I usually manage it without hurting myself. Is this evidence that weight isn't an issue? Should I try to catch a 1,000 pound boulder?

'AI alignment' isn't about whether a narrow, reactive, non-agentic AI system (such as a current LLM) seems 'helpful'.

It's about whether an agentic AI that can make its own decisions and take its own autonomous actions will make decisions that are aligned with general human values and goals.

I would suggest striking the following:

"Palestinians with Israeli values"

Perhaps unnecessary to say this, but in case it is helpful: the way this is structured in relation to the preceding part on AI and humans could be perceived as equating Palestinians with (potentially dangerous) machines and Israelis with humans. The piece stands very well on its own without these four words.

Ulrik - I understand your point, sort of, but feel free to reverse any of these human-human alignment examples in whatever ways seem more politically palatable.

Personally, I'm fairly worried about agentic, open-source AGIs being used by Jihadist terrorists. But very few of the e/accs and AI devs advocating open-source AGI seem worried by such things.

I think this comment makes this even worse, some readers might perceive you as now equating Palestinians with terrorists. I really do not think this sort of language belongs on a forum with a diversity of people from all walks of life (and ideally does not belong anywhere). That people upvote your comment is also worrying. Let us try to keep the forum a place where as many people as possible feel comfortable and where we check our own biases and collaborate on creating an atmosphere reflecting wide ranging altruism.

I think you're significantly misinterpreting what Geoffrey is trying to say and I don't like the chilling effect caused by trying to avoid making an analogy that could be offensive to anyone who misinterprets you.
