All of Geoffrey Miller's Comments + Replies

Yes, and also: 

(1) PR benefits: Being fit makes people doing public outreach to raise awareness of X-risks more credible, persuasive, charismatic, & energetic, and better able to handle the physical & mental stresses of public engagement.

(2) Survival benefits given global catastrophic (if not X-risk) scenarios: As every serious prepper & survivalist knows, physical fitness is a crucial element of surviving in case of 'SHTF' or 'TEOTWAWKI' scenarios. If infrastructure fails (e.g. internet fails, electricity/water/gas fails, or supply chains... (read more)

Holly -- 

I think the frustrating thing here, for you and me, is that, compared to its AI safety fiascos, EA did so much soul-searching after the Sam Bankman-Fried fiasco with the FTX fraud in 2022. We took the SBF/FTX debacle seriously as a failure of EA people, principles, judgment, mentorship, etc. We acknowledged that it hurt EA's public reputation, and we tried to identify ways to avoid making the same catastrophic mistakes again.

But as far as I've seen, EA has done very little soul-searching for its complicity in helping to launch OpenAI, and the... (read more)

Holly Elmore ⏸️ 🔸
I thought EA was too eager to accept fault for a few people committing financial crimes out of their sight. The average EA actually is complicit in the safetywashing of OpenAI and Anthropic! Maybe that’s why they don’t want to think about it…

Holly --

Thanks for this assertive, candid, blunt, challenging post. 

You and I have, I think, reached similar views on some of the critical weaknesses of EA as it's currently led, run, funded, and defended.

All too often, 'EA discourse norms' have been overly influenced by LessWrong discourse norms, where an ivory-tower fetishization of 'rational discourse', 'finding cruxes', 'updating priors', 'avoiding ad hominems', 'steel-manning arguments', etc. becomes a substitute for effective social or political action in the world as it is, given human nat... (read more)

Matt - thanks for the quick and helpful reply.

I think the main benefit of explicitly modeling ASI as being a 'new player' in the geopolitical game is that it highlights precisely the idea that the ASI will NOT just automatically be a tool used by China or the US -- but rather that it will have its own distinctive payoffs, interests, strategies, and agendas. That's the key issue that many current political leaders (e.g. AI Czar David Sacks) do not seem to understand -- if America builds an ASI, it won't be 'America's ASI', it will be the ASI's ASI, so to sp... (read more)

enterthewoods
Having given this a bit more thought, I think the starting point for something like this might be to generalize and assume the ASI just has "different" interests (we don't know what those interests are right now both because we don't know how ASI will be developed and because we haven't solved alignment yet), and then also to assume that the ASI has just enough power to make it interesting to model (not because this assumption is realistic, but because if the ASI was too weak or too strong relative to humans, the modeling exercise would be uninformative).  I don't know where to go from here, however. Maybe Buterin's def/acc world that I linked in my earlier comment would be a good scenario to start with. 

Matt -- thanks for an insightful post. Mostly agree.

However, on your point 2 about 'technological determinism': I worry that way too many EAs have adopted this view that building ASI is 'inevitable', and that the only leverage we have over the future of AI X-risk is to join AI companies explicitly trying to build ASI, and try to steer them in benign directions that increase control and alignment.

That seems to be the strategy that 80k Hours has actively pushed for years. It certainly helps EAs find lucrative, high-prestige jobs in the Bay Area, and gives th... (read more)

Thanks for this analysis. I think your post deserves more attention, so I upvoted it.

We need more game-theory analyses like this, of geopolitical arms race scenarios. 

Way too often, people just assume that the US-China rivalry can be modelled simply as a one-shot Prisoner's Dilemma, in which the only equilibrium is mutual defection (from humanity's general interests) through both sides trying to build ASI as soon as possible.

As your post indicates, the relevant game theory must include incomplete and asymmetric information, possible mixed-strategy equ... (read more)
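To make the contrast concrete, here is a minimal sketch in Python of the naive one-shot Prisoner's Dilemma framing criticized above. The payoff numbers and strategy labels are purely illustrative assumptions, not taken from the post; the sketch just verifies that mutual defection (both sides racing to build ASI) is the only pure-strategy Nash equilibrium under that framing:

```python
from itertools import product

# Illustrative payoffs for a one-shot Prisoner's Dilemma framing of an AI arms race.
# Strategies: "restrain" (honor an agreement) or "race" (defect and build ASI).
# Payoffs are (US, China); the numbers are made up for illustration only.
payoffs = {
    ("restrain", "restrain"): (3, 3),   # mutual restraint
    ("restrain", "race"):     (0, 5),   # unilateral restraint gets exploited
    ("race",     "restrain"): (5, 0),
    ("race",     "race"):     (1, 1),   # mutual defection: the arms race
}
strategies = ["restrain", "race"]

def is_nash(profile):
    """A profile is a pure-strategy Nash equilibrium if neither player
    can gain by unilaterally switching strategies."""
    us, cn = profile
    us_payoff, cn_payoff = payoffs[profile]
    best_us = all(payoffs[(alt, cn)][0] <= us_payoff for alt in strategies)
    best_cn = all(payoffs[(us, alt)][1] <= cn_payoff for alt in strategies)
    return best_us and best_cn

equilibria = [p for p in product(strategies, repeat=2) if is_nash(p)]
print(equilibria)  # [('race', 'race')] -- mutual defection is the only equilibrium
```

The point of the comment above is that once incomplete and asymmetric information, mixed strategies, and possibly the ASI itself as a third player are added, the equilibrium structure stops being this simple.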

enterthewoods
Thanks for the comment! I agree that more game-theory analysis of arms race scenarios could be useful. I haven't been able to find much other analysis, but if you know of any sources where I can learn more, that would be great.  As for the ASI being "another player", my naive initial reaction is that it feels like an ASI that isn't 100% controlled/aligned probably just results in everyone dying really quickly, so it feels somewhat pointless to model our conflicting interests with it using game theory. However, maybe there are worlds such as this one where the playing field is even enough such that complex interactions between humans and the ASI could be interesting to try to model. If you have any initial thoughts on this I would love to hear them. 

Tobias -- I take your point. Sort of. 

Just as they say 'There are no atheists in foxholes' [when facing risk of imminent death during combat], I feel that it's OK to pray (literally and/or figuratively) when facing AI extinction risk -- even if one's an atheist or agnostic. (I'd currently identify as an 'agnostic', insofar as the Simulation Hypothesis might be true). 

My X handle 'primalpoly' is polysemic, and refers partly to polyamory, but partly to polygenic traits (which I've studied extensively), and partly to some of the hundreds of other wo... (read more)

My new interview (48 mins) on AI risks for Bannon's War Room: https://rumble.com/v6z707g-full-battleground-91925.html

This was my attempt to try out a few new arguments, metaphors, and talking points to raise awareness about AI risks among MAGA conservatives. I'd appreciate any feedback, especially from EAs who lean to the Right politically, about which points were most or least compelling.

Tobias Häberli
You mention having “an ambition, even a prayer for [the AI developers]” (~12 min). You might mean this figuratively, but many viewers of that channel will probably take it literally. Since you've said elsewhere that you don't believe in God and don't practice any religion, they will likely see this as a contradiction and suspect you’re not being genuine once you become more popular. My guess is that it's better to be upfront about not being a Christian in order to retain authenticity. You'll probably always be regarded as an outsider by the conservative right (just for another example: your Twitter handle includes 'poly', and you've spoken at length online about being polyamorous), but you could hope to be perceived as 'the outsider who gets us'. This kinda worked for a while for Sam Harris or Milo Yiannopoulos.

PS the full video of my 15-minute talk was just posted today on the NatCon YouTube channel; here's the link

David -- I considered myself an atheist for several decades (partly in alignment with my work in evolutionary psychology), and would identify now as an agnostic (insofar as the Simulation Hypothesis has some slight chance of being true, and insofar as 'Simulation-Coders' aren't functionally any different from 'Gods', from our point of view).

And I'm not opposed to various kinds of reproductive tech, regenerative medicine research, polygenic screening, etc.

However, IMHO, too many atheists in the EA/Rationalist/AI Safety subculture have been too hostile or di... (read more)

Arepo - thanks for your comment.

To be strictly accurate, perhaps I should have said 'the more you know about AI risks and AI safety, the higher your p(doom)'. I do think that's an empirically defensible claim. Especially insofar as most of the billions of people who know nothing about AI risks have a p(doom) of zero.

And I might have added that thousands of AI devs employed by AI companies to build AGI/ASI have very strong incentives not to learn too much about AI risks and AI safety of the sort that EAs have talked about for years, because such knowl... (read more)

PeterMcCluskey
This seems pretty false. E.g. see this survey.
Arepo
That makes it sound like a continuous function when it isn't really. Sure, people who've never or barely thought about it and then proceed to do so are likely to become more concerned - since they have a base of ~0 concern. That doesn't mean the effect will have the same shape or even the same direction for people who have a reasonable initial familiarity with the issue. Inasmuch as there is such an effect, it's also hard to separate from reverse causation, where people who are more concerned about such outcomes tend to engage more with the arguments for them.

As for incentives - sure, that's an effect. I also think the AI safety movement has its own cognitive biases: orgs like MIRI have an operational budget in the 10s if not 100s of millions; people who believe in high p(doom) and short timelines have little reason to present arguments fairly, leading to silly claims like the orthogonality thesis showing far more than it does, or to gross epistemic behaviour by AI safety orgs.

In any case, if the claim is that knowing more makes people have a higher p(doom), then you have to evidence that claim, not argue that they would do if it weren't for cognitive biases.

Finally, if you want to claim that the people working at those orgs don't actually know much about AI risks and AI safety, and so wouldn't be counterpoints to your revised claim, I think you need to evidence that. The arguments really aren't that complicated, and they've been out there for decades, and more recently shouted loudly by people trying to stop, pause or otherwise impede the work of the people working on AI capabilities - to the point where I find it hard to imagine there's anyone working on capabilities who doesn't have some level of familiarity with them (and, I would guess, substantially more so than people on the left hand side of the discontinuous function).

Remmelt - thanks for posting this. 

Senator Josh Hawley is a big deal, with a lot of influence. I think building alliances with people like him could help slow down reckless AGI development. He may not be as tuned into AI X-risk as your typical EA is, but he is, at least, resisting the power of the pro-AI lobbyists.

Thanks for sharing this. 

IMHO, if EAs really want effective AI regulation & treaties, and a reduction in ASI extinction risk, we need to engage more with conservatives, including those currently in power in Washington. And we need to do so using the language and values that appeal to conservatives.  

Joel -- have you actually read the Bruce Gilley book? 

If you haven't, maybe give it a try before dismissing it as something that's 'extremely useful to avoid associating ourselves with'.

To me, EA involves a moral obligation to seek the truth about contentious political topics, especially those that concern the origins and functioning of successful institutions -- which is what the whole colonialism debate is centrally about. And not ignoring these topics just to stay inside the Overton window.

JoelMcGuire
I think this maybe highlights that it's best, when possible, to bypass these acrimonious debates and avoid the loaded language they use. We probably shouldn't take them too seriously (as in assuming anything that vaguely pattern-matches with colonialism is immediately bad), but this doesn't mean embracing colonialism revisionism either.

I think it's probably extremely useful to avoid associating ourselves with arguments such as those that try to revise colonialism into "actually good". I think that even if we can think of cases where one could argue "colonialism good" (maybe French Guiana), it's probably best to dissociate from such arguments and instead have a conversation about the origin of institutions and which ones lead to better and worse outcomes.

"Colonialism good" feels sort of like trying to reclaim eugenics. Just come up with a different term that doesn't mean the thing the people you're arguing against will think of, like "slavery, the Belgian Congo, or the partition of India". And if you're trying to defend those things, please, for the love of the Good, keep EA out of it.

Jason -- your reply cuts to the heart of the matter.

Is it ethical to try to do good by taking a job within an evil and reckless industry? To 'steer it' in a better direction? To nudge it towards minimally-bad outcomes? To soften the extinction risk?

I think not. I think the AI industry is evil and reckless, and EAs would do best to denounce it clearly by warning talented young people not to work inside it.

JackM - these alleged 'tremendous' benefits are all hypothetical and speculative. 

Whereas the likely X-risks from ASI have been examined in detail by thousands of serious people, and polls show that most people, both inside and outside the AI industry, are deeply concerned about them.

This is why I think it's deeply unethical for 80k Hours to post jobs to work on ASI within AI companies. 

JackM
I share your concern about x-risk from ASI; that's why I want safety-aligned people in these roles as opposed to people who aren't concerned about the risks. There are genuine proposals on how to align ASI. I'm not sure what the chances are, but I think it's possible. I think the most promising proposals involve using advanced AI to assist with oversight, interpretability, and recursive alignment tasks, eventually building a feedback loop where aligned systems help align more powerful successors.

I don't agree that the benefits are speculative, by the way. DeepMind has already won the Nobel Prize in Chemistry for their work on protein folding.

EDIT: 80,000 Hours also doesn't seem to promote all roles, only those which contribute to safety, which seems reasonable to me.

Conor -- yes, I understand that you're making judgment calls about what's likely to be net harmful versus helpful.

But your judgment calls seem to assume -- implicitly or explicitly -- that ASI alignment and control are possible, eventually, at least in principle. 

Why do you assume that it's possible, at all, to achieve reliable long-term alignment of ASI agents?  I see no serious reason to think that it is possible. And I've never seen a single serious thinker make a principled argument that long-term ASI alignment with human values is, in fact, ... (read more)

This is a good video; thanks for sharing.

But I have to ask: why is 80k Hours still including job listings for AGI development companies that are imposing extinction risks on humanity?

I see dozens of jobs on the 80k Hours job board for positions at OpenAI, Anthropic, xAI, etc -- and not just in AI safety roles, but in capabilities development, lobbying, propaganda, etc. And even the 'AI safety jobs' seem to be there for safety-washing/PR purposes, with no real influence on slowing down AI capabilities development.

If 80k Hours wants to take a principled stand against reckless AGI development, then please don't advertise jobs where EAs are enticed by $300,000+ salaries to push AGI development.

Conor Barnes 🔶
Hi Geoffrey, I'm curious to know which roles we've posted which you consider to be capabilities development -- our policy is to not post capabilities roles at the frontier companies. We do aim to post jobs that are meaningfully able to contribute to safety and aren’t just safety-washing (and our views are discussed much more in depth here). Of course, we're not infallible, so if people see particular jobs they think are safety in name only, we always appreciate that being raised.
JackM
My view is these roles are going to be filled regardless. Wouldn't you want someone who is safety-conscious in them?

Good post. Thank you.

But, I fear that you're overlooking a couple of crucial issues:

First, ageism. Lots of young people are simply biased against older people -- assuming that we're closed-minded, incapable of learning, ornery, hard to collaborate with, etc. I've encountered this often in EA. 

Second, political bias. In my experience, 'signaling value-alignment' in EA organizations and AI safety groups isn't just a matter of showing familiarity with EA and AI concepts, people, strategies, etc. It's also a matter of signaling left-leaning political valu... (read more)

Patrick Gruban 🔸
I'm not sure what age group you're referring to, but as someone who just turned 50, I can't relate. I did have to upskill not only on subject-matter expertise (as mentioned in the post) but also on the ways that people of that age group and the community are communicating, but this didn't seem much different than switching fields. The field emphasizes open-minded truth-seeking, and my experience has shown that people are receptive to my ideas if I am open to theirs.

The EA community as a whole is indeed more left-leaning, but I feel that this is less the case in AI safety nonprofits than in other nonprofit fields. It took me some time to realize that my discomfort about being the only person with different views in the room didn't mean that I was unwelcome. At least I was with people who were more engaged in EA or who were working in this field.

At the same time, organizations that are not aware of their own biases sometimes end up hiring people who are very similar to their founders or are unable to integrate more experienced professionals. This is something to be aware of.
Geoffrey Miller
100% agree

I trust my kids and grandkids to solve their own problems in the future.

I don't trust our generation to make sure our kids and grandkids survive.

Avoiding extinction is the urgent priority; all else can wait. (And, life is already getting better at a rapid rate for the vast majority of the world's people. We don't face any urgent or likely extinction risks other than technologies of our own making.)

I generally support the idea of 80k Hours putting more emphasis on AI risk as a central issue facing our species.

However, I think it's catastrophically naive to frame the issue as 'helping the transition to AGI go well'. This presupposes that there is a plausible path for (1) AGI alignment to be solved, for (2) global AGI safety treaties to be achieved and enforced in time, and for (3) our kids to survive and flourish in a post-AGI world.

I've seen no principled arguments to believe that any of these three things can be achieved. At all. And certainly not i... (read more)

Hey Geoffrey,

Niel gave a response to a similar comment below -- I'll just add a few things from my POV:

  • I'd guess that pausing (incl. for a long time) or slowing down AGI development would be good for helping AGI go well if it could be done by everyone / enforced / etc. -- so figuring out how to do that would be in scope re this more narrow focus. So, e.g., figuring out how an indefinite pause could work (maybe in a COVID-crisis-like world where the Overton window shifts?) seems helpful
  • I (& others at 80k) are just a lot less pessimistic vis a vis the prospect
... (read more)

Strongly endorsing Greg Colbourn's reply here. 

When ordinary folks think seriously about AGI risks, they don't need any consequentialism, or utilitarianism, or EA thinking, or the Sequences, or long-termism, or anything fancy like that.

They simply come to understand that AGI could kill all of their kids, and everyone they ever loved, and could ruin everything they and their ancestors ever tried to achieve.

Alex - thanks for the helpful summary of this exciting new book.

It looks like a useful required textbook for my 'Psychology of Effective Altruism' course (syllabus here), next time I teach it!

Well, the main asymmetry here is that the Left-leaning 'mainstream' press doesn't understand or report the Right's concerns about Leftist authoritarianism, but it generates and amplifies the Left's concerns about 'far Right authoritarianism'.

So, any EAs who follow 'mainstream' journalism (e.g. CNN, MSNBC, NY Times, WaPo) will tend to repeat their talking points, their analyses, and their biases.

Most reasonable observers, IMHO, understand that the US 'mainstream' press has become very left-leaning and highly biased over the last few decades, especially sinc... (read more)

It's unclear what your specific disagreements with my comment are. 

Take what I think is the most crucial point I made: that there doesn't seem to be a democratic country in which a major candidate refused to accept defeat in a national election.

Which of these 4 best represents your position?

  • Trump won the 2020 election
  • Trump did not refuse to accept the results of the 2020 election or try to subvert it, that's just a leftist media narrative. If you talk to him, he'll say he accepts that Biden won fair and square.
  • Trump did try to subvert the election or
... (read more)

Yelnats - thanks for this long, well-researched, and thoughtful piece.

I agree that political polarization, destabilization, and potential civil war in the US (and elsewhere) are worthy of more serious consideration within EA, since they amplify many potential catastrophic risks and extinction risks.

However, I would urge you to try much harder to develop a less partisan analysis of these issues. This essay comes across (to me, as a libertarian centrist with some traditionalist tendencies) as a very elaborate rationalization for 'Stop Trump at all costs!', b... (read more)

While I agree that the post suffers from an unfortunate left-wing bias, I don't think this bias weakens its conclusions. Most of the discussed anti-polarization interventions are applicable to both right-wing and left-wing autocracy and extremism, so, for the sake of depolarization efforts, it matters relatively little how much authoritarianism is coming from each side of the aisle. The fact that you can also identify anti-democratic tendencies on the left strengthens the case for depolarization.

"I would urge you to try much harder to develop a less partisan analysis of these issues. This essay comes across (to me, as a libertarian centrist with some traditionalist tendencies) as a very elaborate rationalization for 'Stop Trump at all costs!', based on the commonly-repeated claim that 'Trump is an existential threat to democracy'."

Threats to democracy aren’t always distributed evenly across party lines. It’s unclear why that should be your prior.

Let’s see what Manifold markets think about this.

https://manifold.markets/Siebe/if-trump-is-... (read more)

Peter -- This is a valuable comment; thanks for adding a lot more detail about this lab.

Vasco -- understood. The estimate still seems much lower than most other credible estimates I've seen. And much lower than it felt when we were living through the 70s and 80s, and the Cold War was still very much a thing.

Raemon -- I strongly agree, and I don't think EAs should be overthinking this as much as we seem to be in the comments here. Some ethical issues are, actually, fairly simple.

OpenAI, Deepmind, Meta, and even Anthropic are pushing recklessly ahead with AGI capabilities development. We all understand the extinction risks and global catastrophic risks that this imposes on humanity. These companies are not aligned with EA values of preserving human life, civilization, and sentient well-being. 

Therefore, instead of 80k Hours advertising jobs at such compani... (read more)

Michael -- I agree with your assessment here, both that the CEARCH report is very helpful and informative, but also that their estimated likelihood of nuclear war (only 10% per century) seems much lower than is reasonable, and much lower than other expert estimates that I've seen. 

Just as a lot can happen in a century of AI development, a lot can happen over the next century that could increase the likelihood of nuclear war.

Vasco Grilo🔸
Hi Geoffrey, I just wanted to clarify that your "likelihood of nuclear (only 10% per century [9.44 % = 1 - (1 - 9.91*10^-4)^100])" refers to a nuclear conflict with at least 100 nuclear detonations involving China, the US and Russia, not just to the chance of at least 1 nuclear detonation (which would be higher).
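For readers who want to check the bracketed arithmetic, here is a minimal sketch; the annual probability is the figure quoted in the comment above, and the variable names are mine:

```python
# Convert a constant annual probability of a large-scale nuclear conflict
# into the probability of at least one such conflict over a century.
annual_p = 9.91e-4  # annual probability quoted in the comment above

century_p = 1 - (1 - annual_p) ** 100  # chance of at least one occurrence in 100 years
print(f"{century_p:.2%}")  # prints 9.44%, matching the bracketed figure
```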

sammyboiz - I strongly agree. Thanks for writing this.

There seems to be no realistic prospect of solving AGI alignment or superalignment before the AI companies develop AGI or ASI. And they don't care. There are no realistic circumstances under which OpenAI, or DeepMind, or Meta, would say 'Oh no, capabilities research is far outpacing alignment; we need to hire 10x more alignment researchers, put all the capabilities researchers on paid leave, and pause AGI research until we fix this'. It will not happen.

Alternative strategies include formal governance wo... (read more)

sammyboiz🔸
Thanks for your comment!

Will - could you expand a bit more on what you're looking for? I found this question a little too abstract to answer, and others might share this confusion.

Rob - excellent post. Wholeheartedly agree. 

This is the time for EAs to radically rethink our whole AI safety strategy. Working on 'technical AI alignment' is not going to work in the time that we probably have, given the speed of AI capabilities development.

I think it's still good for some people to work on alignment research. The future is hard to predict, and we can't totally rule out a string of technical breakthroughs, and the overall option space looks gloomy enough (at least from my perspective) that we should be pursuing multiple options in parallel rather than putting all our eggs in one basket.

That said, I think "alignment research pans out to the level of letting us safely wield vastly superhuman AGI in the near future" is sufficiently unlikely that we definitely shouldn't be predicating our plans o... (read more)

Richard - this is an important point, nicely articulated. 

My impression is that a lot of anti-EA critics actually see scope-sensitivity as actively evil, rather than just a neutral corollary of impartial beneficence or goal-directed altruism. One could psychoanalyze why they think this -- I suspect it's usually more of an emotional defense than a thoughtful application of deontology. But I think EAs need to contend with the fact that to many non-EAs, scope-sensitive reasoning about moral issues comes across as somewhat sociopathic. Which is bizarre, and tragic, but seems often true.

I think, at this point, EAs (including 80k Hours) publicly boycotting OpenAI, and refusing to work there, and explaining why, clearly and forcefully, would do a lot more good than trying to work there and nudge them from the inside towards not imposing X risks on humanity. 

Linch - I agree with your first and last paragraphs. 

I have my own doubts about our political institutions, political leaders, and regulators. They have many and obvious flaws. But they're one of the few tools we have to hold corporate power accountable to the general public. We might as well use them, as best we can.

Neel - am I incorrect that Anthropic and DeepMind are still pursuing AGI, despite AI safety and alignment research still lagging far behind AI capabilities research? If they are still pursuing AGI, rather than pausing AGI research, they are no more ethical than OpenAI, in my opinion. 

The OpenAI debacles and scandals help illuminate some of the commercial incentives, personal egos, and systemic hubris that sacrifices safety for speed in the AI industry. But there's no reason to think those issues are unique to OpenAI.

If Anthropic came out tomorrow and ... (read more)

Manuel - thanks for your thoughts on this. It is important to be politically and socially savvy about this issue.

But, sometimes, a full-on war mode is appropriate, and trying to play nice with an industry just won't buy us anything. Trying to convince OpenAI to pause AGI development until they solve AGI alignment, and sort out other key safety issues, seems about as likely to work as nicely asking Cargill Meat Solutions (which produces 22% of chicken meat in the US) to slow down their chicken production, until they find more humane ways to raise and slaugh... (read more)

Abby - good suggestions, thank you. I think I will assign some Robert Miles videos! And I'll think about the human value datasets.

Ulrik - I understand your point, sort of, but feel free to reverse any of these human-human alignment examples in whatever ways seem more politically palatable.

Personally, I'm fairly worried about agentic, open-source AGIs being used by Jihadist terrorists. But very few of the e/accs and AI devs advocating open-source AGI seem worried by such things.

Benevolent_Rain
I think this comment makes this even worse: some readers might perceive you as now equating Palestinians with terrorists. I really do not think this sort of language belongs on a forum with a diversity of people from all walks of life (and ideally does not belong anywhere). That people upvote your comment is also worrying. Let us try to keep the forum a place where as many people as possible feel comfortable, and where we check our own biases and collaborate on creating an atmosphere reflecting wide-ranging altruism.

'AI alignment' isn't about whether a narrow, reactive, non-agentic AI system (such as a current LLM) seems 'helpful'.

It's about whether an agentic AI that can make its own decisions and take its own autonomous actions will make decisions that are aligned with general human values and goals.

Scott - thanks for the thoughtful reply; much appreciated.

I think a key strategic difference here is that I'm willing to morally stigmatize the entire AI industry in order to reduce extinction risk, along the lines of this essay I published on EA Forum a year ago. 

Moral stigmatization is a powerful but blunt instrument. It doesn't do nuance well. It isn't 'epistemically responsible' in the way that Rationalists and EAs prefer to act. It does require dividing the world into Bad Actors and Non-Bad Actors. It requires, well, stigmatization. And most peop... (read more)

I respect you and your opinions a lot, Geoffrey Miller, but I feel Scott is really in the right on this one. I fear that EA is right now giving too much the impression of being in full-blown war mode against Sam Altman, and I can see this backfiring in a spectacular way, as in him (and the industry) burning all the bridges with any EA- and Rationalist-adjacent AI safety efforts. It looks too much like Classical Greek tragedy - actions taken to avoid a certain outcome actually making it come to pass. I do understand this is a risk you might consider worth taking if you are completely convinced of the need to dynamite and stop the whole AI industry. 

Benjamin - thanks for a thoughtful and original post. Much of your reasoning makes sense from a strictly financial, ROI-maximizing perspective.

But I don't follow your logic in terms of public sentiment regarding AI safety.

You wrote 'Second, an AI crash could cause a shift in public sentiment. People who’ve been loudly sounding caution about AI systems could get branded as alarmists, or people who fell for another “bubble”, and look pretty dumb for a while.'

I don't see why an AI crash would turn people against AI safety concerns.

Indeed, a logical implicati... (read more)

Benjamin_Todd
I should maybe have been more cautious - how messaging will pan out is really unpredictable. However, the basic idea is that if you're saying "X might be a big risk!" and then X turns out to be a damp squib, it looks like you cried wolf. If there's a big AI crash, I expect there will be a lot of people rubbing their hands saying "wow those doomers were so wrong about AI being a big deal! so silly to worry about that!"

That said, I agree if your messaging is just "let's end AI!", then there are some circumstances under which you could look better after a crash, e.g. especially if it looks like your efforts contributed to it, or it failed due to reasons you predicted / the things you were protesting about (e.g. accidents happening, causing it to get shut down).

However, if the AI crash is for unrelated reasons (e.g. the scaling laws stop working, it takes longer to commercialise than people hope), then I think the Pause AI people could also look silly – why did we bother slowing down the mundane utility we could get from LLMs if there's no big risk?

My sense is that public opinion has already been swinging against the AI industry (not just OpenAI), and that this is a good and righteous way to slow down reckless AGI 'progress' (i.e. the hubris of the AI industry driving humanity off a cliff).

Linch
Maybe I already had a pretty dim view, but this incident did not update me about his character personally (whereas "sign a lifetime nondisparagement agreement within 60 days or lose all of your previously earned equity" did surprise me a bit).  I did update negatively on his competency/PR skills though. 

My take is this:

Whenever Sam Altman behaves like an unprincipled sociopath, yet again, we should update, yet again, in the direction of believing that Sam Altman might be an unprincipled sociopath, who should not be permitted to develop the world's most dangerous technology (AGI).

adekcz - thanks for writing this. I'm also horrified by OpenAI turning from well-intentioned to apparently reckless and sociopathic, in pushing forward towards AGI capabilities without any serious commitment to AI safety.

The question is whether withholding a bit of money from OpenAI will really change their behavior, or whether a 'ChatGPT boycott' based on safety concerns could be more effective if our money-withholding is accompanied by some noisier public signaling of our moral outrage. I'm not sure what this would look like, exactly, but I imagine it co... (read more)

adekcz
Agree, boycotts should be public, therefore my post :). On the other hand, I don't see myself being able to push this to the next level (very little personal fit). I would be very happy if someone did.