
dsj

128 karma · Joined Jun 2022

Comments (14)

Oh lol, thanks for explaining! Sorry for misunderstanding you. (It's a pretty amusing misunderstanding though, I think you'd agree.)

Fair enough, I edited it again. I still think the larger points stand unchanged.

Sure, I understand that it’s a supposed default instrumental goal and not a terminal goal. Sorry that my wording didn’t make that distinction clear. I’ve now edited it to do so, but I think my overall points stand.

You seem to be lumping people like Richard Ngo, who is fairly epistemically humble, in with people who are absolutely sure that the default path leads to us all dying. It is only the latter that I'm criticizing.

I agree that AI poses an existential risk, in the sense that it is hard to rule out that the default path poses a serious chance of the end of civilization. That's why I work on this problem full-time.

I do not agree that it is absolutely clear that the default instrumental goals of an AGI entail it killing literally everyone, as the OP asserts.

(I provide some links to views dissenting from this extreme confidence here.)

dsj · 3mo · 11 karma

To be clear, mostly I'm not asking for "more work"; I'm asking people to use much better epistemic hygiene. I did use the phrase "work much harder on its epistemic standards", but by this I mean: please don't make sweeping, confident claims as if they are settled fact when there's informed disagreement on those subjects.

Nevertheless, some examples of the sort of informed disagreement I'm referring to:

  • The mere existence of many serious alignment researchers who are optimistic about scalable oversight methods such as debate.
  • This post by Matthew Barnett arguing we've been able to specify values much more successfully than MIRI anticipated.
  • Shard theory, developed mostly by Alex Turner and Quintin Pope, calling into question the utility-argmaxer framework, which has been used to justify many historical concerns about instrumental convergence leading to AI takeover.
  • This comment by me arguing ChatGPT is pretty aligned compared to MIRI's historical predictions, because it does what we mean and not what we say.
  • A detailed set of objections from Quintin Pope to Eliezer's views, which Eliezer responded to by saying it's "kinda long" and engaging with it only superficially before writing it off.
  • This article by Stuhlmüller and Byun, along with many others, arguing that process oversight is a viable alignment strategy, one that converges with rather than opposes capabilities.

Notably, the extreme doomer contingent has largely failed even to understand, never mind engage with, some of these arguments, frequently lazily pattern-matching and misrepresenting them as more basic misconceptions. A typical example is thinking Matthew Barnett and I have been saying that GPT understanding human values is evidence against the MIRI/doomer worldview (after all, "the AI knows what you want but does not care, as we've said all along"), when in fact we're saying there's evidence we have actually pointed GPT successfully at those values.

It's fine if you have a different viewpoint. Just don't express that viewpoint as if it's self-evidently right when there's serious disagreement on the matter among informed, thoughtful people. An article like the OP, which claims that labs should shut down, should at least try to engage with the views of someone who thinks the labs should not shut down, and not just pretend such people are fools unworthy of mention.

These essays are well known and I'm aware of basically all of them. I deny that there's a consensus on the topic, that the essays you link are representative of the range of careful thought on the matter, or that the arguments in these essays are anywhere near rigorous enough to meet my criterion: justifying the degree of confidence expressed in the OP (and some of the posts you link).

dsj · 3mo · 11 karma

I’ll go further and say that I think those two claims are believed by many in the AI safety world (in which I count myself) with a degree of confidence that goes way beyond what can be justified by any argument that has been provided by anyone, anywhere. I think this is a huge epistemic failure of that part of the AI safety community.

I strongly downvoted the OP for making these broad, sweeping, controversial claims as if they were established fact and obviously correct, rather than one possible way the world could be that requires good arguments to establish, and for not attempting any serious understanding of, or engagement with, the viewpoints of people who disagree that these organizations shutting down would be the best thing for the world.

I would like the AI safety community to work much harder on its epistemic standards.

Another easy thing you can do, which I did several years ago, is download Kiwix onto your phone, which allows you to save offline versions of references such as Wikipedia, WikiHow, and way, way more. Then also buy a solar-powered or hand-crank USB charger (often built into disaster radios such as this one, which I purchased).

For extra credit, store this data on an old phone you no longer use, and keep that and the disaster radio in a Faraday bag.

dsj · 1y · 74 karma

I’m calling for a six-month pause on new font faces more powerful than Comic Sans.

It varies, but most treaties are not backed up by force (by which I assume we're referring to inter-state armed conflict). They're often backed up instead by the possibility of mutual tit-for-tat defection or economic sanctions, among other mechanisms.
