All of Donald Hobson's Comments + Replies

The Unweaving of a Beautiful Thing

The time of a single witch or wizard trying to snare death in their dying spell had passed over a decade ago. Such techniques could only be used by a skilled witch or wizard at the moment of their own death, and such events were hard to plan for ethically.

Now a team of thaumatological engineers laboured over a contrivance of electronics and crystals. This device would be slid under the bed of a terminally ill patient. Slotted into its centre were a handful of cells in a dish, taken from the dying patient themselves. Theoretical research had been done, software had been writt... (read more)

atb: Nice! In many ways, a very different style to what I wrote, but I like to think it shares something of the same spirit (perhaps representing something of a logical next step).
Measuring the "apocalyptic residual"

Another key factor is their level of competence at achieving their objective. The people trying to wake Cthulhu by chanting magic words aren't a problem. The doomsday cult doing cutting-edge biotech or AI research is a problem. How many rationality points do these doomsday cultists have?

Samuel Shadrach: Yup, definitely a very important factor.
Nines of safety: Terence Tao’s proposed unit of measurement of risk

Nines of unsafety, for the pessimists. So 2 nines of unsafety is a 99% chance of doom.
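
A minimal sketch of the conversion, assuming the obvious log-scale definition (k nines of safety means a 10^-k chance of doom; k nines of unsafety means a 10^-k chance of survival):

```python
import math

def nines_of_safety(p_doom: float) -> float:
    # 2 nines of safety  <->  p_doom = 0.01 (99% chance of survival)
    return -math.log10(p_doom)

def nines_of_unsafety(p_doom: float) -> float:
    # 2 nines of unsafety  <->  p_doom = 0.99 (99% chance of doom)
    return -math.log10(1 - p_doom)

print(nines_of_safety(0.01))    # 2.0
print(nines_of_unsafety(0.99))  # ~2.0 (up to floating point)
```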

What “defense layers” should governments, AI labs, and businesses use to prevent catastrophic AI failures?

The boring answers

Don't give your AI system excess compute, ideally enforced at the hardware level. Run it on a small, isolated machine, not on a 0.1% timeshare of a supercomputer.

Use the coding practices developed by NASA to minimize standard bugs.
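
One plausible referent here is JPL's "Power of Ten" rules for safety-critical code. A loose sketch of that spirit (fixed loop bounds, loud precondition checks), transposed to Python for illustration:

```python
MAX_ITERS = 1000  # rule: every loop gets a statically known upper bound

def find_root(f, lo, hi, tol=1e-9):
    assert lo < hi and f(lo) * f(hi) <= 0   # rule: check preconditions loudly
    for _ in range(MAX_ITERS):              # bounded loop, never `while True`
        mid = (lo + hi) / 2
        if hi - lo < tol:
            return mid
        if f(lo) * f(mid) > 0:
            lo = mid                        # root lies in [mid, hi]
        else:
            hi = mid                        # root lies in [lo, mid]
    raise RuntimeError("iteration bound exceeded")  # fail loudly, not silently

print(find_root(lambda x: x * x - 2.0, 0.0, 2.0))  # ~1.41421356
```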

Record all random seeds and input data to make everything reproducible. 
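
A minimal sketch of what recording a run might look like (file names hypothetical):

```python
import hashlib
import json
import random

seed = 42          # hypothetical run seed, fixed and logged up front
random.seed(seed)

run_log = {
    "seed": seed,
    # Hash the input data ("inputs.bin" is a hypothetical file) so the
    # exact inputs can be verified when replaying the run.
    "input_sha256": hashlib.sha256(open("inputs.bin", "rb").read()).hexdigest(),
}
with open("run_log.json", "w") as f:
    json.dump(run_log, f)  # replaying seed + inputs reproduces the run
```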

Put hard-coded sanity checks between the AI and its output. A robot arm isn't allowed to move beyond safe limits, enforced by simple min(AI_OUTPUT, MAXIMUM_ARM_ANGLE)-style code.
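
A minimal sketch of such a check (names and limits hypothetical):

```python
MIN_ARM_ANGLE, MAX_ARM_ANGLE = -90.0, 90.0   # hypothetical safe limits

def clamp(value, low, high):
    """Hard-coded sanity check sitting between the AI and the actuator."""
    return max(low, min(value, high))

ai_output = 250.0                            # whatever the AI requests...
arm_command = clamp(ai_output, MIN_ARM_ANGLE, MAX_ARM_ANGLE)  # ...stays at 90.0
```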

Keep humans in the loop, checking outputs.

Hardware minimization of unneeded action space. S... (read more)

Quotes about the long reflection

The rot13 is to make it harder to search for. I think that this is a discussion that would be easy to misinterpret as saying something offensive.

Quotes about the long reflection
but just thought that slavery was a pre-condition for some people having good things in life. Therefore, it was justified on those grounds.

Rot13

Gung vf pyrneyl n centzngvp qrpvfvba onfrq ba gur fbpvrgl ur jnf va. Svefgyl, gur fynirel nf cenpgvfrq va napvrag Terrpr jnf bsgra zhpu yrff pehry guna pbybavny fynirel. Tvira gung nyybjvat gur fynir gb znxr gurve bja jnl va gur jbeyq, rneavat zbarl ubjrire gurl fnj svg, naq gura chggvat n cevpr ba gur fynirf serrqbz jnf pbzzba cenpgvpr, gung znxrf fbzr cenpgvprf gung jrer pnyyrq fynirel bs gur gvzr ybbx abg gung ... (read more)

EdoArad: Why Rot13? This seems like an interesting discussion to be had
But exactly how complex and fragile?

Machine learning works fine on non-adversarial inputs. If you train a network to distinguish cats from dogs and put in a normal picture of a cat, it works. However, there are all sorts of weird inputs that look nothing like cats or dogs that will also get classified as cats. If you give the network a bunch of bad situations and a bunch of good ones (say you crack open a history textbook and ask a bunch of people how nice various periods and regimes were), then you will get a network that can distinguish bad from good within the normal flow of human history.... (read more)
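
A minimal sketch of how such a weird input can be constructed, using the fast gradient sign method on a hypothetical stand-in "classifier" and a random "image" (not anything specific to the post):

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(3 * 32 * 32, 2)   # stand-in cat/dog classifier
x = torch.rand(1, 3 * 32 * 32)            # a "normal picture of a cat"
label = torch.tensor([0])                 # class 0 = cat

x_adv = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_adv), label)
loss.backward()

# One small step in the direction that increases the loss: the result can
# look like the same cat to a human, yet flip the network's answer.
x_adv = (x_adv + 0.03 * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```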

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

If no more AI safety work is necessary, that means that there is nothing we can do to significantly increase the chance of FAI over UFAI.

I could be almost certain that FAI would win because I had already built one. Even then, I suspect there will be double-checking to do: the new FAI will need to be told what friendly behavior is, someone should keep an eye out for any UFAI, etc. So FAI work will be needed until the point where no human labor is needed and we are all living in a utopia.

I could be almost certain that UFAI will win. I could see lots of p... (read more)

Existential Risk and Economic Growth

I think that existential risk is still something that most governments aren't taking seriously. If major world governments had a model that contained a substantial probability of doom, there would be a lot more funding. Look at the Cold War, when anything and everything that might possibly help got funded. I see this failure to take the risk seriously as caused by a mix of human psychology and historical coincidence; I would not expect it to apply to all civilizations.