
Remmelt

Research Coordinator of the Stop/Pause AI area at AI Safety Camp
1059 karma · Joined · Working (6–15 years)

Bio

See explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety. 

Sequences (3)

Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments (256)

Topic contributions (3)

Update: reverting my forecast back to an 80% chance, for these reasons.

I'm also feeling less "optimistic" about an AI crash given:

  1. The election result, with a bunch of tech investors and execs having pushed for influence through Trump's campaign (and a stated intention to deregulate tech).
  2. A military veteran saying that the military could be holding up the AI industry like "Atlas holding the globe", and an AI PhD saying that hyperscaled data centers, deep learning, etc. could be super useful for war.

I will revise my previous forecast back to an 80%+ chance.

Just found a podcast on OpenAI’s bad financial situation.

It’s hosted by someone in AI Safety (Jacob Haimes) and an AI post-doc (Igor Krawczuk).

https://kairos.fm/posts/muckraiker-episodes/muckraiker-episode-004/

There are a bunch of crucial considerations here. I’m afraid it would take too much time to unpack them all.

Happy though to have had this chat!

As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.

The problem here is that AI corporations are increasingly making decisions for us. 
See this chapter.

Corporations produce and market products to increase profit (including by replacing their fussy, expensive human parts with cheaper, faster machines that do good-enough work).

To do that, they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.

I am open to a bet similar to this one.

I would bet on both, on your side.
 

Potentially relatedly, I think massive increases in unemployment are very unlikely.

I see you cite statistics of previous unemployment rates as an outside view, compensating against the inside view. Did you look into the underlying rate of job automation? I'd be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might not be filled with re-employment opportunities.

AI Safety inside views are wrong for various reasons, in my opinion. I agree with many of Thorstad's views you cited (e.g. critiquing how fast take-off, the orthogonality thesis, and instrumental convergence rely on overly simplistic toy models, missing the hard parts about machinery coherently navigating an environment that's more complex than the machinery itself).

There are arguments that you are still unaware of, which mostly come from outside the community. They're less flashy and involve longer timelines. One, for example, considers why the standardisation of hardware and code allows for extractive corporate-automation feedback loops.

To learn why superintelligent AI disempowering humanity would be the lead-up to the extinction of all currently living species, I suggest digging into substrate-needs convergence.

I gave a short summary in this post:

  1. AGI is artificial. The reason AGI would outperform humans at economically valuable work in the first place is how virtualisable its code is, which in turn derives from how standardisable its hardware is. Hardware parts can be standardised because their substrate stays relatively stable and compartmentalised. Hardware is made out of hard materials, like the silicon from rocks. Their molecular configurations are chemically inert and physically robust at the temperatures and pressures humans live under. This allows hardware to keep operating the same way, and for interchangeable parts to be produced in different places. Meanwhile, human "wetware" operates much more messily. Inside each of us is a soup of bouncing and continuously reacting organic molecules. Our substrate is fundamentally different.
  2. The population of artificial components that constitutes AGI implicitly has different needs from us (for maintaining components, producing components, and/or potentiating newly connected functionality for both). Extreme temperature ranges, diverse chemicals, and many other unknown, subtler, or more complex conditions are needed that happen to be lethal to humans. These conditions conflict with what we, as more physically fragile humans, need to survive.
  3. These connected/nested components are in effect “variants” – varying code gets learned from inputs and copied over subtly varying hardware, produced through noisy assembly processes (and redesigned using learned code).
  4. Variants get evolutionarily selected for how they function across the various contexts they encounter over time. They are selected to express the environmental effects needed for their own survival and production. The variants that replicate more exist more. Their existence is selected for.
  5. The artificial population therefore converges on fulfilling its own expanding needs. Since (by 4.) control mechanisms cannot contain this convergence on wide-ranging degrees and directivity in effects that are lethal to us, human extinction results.

Donation opportunities for restricting AI companies:

In my pipeline:  

  • funding a 'horror documentary' against AI by an award-winning documentary maker (got a speculation grant of $50k)
  • funding lawyers in the EU for some high-profile lawsuits and targeted consultations with the EU AI Office.
     

If you're a donor, I can give you details on their current activities. I worked with staff in each of these organisations. DM me.

Hey, my apologies for taking even longer to reply (had family responsibilities this month). 

I will read that article on why Chernobyl-style events are not possible with modern reactors. I respect the amount of background research you must have done in this area, and would like to learn more.

Although I think the probability of human extinction over the next 10 years is lower than 10^-6.

You and I actually agree on this with respect to AI developments. I don’t think the narratives I read of a large model recursively self-improving internally make sense.

I wrote a book for educated laypeople explaining how AI corporations would cause increasing harms, leading eventually to machine destruction of our society and ecosystem.

Curious for your own thoughts here. 

Basically I'm upvoting what you're doing here, which I think is more important than the text itself.

Thanks for recognising the importance of doing the work itself. We are still scrappy, so we'll find ways to improve over time.
 

especially that you should have run this past a bunch of media savvy people before releasing

If you know anyone with media experience who might be interested in reviewing future drafts, please let me know.

I agree we need to improve on our messaging.

 
