
Remmelt

Research Coordinator @ Stop/Pause AI area at AI Safety Camp
1131 karma · Working (6-15 years)

Bio

See explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety. 

Sequences (3)

Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments (261)

Topic contributions (5)

Update: back up to 60% chance. 

IMO, I overreacted before when updating down to 40% (and undercompensated when updating down to 80%, which I soon after thought should have been 70%).

The leader in terms of large-model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with it. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that's a blow to perceptions of the industry.

A recession might also be coming this year, or at least within the next four years, something I made a prediction about before: https://bsky.app/profile/artificialbodies.net/post/3lbvf2ejcec2f

Update: back up to 50% chance. 

Noting Microsoft’s cancelling of data center deals, and the fact that the ‘AGI’ labs are still losing cash and, with DeepSeek, are increasingly competing on a commodity product.

Update: 40% chance. 

I very much underestimated/missed how quickly tech leaders would gain influence over the US government through the Trump election/presidency. I got caught flat-footed by this.

I still think it’s not unlikely that there will be an AI crash as described above within the next 4 years and 8 months, but it could be from levels of investment much higher than where we are now. A “large reduction in investment” from that level looks a lot different than a large reduction from the level markets were at 4 months ago.

We ended up having a private exchange about it. 

Basically, organisers spend more than half of their time on general communications and logistics to help participants get to work.

And earmarking stipends to particular areas of work seems rather burdensome administratively, though I wouldn’t be entirely against it if it means we can cover more people’s stipends.

Overall, I think we tended not to allow differentiated fundraising before because it can promote internal conflict rather than people coming together to make the camp great.

Answer by Remmelt

Here's how I specify terms in the claim:

  • AGI is a set of artificial components, connected physically and/or by information signals over time, that in aggregate sense and act autonomously over many domains.
    • 'artificial' meaning configured out of a (hard) substrate that can be standardised to process inputs into outputs consistently (unlike what our organic parts can do).
    • 'autonomously' meaning continuing to operate without needing humans (or any other species that shares a common ancestor with humans).
  • Alignment is, at a minimum, the control of the AGI's components (as they are modified over time) such that they do not (with probability above some guaranteeable high floor) propagate effects that cause the extinction of humans.
  • Control is the implementation of (a) feedback loop(s) through which the AGI's effects are detected, modelled, simulated, compared to a reference, and corrected (see the sketch below).
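To make the feedback-loop reading of 'control' concrete, here is a minimal toy sketch. The "AGI's effects" are reduced to a single drifting number, and the detect/model/simulate/compare/correct steps are stand-ins I made up purely for illustration; nothing here comes from a real AGI-control implementation.

```python
# Toy illustration of the control loop defined above: detect -> model/simulate
# -> compare to a reference -> correct. The "system" is just a drifting scalar.

def control_loop(state: float, reference: float, steps: int = 20) -> float:
    for _ in range(steps):
        observed = state + 0.1        # detect: an imperfect measurement of the effect
        predicted = observed * 1.05   # model + simulate: project where the effect is heading
        error = predicted - reference # compare: deviation from the reference
        state -= 0.5 * error          # correct: adjust the system back toward the reference
        state += 0.05                 # the system keeps drifting on its own
    return state

print(control_loop(state=3.0, reference=1.0))  # settles near the reference value
```

The point of the toy is only the loop structure: if any one of the steps (detection, modelling, simulation, comparison, correction) is missing or too coarse, the state drifts away from the reference instead of being held near it.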

Update: reverting my forecast back to an 80% chance, for the reasons below.

I'm also feeling less "optimistic" about an AI crash given:

  1. The election result involving a bunch of tech investors and execs pushing for influence through Trump's campaign (with a stated intention to deregulate tech).
  2. A military veteran saying that the military could be holding up the AI industry like "Atlas holding the globe", and an AI PhD saying that hyperscaled data centers, deep learning, etc., could be super useful for war.

I will revise my previous forecast back to 80%+ chance.

Just found a podcast on OpenAI’s bad financial situation.

It’s hosted by someone in AI Safety (Jacob Haimes) and an AI post-doc (Igor Krawczuk).

https://kairos.fm/posts/muckraiker-episodes/muckraiker-episode-004/

There are a bunch of crucial considerations here. I’m afraid it would take too much time to unpack them all.

Happy though to have had this chat!

As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.

The problem here is that AI corporations are increasingly making decisions for us. 
See this chapter.

Corporations produce and market products to increase profit (including by replacing their fussy, expensive human parts with cheaper, faster machines that do good-enough work).

To do that they have to promise buyers some benefits, but they can also sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.
