See explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety.
Update: 40% chance.
I badly underestimated, or just missed, how quickly tech leaders would gain influence over the US government through the Trump election and presidency. I got caught flat-footed by this.
I still think an AI crash as described above is not unlikely within the next 4 years and 8 months, but it could start from investment levels much higher than where we are now. A “large reduction in investment” from that level looks a lot different than a large reduction from the level markets were at 4 months ago.
We ended up having a private exchange about it.
Basically, organisers spend more than half of their time on general communications and logistics to help participants get to work.
And earmarking stipends for particular areas of work seems administratively burdensome, though I wouldn’t be entirely against it if it means we can cover more people’s stipends.
Overall, I think we tended not to allow differentiated fundraising before because it can promote internal conflict rather than bringing people together to make the camp great.
Here's how I define the terms in the claim:
I'm also feeling less "optimistic" about an AI crash given:
I will revise my previous forecast back to 80%+ chance.
Just found a podcast on OpenAI’s bad financial situation.
It’s hosted by someone in AI Safety (Jacob Haimes) and an AI post-doc (Igor Krawczuk).
https://kairos.fm/posts/muckraiker-episodes/muckraiker-episode-004/
As a first approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.
The problem here is that AI corporations are increasingly making decisions for us.
See this chapter.
Corporations produce and market products to increase profit (including by replacing their fussy, expensive human parts with cheaper, faster machines that do good-enough work).
To do that, they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.
Update: back up to 60% chance.
IMO I overreacted before in updating down to 40% (and undercompensated when updating down to 80%, which I soon after thought should have been 70%).
The leader in terms of large-model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with it. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that’s a blow to the perception of the industry.
A recession might also be coming this year, or at least in the next four years, which I made a prediction about before: https://bsky.app/profile/artificialbodies.net/post/3lbvf2ejcec2f