My reaction here was: 'Good, someone shows they care enough about this issue that they're willing to give a costly signal to others that this needs to be taken seriously' (i.e. your point a).
I do personally think many people in EA and rationalist circles (particularly those concerned about AI risk) can act more proactively to try and prevent harmful AI developments (in non-violent ways).
It's fair though to raise the concern that Guido's hunger strike could set an example for others to take actions that are harmful to themselves. If you have any exampl...
I just expanded the text:
...On one hand, it was a major contribution for a leading AI company to speak out against the moratorium as stipulated. On the other hand, Dario started advocating himself for minimal regulation. He recommended mandating a transparency standard along the lines of RSPs, adding that state laws "should also be narrowly focused on transparency and not overly prescriptive or burdensome".[11] Given that Anthropic had originally described SB 1047's requirements as 'prescriptive' and 'burdensome', Dario was effectively arguing for the fe
FAAI evolution can happen horizontally and rapidly, unlike biological evolution
Note that horizontal transfer can happen under biological evolution too, e.g. horizontal gene transfer in bacteria.
For the rest, this summary is roughly accurate!
I adjusted my guesstimate of winning down to a quarter.
I now guess it's more like 1/8 chance (meaning that from my perspective Marcus will win this bet on expectation). It is pretty hard to imagine so many paying customers going away, particularly as revenues have been growing in the last year.
Marcus has thought this one through carefully, and I'm naturally sticking to the commitment. If we end up seeing a crash down the line, I invite all of you to consider with me how to make maximum use of that opportunity!
I still think a crash is fairly likely, but als...
like AI & ML VC deal activity being <30% and Anthropic valuation <$30B
My preference was for the former metric (based on AI PitchBook-NVCA Venture Monitor), and another metric based on some threshold for the absolute amount Anthropic or OpenAI got in investments in a next round (which Marcus reasonably pointed out could be triggered if the company just decided to do some extra top-up round).
I was okay with using Marcus’ Anthropic valuation metric with the threshold set higher, and combined with another possible metric. My worry was that An...
Good question.
Marcus and I did a lot of back and forth on potential criteria. I started by suggesting metrics that capture a decline in investments into AI companies. Marcus, though, was reasonably trying to avoid metrics that could be driven by interest rates, tariffs, or broad market moves.
So the criteria we have here are a result of compromise.
The revenue criteria are rather indirect for capturing my view on things. I think if OpenAI and Anthropic each continue to make $5+ billion yearly losses (along with losses by other model developers) that would result in investo...
Apr: Californian civil society nonprofits
This petition has the most rigorous legal arguments in my opinion.
Others I know also back a block (#JusticeForSuchir, Ed Zitron, Stop AI, creatives for copyright). What’s cool is how diverse the backers are, from skeptics to doomers, and from tech whistleblowers to creatives.
Frankly, because I'd want to profit from it.
The odds of 1:7 imply a 12.5% chance of a crash, and I think the chance is much higher (elsewhere I posted a guess of 40% for this year, though I did not have precise crash criteria in mind there, and would lower the percentage once it's judged by a few measures, rather than my sense of "that looks like a crash").
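To make the arithmetic explicit, here is a minimal sketch in Python. The 1:7 odds and my ~40% guess are from above; the $100/$700 stake sizes are hypothetical, just to make the numbers concrete.

```python
# Minimal sketch of the bet arithmetic. The 1:7 odds are from the comment above;
# the $100 / $700 stake sizes are hypothetical placeholders.

def breakeven_probability(my_stake: float, their_stake: float) -> float:
    """Crash probability at which betting on a crash at these stakes breaks even."""
    return my_stake / (my_stake + their_stake)

my_stake = 100      # what I lose if no crash happens (hypothetical amount)
their_stake = 700   # what I win if a crash happens (1:7 odds)

p_breakeven = breakeven_probability(my_stake, their_stake)
print(p_breakeven)  # 0.125, i.e. the implied 12.5% chance

# At my own rough guess of ~40% for a crash, taking the bet has positive expected value:
p_crash = 0.40
expected_value = p_crash * their_stake - (1 - p_crash) * my_stake
print(expected_value)  # 220.0, i.e. +$220 in expectation on a $100 stake
```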
That percentage of 12.5% is far outside of the consensus on this Metaculus page. Though I notice that their criteria for a "bust or winter" are much stricter than where I'd set the threshold for a ...
Just because they didn't invest money in the Stargate expansion doesn't mean they aren't reserving the option to do so later if necessary... If you believe that the US government will prop up AI companies to virtually any level they might realistically need by 2029, then I don't see a crash happening.
Thanks for reading and your thoughts.
I disagree, but I want to be open to changing my mind if we see e.g. the US military ramping up contracts, or the US government propping up AI companies with funding at the level of say the $280 billion CHIPS Act.
This is clarifying context, thanks. It's a common strategy for tech start-ups to run in the red for years while they build a moat around themselves (particularly through network effects). Amazon built its moat by drawing vendors and buyers into its platform while reducing logistics costs, and Uber by drawing taxi drivers and riders onto its platform. Tesla started out with a technological edge.
Currently, I don't see a strong case that OpenAI and Anthropic are building up a moat.
–> Do you have any moats in mind that I missed? Curious.
Network effect...
Update: back up to 70% chance.
Just spent two hours compiling different contributing factors. Now that I've weighed those factors up more comprehensively, I don't expect to change my prediction by more than ten percentage points over the coming months. Though I'll write here if I do.
My prediction: 70% chance that by August 2029 there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will continue for at least three months.
For:
Update: back up to 60% chance.
I overreacted before, IMO, when updating down to 40% (and undercompensated when updating down to 80%, which I soon after thought should have been 70%).
The leader in terms of large-model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with it. If OpenAI fails in the effort to combine its existing models into something new and special (likely), that’s a blow to the perception of the industry.
A recession might also be coming this year, or at least in the next four years, which I made a prediction about before.
Update: 40% chance.
I very much underestimated/missed the speed of tech leaders influencing the US government through the Trump election/presidency. Got caught flat-footed by this.
I still think it’s not unlikely for there to be an AI crash as described above within the next 4 years and 8 months but it could be from levels of investment much higher than where we are now. A “large reduction in investment” at that level looks a lot different than a large reduction in investment from the level that markets were at 4 months ago.
We ended up having a private exchange about it.
Basically, organisers spend more than half of their time on general communications and logistics to help participants get to work.
And earmarking stipends to particular areas of work seems rather burdensome administratively, though I wouldn’t be entirely against it if it means we can cover more people’s stipends.
Overall, I think we tended not to allow differentiated fundraising before because it can promote internal conflicts, rather than encouraging people to come together to make the camp great.
Here's how I specify terms in the claim:
I'm also feeling less "optimistic" about an AI crash given:
I will revise my previous forecast back to 80%+ chance.
Just found a podcast on OpenAI’s bad financial situation.
It’s hosted by someone in AI Safety (Jacob Haimes) and an AI post-doc (Igor Krawzcuk).
https://kairos.fm/posts/muckraiker-episodes/muckraiker-episode-004/
As a 1st approximation, I assume humans will be selecting AIs which benefit them, not AIs which maximally increase economic growth.
The problem here is that AI corporations are increasingly making decisions for us.
See this chapter.
Corporations produce and market products to increase profit (including by replacing their fussy, expensive human parts with cheaper, faster machines that do good-enough work).
To do that they have to promise buyers some benefits, but they can also manage to sell products by hiding the negative externalities. See the cases of Big Tobacco, Big Oil, etc.
I am open to a bet similar to this one.
I would bet on both, on your side.
Potentially relatedly, I think massive increases in unemployment are very unlikely.
I see you cite statistics of previous unemployment rates as an outside view, to weigh against the inside view. Did you look into the underlying rate of job automation? I'd be curious about that. If that underlying rate has been trending up over time, then there is a concern that at some point the gap might no longer be filled by re-employment opportunities.
AI Safety inside views are wrong for vari...
Donation opportunities for restricting AI companies:
Hey, my apologies for taking even longer to reply (had family responsibilities this month).
I will read that article on why Chernobyl-style events are not possible with modern reactors. Respecting you for the amount of background research you must have done in this area, and would like to learn more.
Although I think the probability of human extinction over the next 10 years is lower than 10^-6.
You and I actually agree on this with respect to AI developments. I don’t think the narratives I read of a large model recursively self-improving itself internally make sense.
I wrote a book for educated laypeople explaining how AI corporations would cause increasing harms leading eventually to machine destruction of our society and ecosystem.
Curious for your own thoughts here.
Basically I'm upvoting what you're doing here, which I think is more important than the text itself.
Thanks for recognising the importance of doing the work itself. We are still scrappy so we'll find ways to improve over time.
especially that you should have run this past a bunch of media savvy people before releasing
If you know anyone with media experience who might be interested to review future drafts, please let me know.
I agree we need to improve on our messaging.
This is great! Appreciating your nitty-gritty considerations.
(1) There's a good chance the outcome will be some form of "catch and release" -- it's usually easier to deal with isolated protestors who do not cause violence or significantly damage property in this manner rather than by pursuing criminal charges to trial.
“Catch and release” is what’s happening right now. However, as we keep repeating the barricades, and as hopefully more and more protestors join us, it would greatly surprise me if the police and court system just allow us to keep b...
Thanks for the kind words!
I personally think it would be helpful to put more emphasis on how OpenAI’s reckless scaling and releases of models are already concretely harming ordinary folks (even though no major single accident has shown up yet).
Eg.
To clarify for future reference, I do think it’s likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will continue for at least three months.
Update: I now think this is 90%+ likely to happen (from original prediction date).
But it’s weird that I cannot find even a good written summary of Bret’s argument online (I do see lots of political podcasts).
I found an earlier scenario written by Bret that covers just one nuclear power plant failing and that does not discuss the risk of a weakening magnetic field.
The quotes from the OECD Nuclear Energy Agency’s report were interesting.
“moving nuclear fuel stored in pools into dry casket storage”

The extent to which we can do this is limited because spent fuel must be stored for one to ten years in spent fuel pools while the shorter-lived isotopes decay before it's ready to be moved to dry cask storage.
I did not know this. I added an edit to the post: “nuclear waste already stored in pools for 5 years”.
...I don't think an environmental radioisotope release can realistically give people across the world acute radiat
Regarding 1., I would value someone who has researched this giving more insight into:
A. How long diesel generators could be expected to keep being supplied with diesel during a continental electricity outage lasting a year (or longer). This is hard to judge. My intuition is that society would be in chaos and that maintaining diesel supplies would be extremely tough to manage.
B. What is minimally required in the long process of shutting down a nuclear power plant? Including but not limited to diesel or other backup generator supplies.
Regarding 2., I do not see h...
Igor Krawzcuk, an AI PhD researcher, just shared more specific predictions:
“I agree with Ed that the next months are critical, and that the biggest players need to deliver. I think it will need to be plausible progress towards reasoning, as in planning, as in the type of stuff Prolog, SAT/SMT solvers etc. do.
I'm 80% certain that this literally can't be done efficiently with current LLM/RL techniques (last I looked at neural comb-opt vs solvers, it was _bad_), the only hope being the kitchen sink of scale, foundation models, solvers _and_ RL
…
If OpenAI/Anthr...
To clarify for future reference, I do think it’s likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will continue for at least three months.
Ie. I think we are heading for an AI winter.
It is not sustainable for the industry to invest 600+ billion dollars per year in infrastructure and teams in return for relatively little revenue and no resulting profit for major AI labs.
At the same time, I think that within the next ...
The report is focussed on preventing harms of technology to people using or affected by that tech.
It uses FDA’s mandate of premarket approval and other processes as examples of what could be used for AI.
Restrictions on economic productivity and innovation are a fair point of discussion. I have my own views on this – generally I think the negative asymmetry around new scalable products being able to do massive harm gets neglected by the market. I’m glad the FDA exists to counteract that.
The FDA’s slow response to ramping up COVID vaccines during the pandemic...
They mentioned that line at the top of the 80k Job board.
They still do, I see.
“Handpicked to help you tackle the world's most pressing problems with your career.”
Yeah, it's a case of people being manipulated into harmful actions. I'm saying 'besides' because it feels like a different category of social situation than seeing someone take some public action online and deciding for yourself to take action too.