Marcus Abramovitch 🔸

3873 karma · Joined


This just came to mind: the reason that it's the wrong way to go about solving problems is that you want to solve the largest problems (well, per resource) and not just solve any random problem. Like, there is a problem that my shoes are currently untied, and I don't want to bend down or spend 10 seconds to tie them, but it's not very important.

So if you want to solve the most important problems, you should start with the problem and then work backwards to the solutions you might wish existed. I think the mere fact that people often talk about forecasting as the solution they are seeking to apply, whether that be Sentinel or whoever, is evidence that things are going wrong.


One thing I'd flag is that models are extremely good at telling who is prompting them, and this leads to them being sycophantic in very subtle ways. I'm not quite sure how they do it, but I've seen this in multiple instances.

For starters, you'd want to lobby for external safety testing on internal models. You'd want to make sure external safety researchers had access to the models. You'd want certain reporting, etc.

I think he would include a lot of people who work at Anthropic, for example, on pre-training, some of whom went through MATS or something.

Yeah, this is fair. I am much more sympathetic to non-PM forecasting than to PM/judgemental forecasting. The ideas in this post were really developed in 2023/2024, when I saw EAs spending a ton of time on Manifold/Metaculus, investing at high valuations, generally revering prediction markets for decision-making, etc., whereas what I was seeing was completely different.

This post really belabours the first and second bullet points, perhaps because that is where a lot of the money has gone, but there can be a lot of value in the third.

I really believe in following the money. If we spend $100M on forecasting and $90M of it goes to prediction market-style forecasting, I think it's fair to basically lump it all together. It'd be one thing if PMs were a small experiment within broader forecasting, but it's been the main thing.

Hi Eva,

I think the Social Science Prediction Platform (alongside a friend of mine who is doing something similar for clinical trials) is among the more interesting uses of forecasting/PMs, but I'm skeptical it will be taken up to the degree, or have the impact, you might hope for.

Do forecasts inform 1% of their funding, or what?

I'm skeptical of claims of the form "small percentage chance × big number". I think humans are really bad at estimating small percentages.

Would be happy to talk privately about any situations you are thinking of.

As promised, my reply (a couple days late).

  1. I think it's a bit of a cop-out to suggest that the money was spent poorly and therefore it isn't fair to judge forecasting on the merits. Not sure if that's what you're saying, though.
  2. I'm not sure I agree. Blockchains were funded. Lots of academic research is funded by governments, etc.
  3. By this logic, why aren't we paying people in, say, the top 100 on Polymarket on non-sports questions and saying, "hey guys, we want to pay you to make some forecasts for us"?
  4. I'm not sure how I feel about these. I will say, I am not nearly as big a fan of AI 2027 as others seem to be, and I think it is going to severely discredit AI risks, because we have been crying wolf when, frankly, most of the scary stuff they say doesn't happen. I am very happy to make some bets with Daniel or Eli on some of these (and give them extra time).
  5. Useless is a stretch and I didn't intend to claim it originally. Sorry if I implied it. I merely think they are overrated, hence the title.

I agree that there is a lot of stuff being conflated in "forecasting". I suppose I want to single out prediction markets and judgemental forecasting.

I think far more than $10M/year is going into forecasting. Many grants for forecasting are awarded outside the forecasting fund, such as the Navigating Transformative AI Fund. It depends on what you count, but I think it is closer to $25M/year.

I really question whether people are getting much, if anything, from all these forecasts that they didn't already have.
