MaxRa

3782 karma · Joined Mar 2017

Bio

Participation: 5

Hi, I'm Max :)

  • working in AI governance (strategy, expert surveys, research infrastructure, EU tech policy fellow)
  • background in cognitive science & biology (did research on metacognition and confidence judgements)
  • most worried about AI going badly for technical & coordination reasons
  • vegan for the animals
  • doing my own forecasts: https://www.metaculus.com/accounts/profile/110500/

Comments: 564

Topic contributions: 2

I agree that things like confirmation bias and myside bias are huge drivers impeding "societal sanity". And I also agree that developing tools to refine probabilities slightly further won't help a lot here.

That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. Like other commenters say, it currently mostly attracts a niche of people who strive for higher epistemic ideals, who try to contribute to better forecasts on important topics, etc. I currently feel like it's not intractable for quantitative forecasts to become more common in epistemic spaces filled with reasonable enough people (e.g. journalism, politics, academia), kinda similar to how tracking KPIs was probably once a niche new practice and is now standard.

Thanks, I think that's a good question. Some (overlapping) reasons that come to mind that I give some credence to:

a) relevant markets are simply making an error in neglecting quantified forecasts

  • e.g. COVID was an example where I remember some EA-adjacent people making money because investors were significantly underrating the pandemic potential
  • I personally find this plausible when looking e.g. at the quality of think tank reports, which seems significantly curtailed by the amount of vague propositions that would be much more useful if they were more concrete and quantified

b) relevant players already train the relevant skills sufficiently well in their employees themselves (e.g. that's my fairly uninformed impression of what Jane Street is doing, and maybe also Bridgewater?)

c) quantified forecasts are so uncommon that it still feels unnatural to most people to communicate them, and it feels cumbersome to be pinned down to a number if you're not practiced in it

d) forecasting is a nerdy practice, and those practices need bigger wins to be adopted (e.g. maybe similar to learning programming/math/statistics, working with the internet, etc.)

e) maybe more systematically, I think it's often not in the interest of entrenched powers to have forecasters call bs on whatever they're doing.

  • in corporate hierarchies people in power prefer the existing credentialism, and oppose new dimensions of competition
  • in other arenas there seems to be a constant risk of forecasters raining on your parade

f) previous forecast-like practices ("futures studies", "scenario planning") maybe didn't yield many benefits and left companies unexcited about similar practices (I personally have a vague sense of not being impressed by things I've seen associated with these terms)

I don't think there's actually a risk of CAISID damaging their EA networks here, fwiw, and I don’t think CAISID wanted to include their friendships in this statement.

My sense is that most humans are generally worried about disagreeing with what they perceive to be a social group’s opinion, so I spontaneously don’t think there’s much specific to EA to explain here.

I'm really excited about more thinking and grant-making going into forecasting!

Regarding the comments critical of forecasting as a good investment of resources from a world-improving perspective, here are some of my quick thoughts:

  1. Systematic meritocratic forecasting has a track record of outperforming domain experts on important questions. Examples:
     • Geopolitics (see Superforecasting)
     • Public health (see COVID)
     • IIRC also outcomes of research studies

  2. In all important domains where humans try to affect things, they are implicitly forecasting all the time and act on those forecasts. Random examples:
     • "If lab-grown meat becomes cheaper than normal meat, XY% of consumers will switch"
     • "A marginal supply of 10,000 bednets will decrease malaria infections by XY%"
     • Models of climate change projections conditional on emissions

  3. In many domains humans are already explicitly forecasting and acting on those forecasts:
     • Insurance (e.g. forecasts on loan payments)
     • Finance (e.g. on interest rate changes)
     • Recidivism
     • Weather
     • Climate

  4. Increased use of forecasting has the potential to increase societal sanity:
     • Make people more able to appreciate and process uncertainty in important domains
     • Clearer communication (e.g. less talking past one another by anchoring discussion on real-world outcomes)
     • Establish feedback loops with resolvable forecasts ➔ stronger incentives for being correct & the ability to select people who have better world models

That said, I also think that it's often surprisingly difficult to ask actionable questions when forecasting, and often it might be more important to just have a small team of empowered people with expert knowledge combined with closely coupled OODA loops instead. I remember finding this comment from Jan Kulveit pretty informative:

In practice I’m a bit skeptical that a forecasting mindset is that good for generating ideas about “what actions to take”. “Successful planning and strategy” is often something like “making a chain of low-probability events happen”, which seems distinct, or even at tension with typical forecasting reasoning. Also, empirically, my impression is that forecasting skills can be broadly decomposed into two parts—building good models / aggregates of other peoples models, and converting those models into numbers. For most people, the “improving at converting non-numerical information into numbers” part has initially much better marginal returns (e.g. just do calibration trainings...), but I suspect doesn’t do that much for the “model feedback”.

Source: https://ea.greaterwrong.com/posts/by8u954PjM2ctcve7/experimental-longtermism-theory-needs-data#comment-HgbppQzz3G3hLdhBu
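
To make the "resolvable forecasts" and "calibration training" points above a bit more concrete, here is a minimal illustrative sketch (in Python, with made-up questions and probabilities) of how resolved forecasts can be scored with Brier scores; this is just one common scoring rule, not a claim about how any particular platform does it:

```python
# Minimal illustrative sketch: scoring resolvable forecasts with Brier scores.
# The questions, probabilities, and outcomes below are made up for demonstration.

forecasts = [
    # (question, forecast probability of "yes", resolved outcome: 1 = yes, 0 = no)
    ("Candidate X wins the election",  0.70, 1),
    ("Product launch slips past Q3",   0.40, 0),
    ("New variant causes >10k cases",  0.15, 1),
]

def brier_score(prob: float, outcome: int) -> float:
    """Squared error between stated probability and realized outcome (0 is perfect, 1 is worst)."""
    return (prob - outcome) ** 2

scores = [brier_score(p, o) for _, p, o in forecasts]
mean_brier = sum(scores) / len(scores)

print(f"Mean Brier score: {mean_brier:.3f}")
# A forecaster who always says 50% scores 0.25 on every binary question,
# so consistently beating 0.25 is one crude signal of a better world model.
```

Calibration training is then roughly the exercise of making many such probability estimates, grouping them into buckets (e.g. all the ~70% predictions), and checking whether the realized frequencies match the stated probabilities.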

Some other relevant responses:

Scott Alexander writes

My current impression of OpenAI’s multiple contradictory perspectives here is that they are genuinely interested in safety - but only insofar as that’s compatible with scaling up AI as fast as possible. This is far from the worst way that an AI company could be. But it’s not reassuring either.

Zvi Mowshowitz writes

Even scaling back the misunderstandings, this is what ambition looks like.

It is not what safety looks like. It is not what OpenAI’s non-profit mission looks like. It is not what it looks like to have concerns about a hardware overhang, and use that as a reason why one must build AGI soon before someone else does. The entire justification for OpenAI’s strategy is invalidated by this move.

[...]

The chip plan seems entirely inconsistent with both OpenAI’s claimed safety plans and theories, and with OpenAI’s non-profit mission. It looks like a very good way to make things riskier faster. You cannot both try to increase investment on hardware by orders of magnitude, and then say you need to push forward because of the risks of allowing there to be an overhang.

Or, well, you can, but we won’t believe you.

This is doubly true given where he plans to build the chips. The United States would be utterly insane to allow these new chip factories to get located in the UAE. At a minimum, we need to require ‘friend shoring’ here, and place any new capacity in safely friendly countries.

Also, frankly, this is not The Way in any sense and he has to know it:

Sam Altman: You can grind to help secure our collective future or you can write substacks about why we are going fail.

Thanks a lot for sharing, for your work supporting his family, and for generally helping the people who knew him process this loss. I only recently got to know him during the last two EA conferences I attended, but he left a strong impression of being a very kind, caring, and thoughtful person.

Huh, I actually kinda thought that Open Phil also had a mixed portfolio, just less prominently/extensively than GiveWell. Mostly based on hearing once or twice that they were in talks with interested UHNW people, and a vague memory of somebody at Open Phil mentioning they were interested in expanding their donors beyond DM&CT...

Cool!

the article is very fair, perhaps even positive!

Just read the whole thing, wondering whether it gets less positive after the excerpt here. And no, it's all very positive. Thank you guys for your work, so good to see forecasting gaining momentum.

For example, the fact that it took us more than ten years to seriously consider the option of "slowing down AI" seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.

I'd guess it's also that advocacy and regulation seemed just less marginally useful in most worlds with the suspected AI timelines of even 3 years ago?

Hmmm, your reply makes me more worried than before that you'll engage in actions that increase the overall adversarial tone in a way that seems counterproductive to me. :')

I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.

I'm not completely sure what you refer to with "legitimate jobs", but I generally have the impression that EAs working on AI risks have very mixed feelings about AI companies advancing cutting edge capabilities? Or sharing models openly? And I think reconceptualizing "the behavior of AI companies" (I would suggest trying to be more concrete in public, even here) as aggressive and hostile will itself be perceived as hostile, which you said you wouldn't do? I think that's definitely not "the most bland advocacy" anymore?

Also, the way you frame your pushback makes me worry that you'll lose patience with considerate advocacy way too quickly:

"There’s no reason to rush to hostility"

"If showing hostility works to convey the situation, then hostility could be merited."

"And I really hope it’s not necessary to advance into hostility."
