MaxRa

3816 karma · Joined Mar 2017

Bio

Participation
5

Hi, I'm Max :)

  • working in AI governance (strategy, expert surveys, research infrastructure, EU tech policy fellow)
  • background in cognitive science & biology (did research on metacognition and confidence judgements)
  • most worried about AI going badly for technical & coordination reasons
  • vegan for the animals
  • doing my own forecasts: https://www.metaculus.com/accounts/profile/110500/

Comments
567

Topic contributions
2

Just in case anyone interested in this hasn't read it yet: I thought Zvi's post about it was worth reading.

https://thezvi.substack.com/p/openai-the-board-expands

Thanks for your work on this, super interesting!

Based on a quick skim, this part seems most interesting to me, and I'm inclined to discount the skeptics' bottom line because their points seem relatively unconvincing to me (either unconvincing on the object level, or because I suspect the skeptics haven't thought deeply enough about the argument to evaluate how strong it is):

We asked participants when AI will displace humans as the primary force that determines what happens in the future. The concerned group’s median date is 2045 and the skeptic group’s median date is 2450—405 years later.

[Reasons for the ~400-year discrepancy:]

● There may still be a “long tail” of highly important tasks that require humans, similar to what has happened with self-driving cars. So, even if AI can do >95% of human cognitive tasks, many important tasks will remain.

● Consistent with Moravec’s paradox, even if AI has advanced cognitive abilities it will likely take longer for it to develop advanced physical capabilities. And the latter would be important for accumulating power over resources in the physical world.

● AI may run out of relevant training data to be fully competitive with humans in all domains. In follow-up interviews, two skeptics mentioned that they would update their views on AI progress if AI were able to train on sensory data in ways similar to humans. They expected that gains from reading text would be limited.

● Even if powerful AI is developed, it is possible that it will not be deployed widely, because it is not cost-effective, because of societal decision-making, or for other reasons.

And, when it comes to outcomes from AI, skeptics tended to put more weight on possibilities such as

● AI remains more “tool”-like than “agent”-like, and therefore is more similar to technology like the internet in terms of its effects on the world.

● AI is agent-like but it leads to largely positive outcomes for humanity because it is adequately controlled by human systems or other AIs, or it is aligned with human values.

● AI and humans co-evolve and gradually merge in a way that does not cleanly fit the resolution criteria of our forecasting questions.

● AI leads to a major collapse of human civilization (through large-scale death events, wars, or economic disasters) but humanity recovers and then either controls or does not develop AI.

● Powerful AI is developed but is not widely deployed, because of coordinated human decisions, prohibitive costs to deployment, or some other reason.

I agree that things like confirmation bias and myside bias are huge drivers impeding "societal sanity". And I also agree that developing tools to refine probabilities slightly further won't help much here.

That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. As other commenters say, it currently mostly attracts a niche of people who strive for higher epistemic ideals, who try to contribute to better forecasts on important topics, etc. I currently feel like it's not intractable for quantitative forecasts to become more common in epistemic spaces filled with reasonable enough people (e.g. journalism, politics, academia) – kind of similar to how tracking KPIs was probably once a niche new practice and is now standard practice.

Thanks, I think that's a good question. Some (overlapping) reasons that come to mind that I give some credence to:

a) relevant markets are simply making an error in neglecting quantified forecasts

  • e.g. COVID was an example where I remember some EA-adjacent people making money because investors were significantly underrating the pandemic potential
  • I personally find this plausible when looking, e.g., at the quality of think tank reports, which seems significantly curtailed by the number of vague propositions that would be much more useful if they were more concrete and quantified

b) relevant players already train the relevant skills in their employees sufficiently well themselves (e.g. that's my fairly uninformed impression of what Jane Street is doing, and maybe also Bridgewater?)

c) quantified forecasts are so uncommon that it still feels unnatural to most people to communicate them, and it feels cumbersome to be nailed down on giving a number if you are not practiced in it

d) forecasting is a nerdy practice, and those practices need bigger wins to be adopted (e.g. maybe similar to learning programming/math/statistics, working with the internet, etc.)

e) maybe more systematically, it's often not in the interest of entrenched powers to have forecasters call BS on whatever they're doing

  • in corporate hierarchies people in power prefer the existing credentialism, and oppose new dimensions of competition
  • in other arenas there seems to be a constant risk of forecasters raining on your parade

f) maybe previous forecast-like practices ("futures studies", "scenario planning") didn't yield many benefits and left companies unexcited about similar practices (I personally have a vague sense of not being impressed by things I've seen associated with these terms)

I don't think there's actually a risk of CAISID damaging their EA networks here, fwiw, and I don’t think CAISID wanted to include their friendships in this statement.

My sense is that most humans are generally worried about disagreeing with what they perceive to be a social group’s opinion, so I spontaneously don’t think there’s much specific to EA to explain here.

I'm really excited about more thinking and grant-making going into forecasting!

Regarding the comments critical of forecasting as a good investment of resources from a world-improving perspective, here are some of my quick thoughts:

  1. Systematic meritocratic forecasting has a track record of outperforming domain experts on important questions. Examples: geopolitics (see Superforecasting), public health (see COVID), and IIRC also outcomes of research studies.

  2. In all important domains where humans try to affect things, they are implicitly forecasting all the time and acting on those forecasts. Random examples:
    • "If lab-grown meat becomes cheaper than normal meat, XY% of consumers will switch"
    • "A marginal supply of 10,000 bednets will decrease malaria infections by XY%"
    • models of climate change projections conditional on emissions

  3. In many domains humans are already explicitly forecasting and acting on those forecasts: insurance (e.g. forecasts on loan payments), finance (e.g. on interest rate changes), recidivism, weather, climate.

  4. Increased use of forecasting has the potential to increase societal sanity:
    • make people more able to appreciate and process uncertainty in important domains
    • clearer communication (e.g. less talking past one another by anchoring discussion on real-world outcomes)
    • establish feedback loops with resolvable forecasts ➔ stronger incentives for being correct & ability to select people who have better world models (see the sketch below)
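To make the "feedback loops with resolvable forecasts" point a bit more concrete, here's a minimal sketch (my own illustration, not from any of the sources above; the forecasters and numbers are made up) of how resolved binary forecasts can be scored with the Brier score, one common way to compare forecasters once outcomes are known:

```python
# Minimal sketch: scoring resolved binary forecasts with the Brier score,
# so forecasters can be compared once the questions have resolved.

def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities (0-1) and outcomes (0 or 1).
    Lower is better; a constant 50% guess always scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical resolved questions: 1 = happened, 0 = did not happen.
outcomes = [1, 0, 1, 1, 0]

# Two hypothetical forecasters' stated probabilities for those questions.
alice = [0.8, 0.2, 0.7, 0.9, 0.3]   # confident and reasonably calibrated
bob   = [0.5, 0.5, 0.5, 0.5, 0.5]   # hedges everything at 50%

print(f"Alice: {brier_score(alice, outcomes):.3f}")  # ~0.054
print(f"Bob:   {brier_score(bob, outcomes):.3f}")    # 0.250
```

Over enough resolved questions, consistently lower scores are (weak) evidence of a better world model, which is the feedback loop I had in mind above.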

That said, I also think that it's often surprisingly difficult to ask actionable questions when forecasting, and often it might be more important to just have a small team of empowered people with expert knowledge combined with closely coupled OODA loops instead. I remember finding this comment from Jan Kulveit pretty informative:

In practice I’m a bit skeptical that a forecasting mindset is that good for generating ideas about “what actions to take”. “Successful planning and strategy” is often something like “making a chain of low-probability events happen”, which seems distinct, or even at tension with typical forecasting reasoning. Also, empirically, my impression is that forecasting skills can be broadly decomposed into two parts—building good models / aggregates of other peoples models, and converting those models into numbers. For most people, the “improving at converting non-numerical information into numbers” part has initially much better marginal returns (e.g. just do calibration trainings...), but I suspect doesn’t do that much for the “model feedback”.

Source: https://ea.greaterwrong.com/posts/by8u954PjM2ctcve7/experimental-longtermism-theory-needs-data#comment-HgbppQzz3G3hLdhBu

Some other relevant responses:

Scott Alexander writes

My current impression of OpenAI’s multiple contradictory perspectives here is that they are genuinely interested in safety - but only insofar as that’s compatible with scaling up AI as fast as possible. This is far from the worst way that an AI company could be. But it’s not reassuring either.

Zvi Mowshowitz writes

Even scaling back the misunderstandings, this is what ambition looks like.

It is not what safety looks like. It is not what OpenAI’s non-profit mission looks like. It is not what it looks like to have concerns about a hardware overhang, and use that as a reason why one must build AGI soon before someone else does. The entire justification for OpenAI’s strategy is invalidated by this move.

[...]

The chip plan seems entirely inconsistent with both OpenAI’s claimed safety plans and theories, and with OpenAI’s non-profit mission. It looks like a very good way to make things riskier faster. You cannot both try to increase investment on hardware by orders of magnitude, and then say you need to push forward because of the risks of allowing there to be an overhang.

Or, well, you can, but we won’t believe you.

This is doubly true given where he plans to build the chips. The United States would be utterly insane to allow these new chip factories to get located in the UAE. At a minimum, we need to require ‘friend shoring’ here, and place any new capacity in safely friendly countries.

Also, frankly, this is not The Way in any sense and he has to know it:

Sam Altman: You can grind to help secure our collective future or you can write substacks about why we are going fail.

Thanks a lot for sharing, and for your work supporting his family and for generally helping the people who knew him in processing this loss. I only recently got to know him during the last two EA conferences I attended but he left a strong impression of being a very kind and caring and thoughtful person.

Huh, I actually kinda thought that Open Phil also had a mixed portfolio, just less prominently/extensively than GiveWell. Mostly based on hearing once or twice that they were in talks with interested UHNW people, and a vague memory of somebody at Open Phil mentioning that they were interested in expanding their donors beyond DM&CT... 
