
Advanced AI systems can potentially perceive, decide, and act much faster than humans can -- perhaps many orders of magnitude faster. Given that we're used to intelligent agents all operating at about human speed, the effects of this 'speed mismatch' could be quite startling and counter-intuitive. An advanced AI might out-pace human actions and reactions in a way that's somewhat analogous to the way that a 'speedster' superhero (e.g. the Flash, Quicksilver) can out-pace normal humans, or the way that some fictional characters can 'stop time' and move around as if everyone else is frozen in place (e.g. in 'The Fermata', the 1994 novel by Nicholson Baker).

Are there any more realistic depictions of this potential AI/human speed mismatch in nonfiction articles or books, or in science fiction stories, movies, or TV series -- especially ones that explore the risks and downsides of the mismatch?


4 Answers

I personally find video game speedrunning a pretty useful intuition pump for what it might look like for an AI to do things in the real world. Seeing the skill ceiling in games has helped me calibrate on how crazy things could get with much faster-thinking and faster-acting artificial intelligence.

Habryka -- nice point. 

Example: speedrunning 'Ultimate Doom'.

This isn't quite what you're looking for because it's more a partial analogy of the phenomenon you point to rather than a realistic depiction, but FWIW I found this old short story by Eliezer Yudkowsky quite memorable.

PS: A few good examples I can think of off the top of my head (although they're not particularly realistic in relation to current AI tech):

  • The space battle scenes in the Culture science fiction novels by Iain M. Banks, in which the ship 'Minds' (super-advanced AIs) fight so fast, mostly using beam weapons, that the battles are typically over in a few seconds, long before their human crews have any idea what's happening. https://spacebattles-factions-database.fandom.com/wiki/Minds
  • The scene in Avengers: Age of Ultron in which Ultron wakes up, learns human history, defeats Jarvis, escapes into the Internet, and starts manufacturing robot copies of itself within a few seconds.
  • The scenes in the Mandalorian TV series where the IG-11 combat robot is much faster than the humanoid stormtroopers.

"The Bobiverse" series is lighthearted and generally techno-optimistic, but does portray this in a way that seems accurate to me.

Erin - thanks; looks interesting; hadn't heard of this science fiction book series before. 

https://bobiverse.fandom.com/wiki/We_Are_Legion_(We_Are_Bob)_Wiki

3 Comments

Thanks for the very useful link. I hadn't read that before. 

I like the intuition pump that if advanced AI systems are running at about 10 million times human cognitive speed, then one year of human history equals 10 million years of AI experience.

Yup! Alternatively: we’re working with silicon chips that are 10,000,000× faster than the brain, so we can get a 100× speedup even if we’re a whopping 100,000× less skillful at parallelizing brain algorithms than the brain itself.
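
To make the arithmetic in these two comments explicit, here is a minimal sketch (illustrative only; the 10-million-times and 100,000-times figures are the rough numbers quoted in the thread, not measured values):

```python
# Minimal sketch of the speedup arithmetic in the two comments above (illustrative only;
# all figures are the rough numbers quoted in the thread, not measured values).

# Comment 1: if an AI runs at ~10 million times human cognitive speed,
# one calendar year corresponds to ~10 million subjective AI-years.
cognitive_speedup = 10_000_000
human_years = 1
ai_subjective_years = human_years * cognitive_speedup
print(f"{human_years} human year ~= {ai_subjective_years:,} AI-subjective years")

# Comment 2: even if silicon is 10,000,000x faster than the brain per step,
# being 100,000x worse at parallelizing brain algorithms still leaves a 100x net speedup.
chip_speed_advantage = 10_000_000
parallelism_penalty = 100_000
net_speedup = chip_speed_advantage / parallelism_penalty
print(f"Net speedup: {net_speedup:,.0f}x")  # -> 100x
```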
