
The purpose of this post (also available on LessWrong) is to share an alternative notion of “singularity” that I’ve found useful in timelining/forecasting.

  • A fully general tech company is a technology company with the ability to become a world leader in essentially any industry sector, given the choice to do so — in the form of agreement among its Board and CEO — with around one year of effort following the choice.

Notice that I’m focusing on a company’s ability to do anything another company can do, rather than an AI system's ability to do anything a human can do.  I’m also focusing on what the company could do if it so chose (i.e., if its Board and CEO so chose), rather than on what it actually ends up choosing to do.  If a company has these capabilities and chooses not to use them — for example, to avoid heavy regulatory scrutiny or risks to public health and safety — it still qualifies as a fully general tech company.

This notion can be contrasted with the following:

  • Artificial general intelligence (AGI) refers to cognitive capabilities fully generalizing those of humans.
  • An autonomous AGI (AAGI) is an autonomous artificial agent with the ability to do essentially anything a human can do, given the choice to do so — in the form of an autonomously/internally determined directive — and an amount of time less than or equal to that needed by a human.

Now, consider the following two types of phase changes in tech progress:

  1. A tech company singularity is a transition of a technology company into a fully general tech company.  This could be enabled by safe AGI (almost certainly not AAGI, which is unsafe), or it could be prevented by unsafe AGI destroying the company or the world.
  2. An AI singularity is a transition from having merely narrow AI technology to having AGI technology.

I think the tech company singularity concept, or some variant of it, is important for societal planning, and I’ve written predictions about it before, here:

  • 2021-07-21 — prediction that a tech company singularity will occur between 2030 and 2035;
  • 2022-04-11 — updated prediction that a tech company singularity will occur between 2027 and 2033.

A tech company singularity as a point of coordination and leverage

The reason I like this concept is that it gives an important point of coordination and leverage that is not AGI, but which interacts in important ways with AGI.  Observe that a tech company singularity could arrive

  1. before AGI, and could play a role in
    1. preventing AAGI, e.g., through supporting and enabling regulation;
    2. enabling AGI but not AAGI, such as if tech companies remain focused on providing useful/controllable products (e.g., PaLM, DALL-E);
    3. enabling AAGI, such as if tech companies allow experiments training agents to fight and outthink each other to survive.
  2. after AGI, such as if the tech company develops safe AGI, but not AAGI (which is hard to control, doesn't enable the tech company to do stuff, and might just destroy it).

Points (1.1) and (1.2) are, I think, humanity’s best chance for survival.  Moreover, I think there is some chance that the first tech company singularity could come before the first AI singularity, if tech companies remain sufficiently oriented toward building systems that are intended to be useful/usable, rather than systems intended to be flashy/scary.

How to steer tech company singularities?

The above suggests an intervention point for reducing existential risk: convincing a mix of

  • scientists
  • regulators
  • investors, and
  • the public

… to shame tech companies for building useless/flashy systems (e.g., autonomous agents trained in evolution-like environments to exhibit survival-oriented intelligence), so that they remain focused on building usable/useful systems (e.g., DALL-E, PaLM) before and during a tech company singularity.  In other words, we should try to steer tech company singularities toward developing comprehensive AI services (CAIS) rather than AAGI.

How to help steer scientists away from AAGI: 

  • point out the relative uselessness of AAGI systems, e.g., systems trained to fight for survival rather than to help human overseers;
  • appeal to the badness of nuclear weapons, which are — after detonation — the uncontrolled versions of nuclear reactors;
  • appeal to the badness of gain-of-function lab leaks, which are — after getting out — the uncontrolled versions of pathogen research.

How to convince the public that AAGI is bad: 

  • this is already somewhat easy; much of the public is already scared of AI because they can’t control it.
  • do not make fun of the public or call people dumb for fearing things they cannot control; things you can’t control can harm you, and in the case of AGI, people are right to be scared.

How to convince regulators that AAGI is bad:

  • point out that uncontrollable autonomous systems are usable mainly for terrorism;
  • point out the obvious fact that training things to be flashy (e.g., by exhibiting survival instincts) is scary and destabilizing to society;
  • point out that many scientists are already becoming convinced of this (they are).

How to convince investors that AAGI is bad:

  • point out the uselessness and badness of uncontrollable AGI systems, except for being flashy/scary;
  • point out that scientists (potential hires) are already becoming convinced of this;
  • point out that regulators should, and will, be suspicious of companies using compute to train uncontrollable autonomous systems, because of their potential to be used in terrorism.

Speaking personally, I have found it fairly easy to make these points since around 2016.  Now, with the rapid advances in AI we’ll be seeing from 2022 onward, it should be even easier.  And, as Adam Scherlis (sort of) points out [EA Forum comment], we shouldn't assume that no one new will ever care about AI x-risk, especially as AI x-risk becomes more evidently real.  So, it makes sense to keep re-making points like these from time to time as the discourse evolves.

Summary

In this post, I introduced the notion of a "tech company singularity", discussed how the idea can serve as an important coordination and leverage point for reducing x-risk, and gave some ideas for convincing others to help steer tech company singularities away from AAGI.

None of this is to say we'll be safe from AI risk; far from it (e.g., see What Multipolar Failure Looks Like).  Efforts to maintain cooperation on safety across labs and jurisdictions remain paramount, IMHO.

In any case, try on the "tech company singularity" concept and see if it does anything for you :)

Comments



> after a tech company singularity, such as if the tech company develops safe AGI

I think this should be "after AGI"?

Yes, thanks!  Fixed.

I’m a bit confused and wanted to clarify what you mean by AGI vs AAGI: are you of the belief that AGI could be safely controlled (e.g., boxed) but that setting it to “autonomously” pursue the same objectives would be unsafe?

Could you describe what an AGI system might look like in comparison to an AAGI?

Yes, surely inner alignment is needed for AGI to not (accidentally) become AAGI by default?

Thank you so much for this extremely important and brilliant post, Andrew! I really appreciate it.

I completely agree that the degree to which autonomous general-capabilities research is outpacing alignment research needs to be reduced (most likely via recruitment and social opinion dynamics), and that this seems neglected relative to its importance.

I wrote a post on a related topic recently, and it would be really great to hear what you think! (https://forum.effectivealtruism.org/posts/juhMehg89FrLX9pTj/a-grand-strategy-to-recruit-ai-capabilities-researchers-into)
