Note: This started as a quick take, but it got too long so I made it a full post. It's still kind of a rant; a stronger post would include sources and would have gotten feedback from people more knowledgeable than I. But in the spirit of Draft Amnesty Week, I'm writing this in one sitting and smashing that Submit button.

Many people continue to refer to companies like OpenAI, Anthropic, and Google DeepMind as "frontier AI labs". I think we should drop "labs" entirely when discussing these companies, calling them "AI companies"[1] instead. While these companies may have once been primarily research laboratories, they are no longer so. Continuing to call them labs makes them sound like harmless groups focused on pushing the frontier of human knowledge, when in reality they are profit-seeking corporations focused on building products and capturing value in the marketplace.

Laboratories do not directly publish software products that attract hundreds of millions of users and billions in revenue. Laboratories do not hire armies of lobbyists to control the regulation of their work. Laboratories do not compete for tens of billions in external investments or announce many-billion-dollar capital expenditures in partnership with governments both foreign and domestic.

People call these companies labs due to some combination of marketing and historical accident. To my knowledge no one ever called Facebook, Amazon, Apple, or Netflix "labs", despite each of them employing many researchers and pushing a lot of genuine innovation in many fields of technology.

To be clear, there are labs inside many AI companies, especially the big ones mentioned above. There are groups of researchers doing research at the cutting edge of various fields of knowledge, in AI capabilities, safety, governance, etc. Many individuals (perhaps some readers of this very post!) would be correct in saying they work at a lab inside a frontier AI company. It's just not the case that any of these companies as a whole is best described as a "lab". Some actual AI labs include FAR.AI, Redwood Research, METR, and all academic groups. There might be some for-profit entities that I would call labs, but I'm skeptical by default.

OpenAI, Anthropic, and DeepMind are tech companies, pure and simple. Each has different goals and approaches, and the private goals of their departments and employees vary widely, but I believe strongly that thinking of them as tech companies rather than AI laboratories provides clarity and will improve the quality of thinking and discussion within this community.

  1. ^

    When more specificity is needed, "frontier AI companies," "generative AI companies," "foundational AI companies," or similar could also be used.

Comments



I agree with this - 80,000 Hours made this change about a year ago.

It's also just jargon-y. I call them "AI companies" because people outside the AGI memeplex don't know what an "AI lab" is, and (as you note) if they infer from someone's use of that term that the frontier developers are something besides "AI companies," they'd be wrong!

I expect that "labs" usefully communicates to most of my interlocutors that I'm talking about the companies developing frontier models and not something like Palantir. There's a lot of hype-based incentive for companies to claim to be "AI companies", which creates confusion. (Indeed, I didn't know before I chose Palantir as an example, but of course they're marketing themselves as an AI company.)

That said, I agree with the consideration in your post. I don't claim to know which consideration is bigger, only that the two trade off.

I think this is a useful distinction, thanks for raising it. I support terms like "frontier AI company," "company making frontier AI," and "company making foundation models," all of which help distinguish OpenAI from Palantir. Also it seems pretty likely that within a few years, most companies will be AI companies!? So we'll need new terms. I just don't want that term to be "lab".

Another thing you might be alluding to is that "lab" is less problematic when talking to people within the AI safety community, and more problematic the further out you go. I think that, within a community, the terms of art sort of lose their generic connotations over time, as community members build a dense web of new connotations specific to that meaning. I regret to admit that I'm at the point where the word "lab" without any qualifiers at all makes me think of OpenAI!

But code switching is hard, and if we use these terms internally, we'll also use them externally. Also external people read things that were more intended for internal people, so the language leaks out.

I agree that the term "AI company" is technically more accurate. However, I also think the term "AI lab" is still useful terminology, as it distinguishes companies that train large foundation models from companies that work in other parts of the AI space, such as companies that primarily build tools, infrastructure, or applications on top of AI models.

I agree that those companies are worth distinguishing. I just think calling them "labs" is a confusing way to do so. If the purpose were only to distinguish them from other AI companies, you could call them "AI bananas" and it would be just as useful. But "AI bananas" is unhelpful and confusing. I think "AI labs" is the same (to a lesser but still important degree).

Unfortunately there's momentum behind the term "AI lab" in a way that is not true for "AI bananas". Also, it is unambiguously true that a major part of what these companies do is scientific experimentation, as one would expect in a laboratory—this makes the analogy to "AI bananas" imperfect.

I think "labs" has the connotation of mad scientists and somebody creating something that escapes the lab, so has some "good" connotations for AI safety comms.

Of course, depending on the context and audience. 

Interesting point! I'd be OK with people calling them "evil mad scientist labs," but I still think the generic "lab" has more of a positive, harmless connotation than this negative one.

I'd also be more sympathetic to calling them "labs" if (1) we had actual regulations around them or (2) they were government projects. Biosafety and nuclear weapons labs have a healthy reputation for being dangerous and unfriendly, in a way "computer labs" do not. Also, private companies may have biosafety containment labs on premises, and the people working within them are lab workers/scientists, but we call the companies pharmaceutical companies (or "Big Pharma"), not "frontier medicine labs".

Also also if any startup tried to make a nuclear weapons lab they would be shut down immediately and all the founders would be arrested. [citation needed]

Seems testable! 

Fwiw, I would have predicted that labs would lead to more positive evaluations overall, including higher evaluations of responsibility and safety. But I don't think people's intuitions are very reliable about such cases.

People call these companies labs due to some combination of marketing and historical accident. To my knowledge no one ever called Facebook, Amazon, Apple, or Netflix "labs", despite each of them employing many researchers and pushing a lot of genuine innovation in many fields of technology.

I agree overall, but fwiw I think that for the first few years of OpenAI and DeepMind's existence, they were mostly pursuing blue-sky research with few obvious nearby commercial applications (e.g. training NNs to play video games). I think "lab" was a pretty reasonable term, or at least similarly reasonable to calling, say, Bell Labs a lab.

I completely agree that OpenAI and DeepMind started out as labs and are no longer so.

My point was that I don't think it was marketing or a historical accident, and it's actually quite different from the other companies that you named, which were all just straightforward revenue-generating companies from ~day 1.

Ah! Yes, that's a good point and I misinterpreted. That's part of what I meant by "historical accident", but now I think it was confusing to say "accident" and I should have said something like "historical activities".

I think people like the "labs" language because it makes it easier to work with them, as well as for all the reasons you state, which is why I generally say "AI companies". I do find it hard, however, to make myself understood sometimes in an EA context when I don't use it.

I imagine that one reason they are referred to as "labs" is because, to some extent, they are seen as creating a new kind of organism. They aren't just creating a product; they are poking and prodding something most do not fully understand.

This is a good point, though we will probably need to distinguish between several varieties of "AI companies". 
"Lab" (currently) means research is happening there (which is correct for the companies you mentioned).
"AI company" right now mostly says someone is doing something that involves AI. If you're building a ChatGPT wrapper you're an "AI company". 

So while I do agree with your point that these companies are no longer just labs (as you mentioned), we need to denote that they are companies where major research is happening, in comparison to most companies that are just building products with AI. 
Yes, they're all tech companies. But OpenAI, Anthropic and DeepMind are obviously the core of a cluster of points in objectspace, and it seems reasonable to look for some name for that cluster (with a different discussion being what exactly the cluster that denotes "labs" includes, and whether these points are a part of it).

I agree that they're worth calling out somehow; I just think "lab" is a misleading way of doing so given their current activities. I've made some admittedly clunky suggestions in other threads here.

Very, very fair point, Sawyer! There's a lot left to be desired in existing AI risk communications, especially to the public and policymakers, so any refinements are very welcome in my book. Great post!

Good point. Word association is misleading in this case.

They are big AGI companies.

And they are worse than big oil companies and big tobacco companies.
