This is an article in the featured articles series from AISafety.info, which writes introductory content on AI safety. We'd appreciate any feedback.

The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.

Corporations can be considered superintelligent only in a limited sense. Nick Bostrom, in Superintelligence, distinguishes between "speed superintelligence", "collective superintelligence", and "quality superintelligence".

Out of these, corporations come closest to collective superintelligence. Bostrom reserves the term “collective superintelligence” for hypothetical systems much more powerful than current human groups, but corporations are still strong examples of collective intelligence. They can perform cognitive tasks far beyond the abilities of any one human, as long as those tasks can be decomposed into many parallel, human-sized pieces. For example, they can design every part of a smartphone, or sell coffee in thousands of places simultaneously.

However, corporations are still very limited. They don't have speed superintelligence: no matter how many humans work together, they'll never program an operating system in one minute, or play great chess in one second per move. Nor do they have quality superintelligence: ten thousand average physicists collaborating to invent general relativity for the first time would probably fail where Einstein succeeded. Einstein was thinking on a qualitatively higher level.

One day, AI systems could be created that think exceptional thoughts at high speed and in great numbers, presenting major challenges we've never had to face when dealing with corporations.

Comments

Good article summarizing the point, but I don't see the reason for posting these older discussions on the forum.

Thanks for the feedback! These articles are intended to serve as handy links to share with people confused about some point of AI safety, which ties into our mission of spreading correct models of AI safety. Plausibly, people on the EA Forum encounter others like this, or fall into that category themselves; it's a tricky topic, after all, and lots of people on the forum are new. Your comment suggests we failed to position ourselves correctly, and also that these articles might not be a great fit for the EA Forum. That's useful, because we're still figuring out what content would be a good fit here and how to frame it.

Does that answer your question? 
