From the perspective of an approach called Human-Centric Functional Modeling (HCFM), which defines systems according to their function, the earth is an adaptive problem-solving system that functions to solve the problems posed by its environment. As we'll see, this perspective helps define, in the simplest possible way, why human intelligence has been so impactful on the planet, and it helps clarify how to fix that impact.

From this functional modeling perspective, the earth can be defined in terms of a minimal (simplest possible) set of functions, along with the higher-order interactions between those functions, up to the limit of some order N that is reliably achievable without a new system of organization to govern those interactions. Biological life consists of the interactions above that order: interactions of greater complexity than is reliably achievable without life. From the functional modeling perspective, homeostasis is the process of seeking to sustain fitness to function, and all life shares this feature. In essence, in converting part of itself into life, the earth has developed a set of processes during its lifetime that give it greater capacity to maintain stability in its fitness to function.
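
As a purely illustrative sketch (not a part of HCFM itself), this picture of a minimal function set with interactions up to some order N can be expressed in code. The function names, the choice of N, and the treatment of an "interaction" as a combination of functions are all hypothetical:

```python
from itertools import combinations

# Hypothetical minimal set of earth "functions" (illustrative names only).
BASE_FUNCTIONS = ["cycle_water", "cycle_carbon", "radiate_heat", "weather_rock"]

def interactions_up_to(functions, max_order):
    """Enumerate every combination of functions up to a given order.

    In this toy picture, an 'order-k interaction' is any coordinated
    behavior involving k of the base functions at once.
    """
    for order in range(2, max_order + 1):
        for combo in combinations(functions, order):
            yield combo

# Suppose order N = 3 is the most the system can reliably sustain without
# a new organizing system (the assumption in the text). "Life" would then
# correspond to interactions beyond this enumeration.
N = 3
for interaction in interactions_up_to(BASE_FUNCTIONS, N):
    print(interaction)
```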

The earth's adaptation to produce human intelligence marks a functional transition point in the life of the earth. Human intelligence has enabled the part of the earth that is mankind not only to generate a surplus of resources, which other animals can also do, but to represent that surplus in abstract terms that remove the barriers to accumulation. Whether the surplus is knowledge of where fruits or vegetables can be gathered, or reasoning processes that enable prey animals to be outwitted or predator animals to be escaped, with such abstraction any accumulation can be represented as, for example, abstract economic value that can be stored and exchanged, so that it can be accumulated at levels orders of magnitude greater. But value in the abstract is impact on any targeted problem in the world, so the capacity to accumulate value is the capacity for this subset of the world that is human to achieve impact on the entire world itself. This removal of barriers to accumulating value has enabled human beings to accumulate orders of magnitude greater value, and to have the potential for orders of magnitude greater impact on the world around us, than any other organism.
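
The barrier-removal argument can be pictured with a toy calculation (all numbers hypothetical): a concrete surplus such as gathered fruit is capped by spoilage, while the same surplus abstracted into stored value accumulates without a ceiling:

```python
# Toy illustration (hypothetical numbers): a concrete surplus is capped by
# a physical barrier such as spoilage, while the same surplus abstracted
# into stored economic value has no such ceiling.

STORAGE_CAP = 50        # most fruit that can be kept before it spoils
DAILY_SURPLUS = 10      # surplus gathered each day
PRICE_PER_FRUIT = 1.0   # abstract value of one unit of surplus

fruit_held = 0
value_held = 0.0
for day in range(365):
    fruit_held = min(fruit_held + DAILY_SURPLUS, STORAGE_CAP)  # barrier
    value_held += DAILY_SURPLUS * PRICE_PER_FRUIT              # no barrier

print(f"Concrete surplus after a year: {fruit_held} fruit (capped)")
print(f"Abstracted surplus after a year: {value_held:.0f} units of value")
```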

If it's true that no other animal has had the intelligence to abstract value into a form that can be accumulated in this way, then, taking economic value as one measure of capacity for impact, the economic value of human intelligence might be at least the value of all economic wealth accumulated to date, that is, all the economic wealth in existence on the earth. For this reason human intelligence, in terms of its capacity for economic and other impacts, might be nature's most important innovation in the entire 3.5 billion year history of life on earth. It is not only likely the most impactful innovation in the planet's history, but it might remain the most impactful innovation possible in the planet's future, until a similarly transformative change in the ability to create impact on any objective (to create and store value) is achieved.

Human intelligence is unique in its capacity for impact as measured in this way. However, a model of human cognition defined using HCFM suggests that, without a fundamental change in the organization of groups, human cognition, whether individual or collective, faces a limit to the complexity of problems it can reliably define or solve, and a limit to the degree to which it can reliably scale cooperation to increase that capacity.

A new cognitive system with the ability to solve even higher-order problems would make entirely new problems solvable, such as the problem of abstracting value even further so that it can be accumulated at levels many orders of magnitude higher still. For example, such a system might make it possible to understand, at a level specific enough to communicate sufficient incentive to everyone who needs to be incentivized to participate, how cooperating on one thing, like making better cell phones, can benefit cooperation on everything else, like gardening, because at some level of abstraction all cooperation is the same task. But this can't reliably be achieved without groups having the capacity to abstract the value in abstraction itself, which would provide an entirely new level of value the group might accumulate, and therefore an entirely new level of potential impact.
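
A minimal sketch of the claim that all cooperation is, at some level of abstraction, the same task: two superficially unrelated activities instantiated from one generic routine (the task names, combining rules, and numbers are hypothetical):

```python
# Illustrative sketch only: "at some level of abstraction, all cooperation
# is the same task" pictured as a single generic routine that pools
# individual contributions into a shared outcome.

from typing import Callable

def cooperate(contributions: list[float],
              combine: Callable[[list[float]], float]) -> float:
    """Generic cooperation: pool individual contributions into one outcome."""
    return combine(contributions)

# Two superficially unrelated tasks instantiate the same abstract pattern.
phone_quality = cooperate([0.7, 0.9, 0.8], combine=lambda xs: sum(xs) / len(xs))
garden_yield = cooperate([12.0, 8.0, 5.0], combine=sum)

print(f"Cell phone quality score: {phone_quality:.2f}")
print(f"Garden yield (kg): {garden_yield}")
```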

This ability to accumulate exponentially greater value would mark another transition point in the life of the earth. Currently, even though mankind can travel to the moon or to Mars, we are constrained to the earth's resources in doing so. A new cognitive system with the potential to conceive of higher-order problems and solutions might operate not just with the earth's resources but with the resources of other planets or celestial bodies, and so effectively be part of a larger system than the earth, such as the solar system or the galaxy. In increasing the capacity of this subset of the solar system or galaxy that is human to change the entire earth itself, such a cognitive system would be the most important innovation in both the history and the immediate future of mankind.

An ability to accumulate value (to impact problems) at such a scale would represent such a transformative change that it would replace human intelligence in importance. When that point of transformation arrives and our collective capacity to accumulate value scales dramatically, no innovation resulting from human intelligence at any time in the history of human civilization, whether the wheel, the printing press, electricity, the internal combustion engine, or even the computer, will have been more important, just as no innovation before that point can have been more important than nature's invention of intelligence itself, which led to them all.

General Collective Intelligence, or GCI, is a system that organizes groups of individuals into a single collective intelligence with the potential for vastly greater general problem-solving ability than any individual in the group. Because GCI leverages a model of human cognition that might also be used to implement a system of Artificial General Intelligence, or AGI, implementing a GCI will create semantic models of reasoning processes and information, as well as other components, that can be reused to create an AGI. However, GCI has been suggested to be more important than AGI, because some problems are predicted not to be reliably solvable without GCI. One such problem is making an AGI safe. AGI has been predicted to drive unprecedented inequality and to enable unprecedented surveillance for whoever owns it. The potential centralization of authority resulting from such destabilizing levels of inequality and surveillance has been suggested to be an existential risk in itself: once such a system is used to centralize decision-making, objecting to harm caused by that system might not be reliably possible, while misalignment of that system with collective interests might virtually guarantee that such harm will occur; and once such a decision-making system is in place, controlling it might not be reliably achievable if it is too complex to understand. However, GCI is predicted to have the capacity to reliably define a sufficiently complex web of decentralized mutual cooperation to bind an AGI to the well-being of people and the planet.

AGI is only one existential risk, but a system for significantly increasing a group's general problem-solving ability is relevant to solving the problem posed by every other existential risk as well. If the defining feature of complex problems is that they are too complex for us to conceptualize either the problem or its solution, then the problem underlying all existential risks is the lack of a system that reliably makes us smart enough to do so. One experiment to validate the capacity of a GCI to significantly increase the problem-solving ability of a group in this way is discussed here: https://forum.effectivealtruism.org/posts/cc8WN9LFz8pATRMYz/how-to-launch-an-experiment-in-the-effective-altruism
