
Update October 2022: Please see an updated and expanded version of this post linked here on arXiv (co-authored with Anders Sandberg). The original post is below. 

Summary

The leading discussion in existential risk and effective altruism centers on investigating each risk separately, such as misaligned AI and nuclear war. Yet the outcomes of many risk situations hinge on the performance of human groups---on whether a collective (examples include democratic governments and scientific communities) can work together and generate good outcomes. This issue is often referred to as a coordination problem in the existential risk and effective altruism communities. Here, I first argue that it is too narrow to think about these rich and encompassing phenomena as problems of coordination. I then propose to think about them as Collective Intelligence problems. Collective Intelligence is a transdisciplinary field that studies how a collective of individuals performs together to achieve goals in a broad sense. Its applications cut across humans (tribes, corporations, cities, etc.), animal groups, robotic swarms, and collections of neurons. I give an overview of this field and highlight findings relevant to improving the collective intelligence of human groups. Finally, I argue that the existential risk and effective altruism communities can benefit from learning from other transdisciplinary disciplines and from putting greater effort into identifying commonalities across risk domains, which would help improve the robustness of our society in the face of many potential risks.

About the author

Vicky Chuqiao Yang is a fellow at the Santa Fe Institute in the United States. Her scientific research uses quantitative methods to understand and predict collective human behavior. More about her work can be found on her website, Google Scholar, or Twitter.

 ______________________________________________

 


We’ve got to be as clear-headed about human beings as possible, because we are still each other’s only hope.    ---James Baldwin
 

Many existential risk efforts, such as improving AI safety, preventing large-scale nuclear war, and preventing climate catastrophe, hinge on better collective performance of human groups. For example, mitigating global pandemic risks involves coordinating individual lifestyle changes, the research efforts of the scientific community, and the policies of nation-states. Our capacities for effective collective decision-making and collective action are the infrastructure for responding to a wide variety of, if not all, catastrophic risks. The vulnerability of having weak infrastructure for collective decision-making and collective action can be compared to a pyramid sitting on its tip. Many forces can cause the pyramid to fall, be it a gust of wind or a shake of the ground. Rather than focusing on which forces might tip the pyramid over and preventing each of them, it is more useful to find ways to shift the pyramid out of its vulnerable position. Similarly, discussion of existential risks has centered on which risks are likely to occur and how to circumvent each one. While this line of work is necessary and valuable, the complementary perspective of making the system's infrastructure more stable under all risks can be fruitful but remains less explored. 

 

 

In the effective altruism and existential risk communities, work concerned with human collective decisions and actions is often referred to as “coordination problems.” However, I find it narrow and restricting to think about these complex phenomena of collective decision and action as problems of coordination. First, the term “coordination problem” is strongly tied to coordination games from game theory. While a helpful framework in many applications, it typically considers a small number of agents with a fixed set of explicit rules. Many of the hard problems we face, by contrast, involve large numbers of agents, and the mechanisms of interaction may be implicit or change over time. This distinction is at the core of some of the most important human endeavors. According to Harari (2015), Homo sapiens has accomplished astonishing achievements and dominated the earth because humans can cooperate in large numbers and in flexible ways. “In large numbers” is in contrast to chimpanzees and other mammals, which can only cooperate in small groups where individuals develop personal relationships with their counterparts. “In flexible ways” is as opposed to bees and ants, which can work in large numbers but only in rigid roles. This idea is echoed in the social brain hypothesis (Dunbar 1998), which posits that human intelligence developed to handle the increased complexity of living in large social groups, suggesting human intelligence and group behavior are inextricably linked. Second, beyond the considerations of group size and flexible interaction, viewing the human collective through the lens of coordination tends to dwell on the negative outcomes of bad coordination, such as the tragedy of the commons. These outcomes are important to avoid; however, making them the sole focus risks neglecting the upside of the human collective, where the group is more capable than the sum of the individuals within it. 
A focus on avoiding bad coordination can also crowd out thinking about how to achieve good collective outcomes. Instead of framing these human-collective issues as problems of coordination, I advocate thinking about them more broadly as problems of Collective Intelligence. 

 

A brief introduction to the transdisciplinary field of Collective Intelligence 

The issue of how a collective works together is a central question in many applications in human society and beyond. Examples include human groups, such as in mitigating existential risks, elections, prediction markets, and juries; animal groups, such as in deciding the direction of a migration; robotic swarms, such as in designing rules for individual robots so that the collective can perform certain tasks; and neurons, as when the brain makes coherent sense of the world while each neuron responds to different, and sometimes conflicting, stimuli. In all these diverse applications, a shared problem is how to process distributed information effectively. Researchers have come together to study the phenomena cross-cutting these application domains, and the resulting transdisciplinary field is referred to as Collective Intelligence. I call this field transdisciplinary, as opposed to interdisciplinary, because it studies a shared phenomenon that manifests in many disciplines, spanning computer science, neuroscience, robotics, animal behavior, and many social sciences. It transcends disciplinary boundaries, whereas interdisciplinary work often means combining two fields to discover new things in either one. Collective Intelligence is still a young field. It has much in common with the community engaged in effective altruism and existential risk---not only because its subject of study is closely related to mitigating existential risk (see Bak-Coleman et al. 2021 for a good argument that collective behavior should be considered a “crisis discipline”), but also because they share similar challenges in organizing a transdisciplinary effort. 

Through many rigorous research efforts over the past two decades in Collective Intelligence, researchers have identified many factors that help or hurt the performance of human groups in solving problems and making collective decisions, predictions, or estimations. Research does not yet offer recipes for constructing a good human collective, as collective performance is the outcome of complex interactions among many variables. Nonetheless, these findings point to a few general directions for improving the collective performance of human groups, and I offer a brief (and non-exhaustive) summary of the prominent findings below. 

 

Summary of Collective Intelligence findings for human collective performance

Collective intelligence. Individual general intelligence (commonly measured by IQ) captures the observation that one individual can excel at a wide range of tasks, such as math and music. Woolley and colleagues (2010) found a similar property for human groups, which they named collective intelligence. When small groups were asked to perform a wide range of tasks, including brainstorming, sudoku, and unscrambling words, performance on a subset of the tasks gave a good out-of-sample prediction of performance on the rest. In other words, some groups outperform others across a wide variety of tasks, suggesting an equivalent of general intelligence for groups. This finding has been replicated by later studies (see Riedl et al. 2021 for a review). These studies also look into what makes some groups more effective than others. Among the peer-reviewed studies (see Riedl et al. 2021 for a summary), the consensus is that the social process matters more than the skill of individual members. Social perceptiveness, the ability of individuals to identify social cues, is a key factor in improving group performance. It manifests in group behaviors such as even conversational turn-taking---each group member speaking a roughly equal amount. Consequently, groups with a higher proportion of women tend to have higher collective intelligence, as being female is correlated with greater social perceptiveness. The IQ of individual members plays a much smaller, and some argue negligible, role.  

Diversity. Besides the social perceptiveness of group members, research finds benefits in having a diverse group of individuals. The diversity referred to here is diversity in knowledge and cognitive models. Abundant research---in lab experiments, theoretical and computational models, and real-world problem-solving scenarios---has found a “diversity bonus”: a diverse group performs better than a homogeneous group (Page 2019, Aminpour et al. 2021). Some even find that a diverse group of non-experts can outperform a homogeneous group of experts (Hong & Page 2004). For a group of diverse agents to work together, an important requirement is cognitive alignment---such as commitment to group goals and shared beliefs (Krafft 2019). Another line of work finds that maintaining diversity in a group is hard---conformity and traditional market forces work against it. Mann & Helbing (2017) propose alternative incentives for maintaining diversity: rewarding accurate minority predictions, especially rewarding the minority who are right when the majority is wrong. 
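The Hong & Page (2004) result can be illustrated with a toy simulation. In their model, agents search a random landscape for high values, each following a personal heuristic (an ordered list of step sizes); groups search by relay, passing the current best point around. The sketch below is a heavily simplified version with parameter values and a function name of my own choosing, comparing a group of the individually best agents against a randomly drawn, more heuristically diverse group.

```python
import itertools
import random

def diversity_demo(n=200, max_step=8, k=3, group_size=8, seed=0):
    """Toy version of the Hong-Page (2004) model; parameters and
    simplifications are mine. Agents hill-climb on a random circular
    landscape; a heuristic is a tuple of k step sizes tried in order.
    Returns (score of best-agents group, score of random group)."""
    rng = random.Random(seed)
    landscape = [rng.random() for _ in range(n)]

    def climb(pos, heuristic):
        # keep stepping while any of the agent's step sizes improves the value
        improved = True
        while improved:
            improved = False
            for step in heuristic:
                if landscape[(pos + step) % n] > landscape[pos]:
                    pos, improved = (pos + step) % n, True
        return pos

    def ability(h):
        # an agent's skill: average value reached, over all starting points
        return sum(landscape[climb(p, h)] for p in range(n)) / n

    def group_score(group):
        # relay search: pass the current point around until no one improves it
        total = 0.0
        for p in range(n):
            pos, improved = p, True
            while improved:
                improved = False
                for h in group:
                    new = climb(pos, h)
                    if landscape[new] > landscape[pos]:
                        pos, improved = new, True
            total += landscape[pos]
        return total / n

    agents = list(itertools.permutations(range(1, max_step + 1), k))
    agents.sort(key=ability, reverse=True)
    best_group = agents[:group_size]          # individually best (similar heuristics)
    random_group = rng.sample(agents, group_size)  # random, more diverse
    return group_score(best_group), group_score(random_group)
```

A relay only ever improves the current point, so either group outperforms any single member; whether the random group beats the best-agents group depends on the landscape and parameters, but in Hong and Page's setting it typically does, because the top individual performers use near-identical heuristics and get stuck at the same local optima.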

Committed minorities. There is much evidence that the presence of committed minorities, those who stubbornly hold on to their opinion, can lead to substantial changes in the collective behavior of a group. Most notably, Centola et al. (2018) found through human-subject experiments that a critical mass of committed individuals (around 25% in their experiments) can tip social conventions in the direction of those individuals. This critical transition is also predicted by an abundance of theoretical studies (such as Xie et al. 2011). Further investigation is needed to pin down the critical mass required in different scenarios; nevertheless, the powerful effect of committed minorities on groups can be a fruitful direction for thinking about how to elicit (or prevent) social change.
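The tipping dynamic can be reproduced in a minimal model. The sketch below is a binary naming game in the spirit of Xie et al. (2011); the function name and parameter values are my own. Committed agents permanently hold opinion "A", everyone else starts at "B", and agents update through random pairwise speaker-listener interactions. Above a critical committed fraction, the population flips to "A"; below it, the "B" consensus persists for a very long time.

```python
import random

def naming_game(n=1000, committed_frac=0.15, steps=200_000, seed=0):
    """Binary naming game with a committed minority (sketch after
    Xie et al. 2011). Each agent holds a set of opinions drawn from
    {"A", "B"}; committed agents hold {"A"} and never update.
    Returns the final fraction of agents holding only "A"."""
    rng = random.Random(seed)
    committed = set(range(int(n * committed_frac)))
    opinions = [{"A"} if i in committed else {"B"} for i in range(n)]

    for _ in range(steps):
        speaker, listener = rng.sample(range(n), 2)
        word = rng.choice(sorted(opinions[speaker]))  # speaker utters one opinion
        if word in opinions[listener]:
            # successful communication: both collapse to the word
            if speaker not in committed:
                opinions[speaker] = {word}
            if listener not in committed:
                opinions[listener] = {word}
        elif listener not in committed:
            opinions[listener].add(word)  # listener learns the new word

    return sum(op == {"A"} for op in opinions) / n

print(naming_game(committed_frac=0.30))  # above the tipping point
print(naming_game(committed_frac=0.03))  # below the tipping point
```

With the first call the group tips to near-unanimous "A", while with the second the committed minority makes almost no headway; Xie et al.'s theoretical critical fraction is roughly 10%, against the ~25% Centola et al. observed experimentally.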

Social influence. An area of heated debate is whether and how individuals in a collective should communicate with each other. High-quality experiments have found that social influence can have both positive and negative effects on collective performance (see Jayles et al. 2017 for an example of a positive effect, and Lorenz et al. 2011 for a negative one). On the one hand, letting individuals exchange information can erode independent information and worsen collective performance (a phenomenon related to groupthink). On the other hand, communication may help individuals deliberate and discover better answers. I think the story is complicated because the effect of social influence likely depends on several other variables. A paper led by the author (Yang et al. 2021) predicts that it depends on the proportion of individuals using social information for their decisions and on whether committed minorities are present. Others find that it also depends on social network structure and adaptability (Almaatouq et al. 2020; Becker, Brackbill & Centola 2017). The bottom line is that having most people blindly follow others rarely leads to good outcomes.
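As a toy illustration of why the proportion of social learners matters (this is my own minimal sketch, not the model from any of the papers cited above): agents make independent estimates with skewed noise, so the crowd median starts out accurate while the mean is biased high; social learners then replace their estimates with the crowd mean, dragging the median toward the mean's bias.

```python
import math
import random
import statistics

def crowd_error(social_frac, n=500, sigma=0.7, rounds=5, seed=0):
    """Absolute error of the crowd median after social learning.
    Estimates carry lognormal (right-skewed) noise, so the independent
    median is near the truth while the mean overshoots; social learners
    repeatedly copy the mean, which can pull the median off target."""
    rng = random.Random(seed)
    truth = 10.0
    estimates = [truth * math.exp(rng.gauss(0.0, sigma)) for _ in range(n)]
    social = set(rng.sample(range(n), int(n * social_frac)))
    for _ in range(rounds):
        mean = statistics.fmean(estimates)
        estimates = [mean if i in social else e for i, e in enumerate(estimates)]
    return abs(statistics.median(estimates) - truth)

# The error is small when estimates stay independent and grows once
# enough agents copy the crowd instead of using their own information.
for frac in (0.0, 0.4, 0.8):
    print(frac, round(crowd_error(frac), 2))
```

This captures only one mechanism (loss of independence); the experiments cited above show that other factors, such as network structure and who talks to whom, can flip the sign of the effect.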

Better sensors. One important way to improve group performance is to let individuals gather better information. This is especially important in forecasting and prediction tasks, such as election forecasts, and can be achieved by asking better survey questions. For example, in a method called “surprisingly popular” (Prelec, Seung & McCoy 2017), instead of only asking people what they think, researchers also ask people what they think other people think. This helps to discover correct answers that are hidden by social norms, such as Trump winning the 2016 US election. Another example is using individuals' social circles as better sensors (Galesic et al. 2021)---instead of asking individuals who they will vote for, ask who their friends will vote for. 
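The aggregation rule behind “surprisingly popular” is simple enough to sketch: choose the answer whose actual share of votes most exceeds the share the crowd predicted it would get. Below is a minimal version (function name and input format are my own), using the capital-of-Pennsylvania example from Prelec et al.'s paper.

```python
def surprisingly_popular(votes, predictions):
    """Pick the answer whose actual popularity most exceeds its predicted
    popularity (Prelec, Seung & McCoy 2017). `votes` is a list of chosen
    answers; `predictions` is a list of dicts, one per respondent, mapping
    each answer to the fraction of the crowd they expect to choose it."""
    options = sorted(set(votes))
    actual = {o: votes.count(o) / len(votes) for o in options}
    predicted = {o: sum(p.get(o, 0.0) for p in predictions) / len(predictions)
                 for o in options}
    return max(options, key=lambda o: actual[o] - predicted[o])

# "Is Philadelphia the capital of Pennsylvania?" Most people wrongly say
# "yes", and nearly everyone predicts a yes-majority; the "no" voters know
# the capital is Harrisburg and also expect to be in the minority, so "no"
# receives far more votes than predicted.
votes = ["yes"] * 65 + ["no"] * 35
predictions = ([{"yes": 0.9, "no": 0.1}] * 65
               + [{"yes": 0.8, "no": 0.2}] * 35)
print(surprisingly_popular(votes, predictions))  # → "no"
```

Here "no" gets 35% of votes against a predicted 13.5%, a surprise of +21.5 points, so the method recovers the correct answer even though it is the minority view.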

Overall. The literature in this field is vast, and here I highlight only a small subset of findings relevant to the performance of human groups; I provide a list of resources for learning more at the end of the post. One theme that recurs in these studies is that individuals' ability to perform the task matters less than how the individuals interact. 

 

Closing thoughts

From the coordination perspective, a central issue is how to resolve different people wanting different things. Collective intelligence offers a broader and, I think, more hopeful perspective on this issue. Difference may first appear as an obstacle to coordination, but from the collective intelligence perspective, difference, if harnessed with the right aggregation mechanism, is a source of strength. Take the example of a school of fish. Each fish senses its local environment---food, temperature, potential predators, etc.---and has a different preferred direction for where to go next. Nevertheless, the collective effectively aggregates information from all fish through local interactions in movement. The collective's movement responds to a much wider environment than the sensing range of any individual fish. Thus collective intelligence can be generated from different individuals acting on different information and wanting different outcomes. These differences should be a source of hope, and the key question should be how to harness and aggregate individuals' differences in productive ways. 

I do not have the answers to the big collective intelligence problems facing human collectives. But I am hopeful, because nature has solved many similarly hard problems. Besides the example of fish, each neuron in our brain receives different, and sometimes conflicting, information, yet the brain makes coherent sense of the world. Hopefully, through the transdisciplinary research efforts of the many scientists working in this area (including myself), we will learn from nature's approaches to help human groups. 

The existential risk and effective altruism communities are themselves collectives, whose performance is influenced by the principles discussed above. The summary notes that both the presence of women and diversity improve the performance of a collective. At the time of writing, the academic community in effective altruism and existential risk seems to be male-dominated, especially among senior members. Thinking about how to involve a more diverse community can help improve the collective performance of these fields. This also applies to attracting people from other domains with different experiences and backgrounds to effective altruism and existential risk reduction.

There is much that the existential risk and effective altruism communities can learn from other transdisciplinary efforts like collective intelligence (see also the blog post What complexity science and simulation have to offer effective altruism on their relationship with Complexity Science, another long-standing transdisciplinary community). These transdisciplinary communities look across various application domains (such as neurons, humans, and robots) and identify common central processes. They study these cross-cutting phenomena so that robotics researchers learn from animal scientists, social scientists learn from neuroscientists, and so on. A similar spirit can help the existential risk and effective altruism communities. What fundamental processes make AI, nuclear war, and pandemics dangerous? Some research has suggested candidates---the pace of regulatory innovation may be far slower than that of technological innovation, or dangerous technology may be accessible to a small number of bad actors. Such exercises in identifying commonality across risk domains help recognize key characteristics underlying humanity's greatest dangers. With an understanding of the commonalities across risks, there is hope for finding cross-cutting solutions to many problems at once. 

 

Acknowledgments

This post benefited greatly from the input of Rory Greig (@rory_greig), who inspired me to write it and provided valuable feedback and additions. Thanks also to Jenna Marshall for feedback.

 

Suggested resources for learning more about collective intelligence

Besides the articles and books cited in this post, below are some resources for learning more about collective intelligence.

  • The Wikipedia page on collective intelligence, which contains a detailed summary of research findings in various application domains.
  • Bak-Coleman, J.B. et al. (2021) Stewardship of global collective behavior, Proceedings of the National Academy of Sciences, 118(27) (link). This recent article summarizes recent findings in the field and discusses connections between collective behavior and challenges of the long-term future.
  • J. Surowiecki (2004) The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. A great introductory book for many phenomena around collective intelligence. It is a bit dated, so it does not contain the most recent research results.

References

  • Almaatouq, A., Noriega-Campero, A., Alotaibi, A., Krafft, P. M., Moussaid, M., & Pentland, A. (2020). Adaptive social networks promote the wisdom of crowds. Proceedings of the National Academy of Sciences, 117(21), 11379-11386. link
  • Aminpour, P., Gray, S. A., Singer, A., Scyphers, S. B., Jetter, A. J., Jordan, R., & Grabowski, J. H. (2021). The diversity bonus in pooling local knowledge about complex problems. Proceedings of the National Academy of Sciences, 118(5). link
  • Bak-Coleman, J.B. et al. (2021). Stewardship of global collective behavior. Proceedings of the National Academy of Sciences, 118(27). link
  • Becker, J., Brackbill, D., & Centola, D. (2017). Network dynamics of social influence in the wisdom of crowds. Proceedings of the National Academy of Sciences, 114(26), E5070-E5076. link
  • Centola, D., Becker, J., Brackbill, D., & Baronchelli, A. (2018). Experimental evidence for tipping points in social convention. Science, 360(6393), 1116-1119. link
  • Dunbar, R.I.M. (1998), The social brain hypothesis. Evol. Anthropol., 6: 178-190. link
  • Galesic, M., de Bruin, W. B., Dalege, J., Feld, S. L., Kreuter, F., Olsson, H., & van der Does, T. (2021). Human social sensing is an untapped resource for computational social science. Nature, 1-9. link
  • Gilbert, D. (2013). The surprising science of happiness, TED Talk. link
  • Harari, YN (2015). Why humans run the world, TED Ideas. link
  • Hong, L., & Page, S. E. (2004). Groups of diverse problem solvers can outperform groups of high-ability problem solvers. Proceedings of the National Academy of Sciences, 101(46), 16385-16389. link
  • Jayles, B., Kim, H. R., Escobedo, R., Cezera, S., Blanchet, A., Kameda, T., ... & Theraulaz, G. (2017). How social information can improve estimation accuracy in human groups. Proceedings of the National Academy of Sciences, 114(47), 12620-12625. link
  • Lorenz, J., Rauhut, H., Schweitzer, F., & Helbing, D. (2011). How social influence can undermine the wisdom of crowd effect. Proceedings of the National Academy of Sciences, 108(22), 9020-9025. link
  • Mann, R. P., & Helbing, D. (2017). Optimal incentives for collective intelligence. Proceedings of the National Academy of Sciences, 114(20), 5077-5082. link
  • Navajas, J., Niella, T., Garbulsky, G., Bahrami, B., & Sigman, M. (2018). Aggregated knowledge from a small number of debates outperforms the wisdom of large crowds. Nature Human Behaviour, 2(2), 126-132. link
  • Krafft, P. M. (2019). A simple computational theory of general collective intelligence. Topics in Cognitive Science, 11(2), 374-392. link
  • Page, S. (2019). The Diversity Bonus. Princeton University Press.
  • Prelec, D., Seung, H. S., & McCoy, J. (2017). A solution to the single-question crowd wisdom problem. Nature, 541(7638), 532-535. link
  • Riedl, C., Kim, Y. J., Gupta, P., Malone, T. W., & Woolley, A. W. (2021). Quantifying collective intelligence in human groups. Proceedings of the National Academy of Sciences, 118(21). link
  • Surowiecki J. (2004) The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations.
  • Woolley, A. W., Chabris, C. F., Pentland, A., Hashmi, N., & Malone, T. W. (2010). Evidence for a collective intelligence factor in the performance of human groups. Science, 330(6004), 686-688. link
  • Xie, J., Sreenivasan, S., Korniss, G., Zhang, W., Lim, C., & Szymanski, B. K. (2011). Social consensus through the influence of committed minorities. Physical Review E, 84(1), 011130. link
  • Yang, V. C., Galesic, M., McGuinness, H., & Harutyunyan, A. (2021). When do Social Learners Affect Collective Performance Negatively? The Predictions of a Dynamical-System Model. arXiv preprint arXiv:2104.00770. link



 

Comments

Thanks for the summary here. I've been eyeing collective intelligence for a while, still trying to figure out what to make of it. I think some of the ideas in the field seem pretty exciting.

"Human collective intelligence" seems like an obviously important thing to improve, if improvement is tractable and somewhat cost-effective. I haven't been as excited about the particular academic field as I have been about the abstract idea, though. I haven't previously read many of the papers, but I've seen several talks on YouTube by some of what seemed to be the prominent figures (according to Wikipedia). I'm looking at some of these papers now, and some seem interesting, though I feel like I'm missing a much bigger picture. Much of what I've come across feels quite scattered. Maybe there's a textbook or large set of talks somewhere? 

I think right now I'm hesitant to call any work I do that I could imagine being around "collective intelligence" as "collective intelligence", because I just don't feel like I understand the particulars of the field. I'm similarly hesitant to do things like advocate for "collective intelligence" funding or research for the same reason.

There are several topics that seem important in this area that I haven't found addressed much by this field, which kind of surprises me.

  1. I've been impressed by much of the work around Philip Tetlock and forecasting, but the collective intelligence work mostly seems fairly removed from that. I haven't seen Tetlock or others at Good Judgement Inc mention the collective intelligence field, for instance.
  2. What about ways that technology could improve the intelligence of collectives? Is that covered?
  3. I've seen various scientific experiments, but am curious about theories of how collective intelligence could be dramatically increased in the next 20 to 50 years. Is that discussed somewhere?
  4. There's a lot of research and discussion around epistemology and how crowds come to conclusions on controversial subjects. Would that be considered collective intelligence, or does intelligence preclude epistemics?
  5. Are things like rational reasoning covered, or is all of that considered non-collective?

I'm much more interested in work on making human groups better at reasoning than I am the work on groups of robots or fish; it's not clear to me how relevant the latter parts are, or how valuable it is for all of these to be part of one research effort.

Some other related questions I'd be curious about:

  1. It's not clear to me how usable these sorts of findings of collective intelligence are. Are there many cases of them being incorporated by corporations or similar, and experiencing large gains? Have people in the field of collective intelligence themselves used these ideas to have much more intelligence?
  2. Are there open research agendas or main goals for the field for the next 20-50 years?
  3. The idea of collective intelligence (CI) seems interesting, but I can barely find any literature about it. Have there been estimates of the CI of public groups we might know of? Or, have there been cases where it can be estimated in ways that are fairly obviously useful? I would expect that if there were a good measure, it would be interesting to use to understand hedge funds and other kinds of intelligent organizations.

 

No need to answer any of these questions, I just wanted to flag them to express where I'm coming from. Again, I'm excited about the idea of the field (I think), I just feel like I really don't quite understand it.

It's not clear to me how usable these sorts of findings of collective intelligence are. Are there many cases of them being incorporated by corporations or similar, and experiencing large gains? Have people in the field of collective intelligence themselves used these ideas to have much more intelligence?

This was my top question after reading the post, as well.

The section I found most interesting was on group performance. I notice that the problems mentioned were mostly pretty small:

When asking small groups to perform a wide range of tasks, including brainstorming, sudoku, and unscrambling words, the performance on a subset of the tasks gives a good out-of-sample prediction.

Do you know of any studies where groups were asked to tackle more complex problems or tasks? This is obviously much harder to study, but also seems more relevant to a range of real-world use cases.

*****

Many of the most successful collectives in recent history began as startups (small groups of people running an enterprise together). Discussions of these organizations often highlight the intelligence of individual members, and the literature on startup hiring often emphasizes looking for the smartest/most impressive people. Social perceptiveness gets less attention, but is also harder to see; it's easier to say "Mark Zuckerberg is smart" than to study a bunch of early Facebook meetings. 

On the one hand, I wonder whether this leads to social perceptiveness being underrated. On the other hand, I wonder whether the greater difficulty of studying work on harder/larger-scale problems weighs in favor of social perceptiveness — e.g. if perceptiveness matters more for something like "allocating the group's work between small, simple tasks" than "determining how to approach problems too difficult for any one member to succeed".

(I haven't read the cited studies yet, so maybe these questions would have obvious answers if I did.)

Thanks @Ozzie Gooen and @Aaron Gertler for the detailed comments. As Ozzie pointed out, the findings in collective intelligence (CI) are indeed very scattered, in the sense that what we know about CI exists as bits of effects that are often not connected, while real-world scenarios are typically affected by many interacting social and psychological factors, and how these effects interact is often unclear. I don't think there is an up-to-date synthesizing effort in the field at the moment, which is my frustration too (it's what I'm trying to do in my work, and it's no easy task). As a young transdisciplinary field, CI still has much work to do in organizing and synthesizing. In fact, it just started its own journal (Collective Intelligence, published by ACM) very recently. In the area of organizing, I think CI can learn a lot from EA efforts. 

Good question on the applications of collective intelligence findings, and on the role of technology. I don't have in-depth knowledge of these areas. However, a resource that could be helpful is the Handbook of Collective Intelligence (2005), edited by Bernstein and Malone. It devotes a chapter to organizational behavior (effects on teams and organizations), and another to AI. Also see Nesta for some efforts applying collective intelligence research findings in policymaking and beyond. 

As for collective intelligence in more complex scenarios, there is a study on project groups, another on solving a large engineering problem, and also a Huff Post article on CI for "mega problem-solving." The work done on complex, real-world problem solving is limited compared to what has been done in labs; I hope there will be more of this in the future. Start-ups would be a great subject to study! I haven't seen any work on this, though. 

Thanks so much for the summary, I just noticed this for some reason.

I'll keep an eye out.

It sounds a bit like CI is fairly scattered, doesn't have all too much existing work, and also isn't advancing particularly quickly as of now. (A journal sounds good, but there are lots of fairly boring journals, so I don't know what to make of this.)

Maybe 1-5 years from now, or whenever there gets to be a good amount of literature that would excite EAs, there could be follow-up posts summarizing the work.

Thanks for posting this Vicky!  It's a super interesting line of thought and I'd love to hear more about your research and how you view its path to effecting change in the world.

I'm commenting to flag one typo which threw me off the first time I read Vicky's comment, for any future readers---I think Bernstein and Malone's Handbook of Collective Intelligence was published in 2015, rather than 2005.

It feels like CI has been coming into its own as an actual field of research over the last 10-15 years. It'd seem much less promising to me if there had been a handbook published in 2005 without any major synthesizing efforts since.
