
A special issue on Superintelligence is coming up at the journal Informatica. The call for papers is given below. We would welcome submissions from a range of perspectives, including philosophy and other fields in which effective altruists work.



Since the inception of the field of artificial intelligence, a prominent goal has been to create computer systems that reason as capably as humans across a wide range of domains. Over the last decade, this goal has come closer to reality. Machine learning systems now excel at many signal-processing tasks and have achieved superhuman performance in learning tasks including the games of Go and heads-up poker. More broadly, AI has begun to reshape many parts of society. This remarkable progress raises the question of how the world may look if the field of artificial intelligence eventually succeeds in creating highly capable general-purpose reasoning systems. In particular, it has been hypothesized that such advances may lead to the development of a superintelligent agent – one whose capabilities “greatly exceed humans across virtually all domains of interest”.

Discussion of superintelligence originated in AI circles but has since spread to other disciplines. It has been hypothesized that superintelligence might emerge not from a human-programmed AI system but from emulations of the human brain, from brain-computer interfaces, or from genetic engineering. The possibility that an AI system could become superintelligent raises technical questions about how such a system can be made to behave transparently and in alignment with human values. The hypothesis of superintelligence is also an interesting setting in which to examine philosophical questions about cognition, consciousness and moral reasoning. Pragmatically, the social and societal implications of superintelligence appear to be an overwhelmingly important topic. The social sciences have a role in analyzing how the impacts of AI might change as capabilities increase. Further work remains to be done in assessing the risks and benefits of superintelligence, and in shaping policies, protocols and governance instruments that may accentuate its benefits while mitigating its risks. The study of superintelligence therefore connects very different researchers in a multidisciplinary endeavor encompassing AI, philosophy, cognitive science, biology, law and more.


The aim of this special issue is to promote research on superintelligence by approaching the topic in a multidisciplinary and visionary manner. We hope this cooperative effort will produce new insights, innovative ideas and purposeful concepts, building new ground for thinking about superintelligence and related topics. Original research, critical studies and review articles on the recommended topics are welcome. Position papers and visionary papers will be evaluated according to the quality of their argumentation. Submissions from both academia and industry are encouraged.

Topics to be discussed in this special issue include (but are not limited to) the following:

  • Artificial Superintelligence
  • Artificial General Intelligence (AGI)
  • Biological Superintelligence
  • Brain-computer Interfaces
  • Whole Brain Emulation
  • Genetic Engineering
  • Cognitive Enhancement
  • Collective Superintelligence
  • Neural Lace-Mediated Empathy
  • Technological Singularity
  • Intelligence Explosion
  • Definition of Life
  • Definition of Intelligence
  • Machine Ethics & Computational Ethics
  • Consciousness
  • Human Limitations
  • Technological Limitations
  • Societal Risk from Machine-generated Alternate Value Systems
  • Social Benefit and Existential Risk from Superintelligence
  • Policy Options for Superintelligent Artificial Intelligence Development (domestic)
  • International Governance Options for Superintelligent Artificial Intelligence Development
  • Design Considerations for Superintelligence's Morality
  • Turing Test

Note that each of these topics should ultimately be treated in relation to superintelligence, either as a precursor to it or as a consequence of it and its implications.

Key Dates

  • Paper Submission Deadline: August 31, 2017
  • Author Notification: October 31, 2017
  • Final Manuscript Deadline: November 30, 2017

All submissions and inquiries should be directed to matjaz.gams@ijs.si and tine.kole@gmail.com, or to the special issue editors:

  • Ryan Carey, ryan@intelligence.org
  • Matthijs Maas, matthijs.m.maas@gmail.com 
  • Nell Watson, nell.watson@su.org 
  • Roman Yampolskiy, roman.yampolskiy@louisville.edu





