
This is an overview of a series of posts where I’ll discuss a vision of an Evaluative Evolution in the context of the accelerating coevolution of Nature, Humanity, and AI. This is my first personal post ever. Patience and assistance welcome.

May I Join the Nature, Human, AI Accelerating Coevolution Conversation?

If Humanity is to create a future we want, I believe an Evaluative Evolution is likely to be a requirement. Concerns about advanced AI, as presented by the now-defunct Future Fund competition, prompted me to think more seriously about the relationship between my vision of an Evaluative Evolution and the accelerating coevolution of Nature, Humanity, and AI (NHA).

Intending to participate in the competition, I drafted essays for the 12/23/22 deadline, which is the birthdate of reptiles on the cosmic calendar. In mid-December I realized there was no longer a competition and that the essays weren’t too good.

Rather than publish what I had, I decided to make the essays less not too good and assume the same audience and framing that inspired their creation before the mass extinction. I chose this path because:

  • The competition inspired work and passion. 
  • The designers of the competition are sharp and explained a perspective clearly.
  • I personally don’t have another audience for this content.

 

Titles and Key Points from What I've Written So Far:

Dear Chris 

  • I address this first essay to a good friend named Chris. Why? 
  • Starting points are important, and I feel joyful beginning by writing to a friend.
  • I begin describing the relationship between my vision of an Evaluative Evolution and some of the major concerns with advanced AI, and why some of this may be a nasty trick but not a funny joke. I conclude with more detailed summaries of these essays.

The Wickedness of a Wicked World 

  • I posit that Nature is everything. Humans are Nature, of Nature, and wholly dependent upon Nature.
  • Nature is complex. Complex systems are open. Open systems cannot be controlled.
  • Everything Humans create is Nature, of Nature and will be Nature. 
  • Advanced AI cannot be controlled. Can we prove, a priori, the impossibility of absolute control?
  • With advanced AI (e.g. AGI), are we creating another Earth scale system on which we are wholly dependent? 
  • Some have learned a lot about partnering with Nature. Can we understand what they have learned and apply it to NHA (i.e., the accelerating coevolution of Nature, Humanity, and AI)?

The Principle Of Infinite Ignorance (PII)

  • Be cool with Humility. Humanity needs to get down with this. 
  • Forget acquiring all information, all knowledge. PII says the pursuit moves the goal farther away. 
  • Is it possible to prove, a priori, the infinite ignorance of AI? Hint: Maybe.
  • If so, perhaps we can also prove that absolute alignment is impossible. 
  • Should we share reality, nature, and ourselves with an entity so powerful and so ignorant? 
  • Certainly not! That is, not while we understand so little about the original black box: Values, particularly Human Values.
  • Let’s understand the relationship between values and our evolution so that wisdom might compensate for ignorance and sustain abundance.

Values Theory (VT)

  • Values are perceived as an inscrutable ethereality. That is a problem. 
  • We intend to imbue powerful (and real) machines with our values. What’s the plan?
  • As a contribution to the NHA evolutionary praxis, I humbly propose that we could consider values as:
    • Nature, of Nature
    • The progeny of difference and information
    • Complex
    • Patterns of organizing energy 
    • Distributed
    • Coevolving interdependently 
    • Never singular or isolated but infinitely nuanced, tangled and interconnected
    • Measurable

Values Signature (VS)

  • What is the relationship between VT and understanding whether AIs are behaving? 
  • Humans perform these morality checks naturally. We are always assessing, naming, and judging the values embodied and enacted by people, objects, ideas, and actions. But do we do this well enough?
  • I humbly propose Values Signature as:
    • An approach to systematically and purposefully explore, describe, measure, assess, influence and communicate about a system’s values
    • A way to describe and compare values, actions, outcomes, and visions within and between Humans, AI, and Nature
    • A property that is unique, dynamic, and coevolving with the system it describes
    • Applicable to anything - a person, painting, Earth, nation, life form, machine, neighborhood, rock, gravity, problem, bucket, emotion, courage, war

Values Information Systems (VIS) 

  • As AIs become ever more ubiquitous, and as relationships between AIs and values become ever more important to assess and affect, we will need maps of the values space.
  • Today, maps of values space are like geographic maps from around 10,000 BCE.
  • We have massive information systems about every topic imaginable - geography, economics, demographics, space, transportation, education - but we totally lack accessible, useful, credible, secure, dynamic, relational, comprehensive information about values.
  • I propose VIS as kinda like GIS, but instead of organizing all information in the context of geography, VIS organizes all information in the context of values (a toy sketch of this analogy follows the list below).
  • I’ll suggest a few routes toward large-scale (e.g., regional and global) VIS and models of values space.
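
To make the GIS analogy a bit more concrete, here is a minimal, purely illustrative sketch in Python. None of these names or structures come from the essays; ValuesCoordinate, VISRecord, and the toy distance function are hypothetical stand-ins for whatever a real VIS would use. The point is only that, just as GIS indexes records by geographic coordinates, a VIS would index records by coordinates in values space.

```python
# Hypothetical sketch only: names and structures are my own, not from the essays.
from dataclasses import dataclass, field


@dataclass
class ValuesCoordinate:
    """A position in values space: named value dimensions mapped to weights."""
    weights: dict[str, float] = field(default_factory=dict)


@dataclass
class VISRecord:
    """Any entity (person, policy, artwork, machine...) located in values space."""
    entity: str
    coordinate: ValuesCoordinate
    sources: list[str] = field(default_factory=list)  # where the assessment came from


def values_distance(a: ValuesCoordinate, b: ValuesCoordinate) -> float:
    """Naive Euclidean distance across the union of dimensions (missing = 0)."""
    dims = set(a.weights) | set(b.weights)
    return sum((a.weights.get(d, 0.0) - b.weights.get(d, 0.0)) ** 2 for d in dims) ** 0.5


# Toy comparison of two entities by their (made-up) values coordinates.
library = VISRecord("public library", ValuesCoordinate({"access": 0.9, "learning": 0.8}))
ad_network = VISRecord("ad network", ValuesCoordinate({"access": 0.6, "attention_capture": 0.9}))
print(values_distance(library.coordinate, ad_network.coordinate))
```

A real system would need far richer structure (provenance, uncertainty, coevolution over time), but even this toy shape makes the analogy legible.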

An Evaluative Universe

  • We must attend to the Evaluative Evolution of Everything, not just AI, for lots of reasons. A few:
    • Everything is connected. If that sounds trite, you don’t understand.
    • Balanced Evolution. As we privilege the evolution of AI over Nature and Humanity, we compromise morality and wisdom. 
    • Dependence. No matter how abstracted Human and AI values become, Nature’s values are the rules.
    • Fallacy of Majority Goodness. Lots of Goodness in the parts ⇏ Goodness of the whole.
    • Interdependence. Geographic Space and Values Space are an interdependent pair.   
    • Incompressibility. Safely reducing reality may be impossible.
  • What’s required for the well being of our evolution?
    • Lots of evaluation, that’s what. 
    • Evaluation is the praxis, passive and active, of values organizing nature. 
    • Drawing on twenty-five or so years of doing stuff with other stuff-doing people, I’ll propose:
      • Praxes for assessing, e.g., NHA values, actions, outcomes, visions.
      • Evaluative initiatives like the Conservation of Humanness and the Conservation of Reality.
      • Priority conversations, partnerships, research, and futures we should be visioning.

Let Us Be Saints!

  • Nature is our Steward and our Shepherd. Nature began an Evaluative Evolution long before Humans and Life. If this could be truth and comfort, AI may allow Humans to abandon existences better suited to machines, and Humans can choose a future of freedom, attending fully to our humanness. But it will not just happen. We must choose.
  • “We are lazy with our souls.” (Eeyore) Wondering what it all means. What happens when we’re gone? I bet someone is even making fun of Saints. Shame on you. While we fixate on our failings and evils, Saints lived the values we deify…compassion, generosity, kindness, sacrifice, love, courage, beauty, faith, hope. We must remember our goodness. What a tragedy it would be to forget and deprive our descendants of the best of themselves.
  • Values Theory, like some spiritual traditions, suggests that as we shape reality - directing energy into people, things, and ideas - we are breathing life into our values, into our everlasting souls. To help with the soul-saving, can we imagine a Prevenient Protector of our goodness?
  • So far, only a few have had the opportunity to serve the billions, the trillions to come. Now, we, the whole of Humanity, may choose to create a future that honors our most precious values, the best of us, into perpetuity, to be revered as a generation of Saints to all, forever. Let us be Saints!

 

More Essays…

During this compressed creative splurge these last few months, I’ve drafted and conceived of heaps more related essays. To sidestep stuckness, I didn’t include them above. Maybe I’ll mix these and more into the writing as I go. Topics include:

  • A Difficult Question: Should We and/or Shouldn’t We Not Create and Fear All of This AI Stuff?
  • How Hard Will This Be? Understanding Values in an Infinitely Ignorant and Abundant Wicked World
  • Let’s Call It Morality, Not Alignment
  • Nature: Are We Improving, Copying, Recreating, Replicating, or Accelerating our Interdependent Co-evolution? One Guess to Guess My Guess
  • With or Without Humans: A Musical Instrument for AIs To Play
  • A Language for a Future We Want: Communications for NHA 
  • Values Theory and Life: Meaning in Honor and Destruction
  • Some Assumptions and Preliminaries
  • Some Important and Irrelevant Information
  • Lots (and Lots) of the Questions I Asked in the First Few Days

Right. So that’s a lot. 

 

What's Next?

I’m getting provocative comments and questions from a few friends. Getting that feedback while working is thrilling! And helpful. I hope this post prompts more feedback from…soon-to-be-friends?

Inquiry fueled much of this work. A subsequent post, perhaps the next one, may be the 100 or so questions I started with. If you ask a question, I’ll be sure to both respond and include (and cite) it in the list.

With Love and Humility,

Matt


 
