Towards a measure of Autonomy and what it means for EA

by WillPearson, 21st Jul 2017


Autonomy is a human value that is often referenced in discussions around effective altruism. However, I have not seen any attempts to formalise autonomy so that we can discern the impacts our decisions will have on it in the future.

 

 *Epistemic Status: Exploratory*

  

In this article I shall introduce a relatively formal measure of autonomy, based on the intuition that autonomy is the ability to do things by yourself with what you have. The measure allows you to move from less to more autonomy, rather than being black and white about it. Then I shall talk about how increasing autonomy fits in with the values of movements such as poverty reduction, AI risk reduction and the reduction of suffering.

 

Autonomy is not naturally encouraged by the capitalist system, due to the incentives involved. So if we wish for a future with increased autonomy, we need to think about what it is and how best to promote it.

 

Making autonomy an explicit value 

 

Part of Effective altruism is finding shared human values that we can work together towards.  Whether it is existential risk reduction or the reduction of suffering, our endeavours are underpinned by our shared values.

 

Autonomy is one of those values. It has not been made fully explicit, and so I think aspects of it have been neglected by the effective altruism community. With that in mind I want to propose a way of measuring autonomy, to spark further discussion.

 

Autonomy is valuable in many value systems that do not make it a primary value, as it allows you to exist outside the dominant economic and political systems. There are lots of reasons you might want to do so, including:

 

  • The larger system is fragile and you want to insulate parts of it from catastrophic failure elsewhere (reducing existential risk by making the system more resilient to the loss of some of its parts).
  • The larger system has no need for you: for example, you are slowly becoming less economically valuable as more jobs are automated. If something like universal basic income is not implemented, becoming more autonomous might be the only way to survive.
  • You disagree with the larger system for moral reasons, for example because it uses slavery or pollutes the seas. You may wish to opt out of the larger system, in whole or in part, so that you are not contributing to the activity you disagree with.
  • The larger system is hostile to you, for example an authoritarian or racist government. There are plenty of examples of this happening in history, so it will probably happen again.
  • You wish to go somewhere outside the dominant system, for example to live in space.

 

Concepts around Autonomy

 

Autonomy, by my definition, is the ability to do a thing by yourself. An example of something you can (probably) do autonomously is open a door. You need no-one's help to walk over and manipulate the door in such a way that it opens. Not all everyday activities are so simple; things become more complicated when you talk about switching on a light. You can throw the light switch by yourself, but you still rely on there being an electricity grid maintained by humans (unless you happen to be off-grid) for the light to come on. You do not make the light go on by yourself. The other agents in the process are hidden from you and know nothing of your actions at the light switch (apart from a very small blip in energy usage), but they are still required for the light to go on. So you cannot turn on the light autonomously.

 

A mental concept useful in talking about autonomy is the capability footprint of an activity: roughly, the volume of physical space required to do that activity. If that footprint contains other agents (or actors maintained by another agent) then you are not autonomous in that activity. If agents are involved but there are sufficient numbers of them and they have an incentive to carry on doing the activity, then you can treat them like the environment. You only lose autonomy when agents can decide to stop you performing an action.

An example of people relying on lots of agents while performing a task that seems autonomous is breathing. We rely on plants to produce the oxygen we breathe, but there is little chance that all the plants will one day decide to stop producing oxygen, so we can still be said to breathe autonomously. The free market in its idealised state can be seen the same way (if one company decides to stop selling you a product, another will fill the gap). However, in the real world natural monopolies, government regulation, intellectual property and changing economic times might mean that products are no longer available to you. So we are going to assume that no humans can be involved in an autonomous activity.

 

So for our light switch example, the footprint includes the wiring in your house, the wiring of the grid, the people maintaining the grid and the people maintaining the power stations (and the miners and riggers supplying them). So you are not autonomous in that activity.

 

Another example is navigation. If you rely on GPS, your capability footprint expands to include the satellites in orbit; these are actors maintained by another agent, which may stop maintaining them (or degrade their performance) at its whim. If you rely on a compass and map, your capability footprint expands to the molten iron core of the Earth, but you are autonomous, at least with regards to this activity, because that does not rely on another agent.

 

Trying to make individual activities autonomous is not very interesting. For example, being able to navigate autonomously does not insulate you from catastrophe if you cannot produce your own food. So we need the concept of the important activities in your life: those things that sustain you and give your life meaning. These form the vital set of activities. We can get an idea of how much you rely on others by taking the non-autonomous capability footprint of each activity in the vital set and taking the union of them, to get a non-autonomous vital footprint. This is something we can try to minimise: the larger your vital footprint, the more easily your essential activities can be disrupted and the more agents you rely upon. But it doesn't capture everything people value. People, myself included, choose to use Google rather than setting up their own mail servers. So we need to look at what is inside each vital set to get at the reason why.

 

Some vital capability sets allow you to do more things than others; you can achieve a very small footprint if you adopt stone-age technology, but the number of activities you can do is limited. Being able to do more things is better, as it makes us more adaptable, so we need a measure that captures that. The vital capability set has a size, the number of activities you can perform, so we can divide that by the footprint to get the vital capability density. This measure captures both intuitions: that doing more things is good, and that doing things that are less spread out and intermingled with other people is good.
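The footprint and density measures above can be sketched in a toy model. Everything below is invented for illustration: the activities, the agent names and the footprint sizes are all assumptions, and physical footprints are crudely approximated by single numbers rather than regions of space.

```python
# Toy model of the measures defined above. Each activity in the vital set
# maps to the set of external agents in its capability footprint (an empty
# set means the activity is autonomous) and a rough footprint size.
vital_set = {
    "open door":    {"agents": set(),                       "size": 1},
    "light a room": {"agents": {"grid operator", "miners"}, "size": 500},
    "navigate":     {"agents": set(),                       "size": 10},  # compass + map
    "email":        {"agents": {"google"},                  "size": 200},
}

# Non-autonomous vital footprint: the combined footprint of every activity
# that relies on other agents (the union of their footprints, here
# approximated by summing sizes).
non_autonomous = {name for name, act in vital_set.items() if act["agents"]}
vital_footprint = sum(vital_set[name]["size"] for name in non_autonomous)

# Vital capability density: the number of activities you can perform
# divided by the total footprint they require.
density = len(vital_set) / sum(act["size"] for act in vital_set.values())

print(non_autonomous)   # the activities that reduce your autonomy
print(vital_footprint)  # 700
print(density)
```

On this toy model, swapping Gmail for a self-hosted mail server would empty the "email" agent set and shrink the non-autonomous vital footprint, while dropping the activity altogether would shrink the footprint but also lower the density, which is the trade-off the measure is meant to expose.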

 

The history of human development has been one of increasing both the vital capability footprint and the size of the vital capability set, with the set apparently growing faster, so the vital capability density has probably been going up (these things are hard to measure; it is easy to see the direction of change, less easy to measure the magnitudes). The current economic and political system seems very good at increasing the vital capability set, so there is little need for us to do work in that direction. But the vital capability footprint has been expanding, which is the wrong direction, and it seems set to keep going that way, because shrinking it is not incentivised by our current system.

 

Companies are incentivised to keep control of their products and revenue streams so that they can get a return on their investment and stay solvent; trade is the heart of capitalism. This can mean moving towards a larger vital capability footprint. You can see this in the transition from shrinkwrapped software that you own to Software as a Service. There are some products that make moves towards shrinking the footprint, such as solar panels. However, what you really need to be independent is the ability to manufacture and recycle solar panels for yourself; otherwise you are only energy independent for the lifetime of those panels.

 

The only people likely to work on technologies that reduce the vital capability footprint are the military and the space industry, neither of which will necessarily democratise the technology or has incentives to make things autonomous over the long term.

 

So there is potential work here to improve the human condition that would not get done otherwise. We can try to help people shrink their vital footprint while maintaining or expanding their vital capability set, with the goal of allowing each human to increase their vital capability density over the long term. This is what I will mean when I talk about increasing humanity's autonomy.

 

What is to be done?

 

To increase humanity's autonomy successfully, we will need to figure out how to prevent the negative aspects of making people more independent and capable, and to create advanced technologies that do not exist today.

 

It has been hypothesised by Pinker that the increased interdependence of our society is what has led to the long peace: we rely on other people for things necessary for our livelihood, so we do not want to disrupt their business, as that disrupts our own lives, the story goes. If that mechanism is important, we would lose it. There are also risks in giving people increased abilities; they might do things by accident that have large negative consequences for other people's lives, such as releasing deadly viruses. So we need to make sure we have found ways to mitigate these scenarios, so that increased autonomy does not lead to more chaos and strife.

 

This is more pertinent when you consider what technologies are needed to reduce the vital footprint. The most important one is intelligence augmentation. Currently our economy is as distributed as it is partly because of the complexity of dealing with all the myriad things we create and how to create them. People and companies specialise in doing a few things and doing them well, because it reduces the complexity they need to manage. So to reduce the size of the vital footprint, you need people able to do more things, which means increasing their ability to manage complexity, which means intelligence augmentation. What exactly this looks like is not known at this time. Initiatives like Neuralink seem like part of the solution. We would also need computers we actually want to interface with: ones that are more resistant to subversion and less reliant on human maintenance. We need to deal with issues of alignment of these systems with our goals, so that they are not external agents (reducing our autonomy), and also with the issues around potential intelligence explosions. I am working on these questions, but more people would be welcome.

 

Making us reliant on some piece of computer hardware that we cannot make ourselves would not decrease our vital footprint, so we need to be able to make it ourselves. Factories and recycling facilities are vast, so we would need to shrink these too. There are already hobbyist movements for decentralising some manufacturing, such as 3D printing, but other manufacturing is still heavily centralised, like solar panel construction, metal smelting and chip manufacturing. We do not have any obvious pathways to decentralisation for these things. You also want to make everything as recyclable as possible; if not, your vital footprint grows to include both the mines and the places where you dispose of your rubbish.

 

Other EA views and autonomy

 

I don’t take increasing autonomy to be the only human value, but it is interesting to think about how it might interact with the other goals of the effective altruist community. Each of these probably deserves an essay, but a brief sketch will have to do for now.

 

AI risk

 

The autonomy view vastly prefers one outcome to the AI risk question. It is not in favour of creating a single AI that looks after us all (especially not via uploading), but prefers the outcome where everyone is augmented and we create the future together. However, if decisive strategic advantages are possible, and humans will definitely seek them out or create agents that do so, then creating an AI to save us from that fate may be preferable. But trying to find a way forward that does not involve that is a high priority of the autonomy view.

 

Poverty reduction

 

This can be seen as bringing everyone up to a more similar set of vital activities, so it is not in conflict. The autonomy view of decreasing the footprint points at something to do even once everyone has an equal vital set. Aiming for that might be something currently neglected which could have a large impact; for example, a human-operated, self-replicating solar panel factory could have a large impact on Africa's development. Also, trying to reduce poverty by getting people involved in a global economic system, one which seems to have less need of people in the future, may not be the most effective long-term strategy.

 

Suffering reduction

 

Increasing every human’s vital set to be similar should allow everyone to do the activities needed to avoid the same suffering. So in this regard it is compatible.

 

However, my view of autonomy is not currently universal, in that I am not trying to increase the autonomy of animals in the same way as I want to increase humanity's. I'm not sure what it would look like to try to give hens the same autonomy as humans. This is partly because I rely on people choosing more autonomy, and I'm not sure how I could communicate that choice to a hen. It is also that humans are currently not very autonomous, so the work to help them seems enormous. Perhaps the circle of moral concern will expand as autonomy becomes easier.

 

In conclusion

 

I hope I have given you an interesting view of autonomy. I mainly hope to spark a discussion of what it means to be autonomous. I look forward to other people’s views on whether I have captured the aspects of autonomy important to them.

 

Thanks to my partner for great philosophical discussions about this concept with me, and to someone from an EA meetup in London who saw that I was mainly talking about autonomy and inspired me to try to be explicit about what I cared about.