
Summary

Algorithms as significant, constant factors in our material lives are very new things. Today, algorithms guide the majority of our social and economic exchanges; twenty-five years ago, they guided almost no aspect of our daily social and economic activity. This modern phenomenon of algorithms is dominated by self-interested discourse, under the auspices of neutrality in utility. But where there is a void of values, values-lessness takes hold. Values-lessness is antithetical to other-regarding behaviors like altruism. Our widespread use of algorithms, their increasing sophistication, and the amount of time we spend with them make it increasingly likely that we will lock in this values-less system before humanity ever manages to reach an AGI reality. If the EA project reorients towards systems [discourse] change, EAs can potentially do something about this.

Reading note

This essay is an application of an already posted critique of EA, Altruism is systems change, so why isn’t EA? 

I read the first half of What We Owe the Future over the weekend and Part II really stuck with me, particularly the discussion of value lock-in. So, I thought I’d add to my critique with this application of it to the concept of values lock-in. I had also previously applied my critique to beneficiary agency in giving, Reciprocity and the causes of diminishing returns, if you’re curious. 

You can view everything in brackets [] as a helper word (or words) in case the concept being expressed is unfamiliar. These words can also be read as part of the text, of course. You can also just ignore everything in brackets, if that makes it easier for you. Even better, insert [your own]...

Given this is an application of my already existing critique, I’m copying some relevant key positions from that critique to hopefully make this essay a bit easier to consume. New positions, for this specific argument, are noted. 

Key positions

  • New: the advent and acceleration of algorithms in defining market discourse is currently locking in a self-interested, values-less system, which is antithetical to altruism. This is somewhat dependent upon AI, but mostly it's about exposure, and the frequency of that exposure, to the self-interested discourse.
  • New: privilege makes observing and questioning systems more difficult.
  • For this critique, systems change and systemic change are not the same thing. EA references to systemic change do not attempt to reimagine our shared, reality-shaping discourse. Rather, they focus on modifications to the discourse as it is, without challenging it, to make it possibly better (e.g., policy changes). Systems change is broader and is a challenge to the system as it is - a reassessment, revision, and reassumption of what we share as reality.
  • Systems change, when it happens, is rarely dramatic and oftentimes not even noticeable until it has already happened. It is, however, almost always intentional. And when it's not intentional, there's likely no choice in it happening (see x-risk concerns), which is also when it can be dramatic.
  • We occupy a self-interested system that was created for and by the study of economics and those interested in self [the self-interested?]. There is something other than, maybe even prior to, this self-interested system that is fundamentally more compatible with altruism than the current system. A lot of economic theory points to a system of reciprocity in which altruism can be seen as a facet of strong reciprocity.
  • Self-interest is antithetical to altruism. It's contradictory to reify the self-interested system by using its tools axiomatically and treating the question of systems change ambivalently, all while claiming altruistic intent.

Privilege and systems awareness

A sort of general criticism of EA is that it is a privileged thing. This is a critique I think tends to stick a bit, because the most frequent proponents of EA are generally materially and socially privileged people (a quality of which they are all well aware): elite university academics, graduates, well-paid techies, billionaires who attended elite universities, etc. This notion of privilege feeds into another notion: altruism is something you get to do when you make it or have no material worries, like some sort of materialistic human advancement. If you collect enough gold coins, you get to level up. There's a bit of a feedback loop here that I imagine could make communicating EA outside of its existing audience difficult and frustrating. It's also mostly inaccurate. Altruism only seems like a privileged opportunity and only seems like a possible human advancement because we're immersed in an artificial, self-interested discourse that doesn't give much room for non-elites to contemplate collective benefit the way EA folks have been allowed to through their actual privilege [leisure].

This might sound harsh and reflexive to some. I am, however, an ardent supporter of being altruistic and doing so as effectively as possible. It's a primary motivation in my life and work, which makes me a fairly natural supporter of the EA project. However, I too have certain privileges that allow me, to an extent, the leisure to examine life's big questions. The EA project's problem here, at least communicatively, is that most people don't have this opportunity - and by most, I mean the vast, vast majority. And because EA as a project is ambivalent about the system that keeps people from that opportunity [actual privilege], it makes the people whom EA proponents are trying to reach pretty suspicious of EA - when they have the time to even properly consider the pitch, that is. There are also whole other groups of people out there with their own certain types of privilege - scientists, for instance - who are just starting to take notice of EA and question not only its lack of critical systems treatment, but also its [standard economic] reductionist tendencies, which likely result from its general lack of critical systems treatment. Self-interest, economically speaking, is necessarily reductionist, and this is the system EA finds itself within and chooses to treat ambivalently.[1]

I think it's probably hard for some EA proponents to be critical of a system that affords them their leisure [time to think]. Intuitively, it's a somewhat natural human position for anyone with leisure. It's the self-interested system that has bolstered the system-reifying institutions with billion-dollar endowments, elite networks and so forth, so why question that which allows for your deep thought? This self-imposed, self-interested construct is also an artifact; it's not any more humanly true, for the critic or the leisured, than anything else created by people. No one has to just accept this any more than they have to just accept an artificial system or any of its other artifacts.

It makes some sense, then, that this sort of willful systems blindness would present as systems agnosticism [ambivalence], completely skipping over something that is at the root of so many of the problems EA seeks to address. A pretty solid example of how this blindness manifests can be found in Will MacAskill's concern about values lock-in in What We Owe The Future (WWOTF): a scenario triggered by AGI advance that could potentially lock in the wrong values.[2] Values lock-in is a concern we should all be focused on, and not just because of one potential future, but because it's something that is already happening. It's accelerating, too, and it hasn't taken anything remotely as futuristic as AGI to get us there - which should frighten anyone focused on longtermism.

Values-less algorithms

As MacAskill notes in WWOTF, Google is already using AI in its search product - which is true, but that's where MacAskill's treatment of AI in search ends. Here's a bit more information. Google regularly uses three AI models (RankBrain, neural matching and BERT) that provide language processing that also factors into Google's ranking calculations, and a fourth (MUM) that is focused mostly on language processing and finding consensus in results, for now.[3] Google has several more potential AI models in development that it could or will add to its algorithm. The use cases for these AI models so far have been generalized, but also quite specialized towards language comprehension, with the exception of MUM, which is looking to minimize misinformation.[4] They are factors in Google's total ranking algorithm, but they are nowhere near determinative for most rankings, nor do they undermine the basic theory of Google's search algorithm; in fact, they support it. Google's algorithm, for the most part, is still just a series of processes - calculations of factors that, ultimately, tell you what information is important. No AGI, no overall complex AI decision making, just math, and not even particularly sophisticated math at that. Google's algorithm, boiled down, is essentially weighted ranking.
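To make the "weighted ranking" point concrete, here is a minimal Python sketch of what a popularity-weighted ranking process looks like in principle. The pages, signals and weights are all hypothetical assumptions of mine, not Google's actual factors; the point is only that the core process is a weighted sum followed by a sort, not complex decision making.

```python
# Hypothetical pages and ranking signals (illustrative values, not real data).
PAGES = {
    "page_a": {"backlinks": 400, "keyword_match": 0.4, "freshness": 0.1},
    "page_b": {"backlinks": 120, "keyword_match": 0.8, "freshness": 0.3},
    "page_c": {"backlinks": 15,  "keyword_match": 0.9, "freshness": 0.9},
}

# Assumed weights -- the relative importance of each signal is my invention.
WEIGHTS = {"backlinks": 0.6, "keyword_match": 0.3, "freshness": 0.1}

MAX_LINKS = max(p["backlinks"] for p in PAGES.values())

def score(signals: dict) -> float:
    """Weighted sum of signals, with backlinks normalized to the 0..1 range."""
    normalized = dict(signals, backlinks=signals["backlinks"] / MAX_LINKS)
    return sum(weight * normalized[name] for name, weight in WEIGHTS.items())

# "Ranking" is just sorting by the weighted score.
for page in sorted(PAGES, key=lambda p: score(PAGES[p]), reverse=True):
    print(page, round(score(PAGES[page]), 3))
```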

The scary part is that, even without sophisticated machine learning being a determinant factor in search, Google is telling you, and everyone else, what to [not] care about and what's [not] important, and it's shaping who we are and how we see ourselves and the world. And what is it that Google is telling you is most important? What's popular, of course. Because what is popular is what sells. The core of Google's algorithm, from inception to date, is ranking popularity: which sites have more backlinks from other sites that themselves have more backlinks Google determines are relevant, where what is relevant is what is popularly determined to be relevant. The MUM AI model, for instance, seems to be reinforcing and enhancing this. It is, fundamentally, that simple. There are no explicit values in this process, which, like all absence, actually means something. It's a position taken against values being meaningful in search, at least.
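For readers who want the mechanics spelled out, below is a toy, pure-Python sketch in the spirit of PageRank, the backlink-popularity idea at the core of Google's original algorithm. The link graph and iteration count are illustrative assumptions of mine, not real data. Notice that nothing in the computation encodes any value other than popularity.

```python
# Hypothetical link graph: page -> list of pages it links to.
links = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}

damping = 0.85                               # standard PageRank damping factor
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start every page off equal

for _ in range(50):                          # iterate until scores stabilize
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share        # pass "importance" along each link
    rank = new_rank

# Higher score = ranked as more "relevant", purely because more linked-to.
print(sorted(rank.items(), key=lambda kv: kv[1], reverse=True))
```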

Reductionist intuition would probably make one wonder what the problem is - tools should be values-less [neutral]. But neutral in this instance doesn't mean enforced or conscientious neutrality; it means ambivalence, and just as ambivalence towards the self-interested system reinforces self-interested [lack of] values, so too does it reinforce a lack of values in Google's 'neutral' search engine. A multitude of studies, qualitative and quantitative, have demonstrated exactly how Google's algorithm, while designed to be neutral about everything, is actually reinforcing and helping set biases, especially social biases. In 2017, for example, Safiya Umoja Noble demonstrated biases in search suggestions, images, and identity-based search results, among many other stereotype-reinforcing outcomes, all underpinned by Google's supposedly neutral algorithm.[5] Umoja Noble has identified search suggestions that transmit derogatory messages to marginalized people, about themselves. She's found image searches that reinforce image stereotyping for ethnic or racial minorities. She's identified search results that push users toward content filled with implicit bias, based on unrelated, innocuous prompts and terms. And much more. As Umoja Noble put it,

…there is a missing social and human context in some types of algorithmically driven decision making, and this matters for everyone engaging with these types of technologies in everyday life. It is of particular concern for marginalized groups, those who are problematically represented in erroneous, stereotypical, or even pornographic ways in search engines and who have also struggled for nonstereotypical or nonracist and nonsexist depictions in the media and in libraries.[6]

When Umoja Noble says "everyone," she means it. While these biases directly harm marginalized individuals, they also quite directly, although unintentionally, reinforce stereotypes within typically privileged audiences, and those audiences are likely unaware of it. This agency-robbing harm just barely scratches the surface of the algorithmic values-less-setting process, though.

Google handles over 90% of all search engine traffic globally, which translates to 90% of all search-engine-based advertising. Because of this search dominance, Google also handles just shy of a third of all display advertising globally, with advertisers largely taking advantage of programmatic advertising opportunities.[7] And while Google isn't advertising bias, it isn't advertising values either. Materialist consumption represents the vast majority of Google's advertising activity; the same self-interested materialism that is quite starkly antithetical to altruism and other forms of reciprocity. Again, where values are absent, values-lessness [self-interest] fills the void. Most search engine purveyors operate on a similarly synergistic search > programmatic advertising model. Other big advertisers, like Amazon, programmatically remarket based on product searches placed within their selling platforms. Basically, people who have unfettered access to online search and commerce are being constantly exposed to bias and material self-interest [values-less discourse], most of it implicit, reducing their agency.

Social media algorithms are worse. Because of the closed-system nature of social media platforms, the selling goal for most of them is to keep people engaged with their platform as long as possible and as often as possible. Increased exposure means increased revenue. To keep attention, social media platforms employ a variety of tools to trigger behavioral responses in their users. Likes, notifications and gamification trigger reward centers in the human brain, while content siloing appeals to conditioned and innate social grouping behaviors. This isn't exactly news to people after the scrutiny social media has received over the last few years. What a lot of people don't realize, however, is how simple and easy behavioral modification really is. Many people assume that they are too intelligent or savvy to be persuaded by marketing and social bias techniques, but people across the board, privileged or not, highly intelligent or typical, are extremely susceptible to discursive bias formation through mere exposure, the bandwagon effect, social norm formation and many more behavioral bias formation effects.[8] If you are a social media user and think you are impervious to these bias-forming effects, chances are you've already been affected.

Social media claims neutrality as well, just like search algorithm purveyors. And of course, just like with search, where there is a void of values, values-less discourse [the self-interested system] fills it. Recent research has shown that social media not only further polarizes people [the dualistic mode of the self-interested system], but has also made people far less open to those they view as other.[9] Other-regarding behaviors are suffering across the board when people perceive others to be outside of their social or political group, which, from at least a few theoretical perspectives, would mean other-regarding behaviors are ceasing to exist at all. For readers not familiar with 'other-regarding behaviors' from the economics taxonomy, the term refers to things you do for others (e.g., altruism).

The acceleration of all of this might be the most concerning aspect of what I've described, at least for me. There's the basic observation that, a little over two decades ago, we barely had search algorithms and social media algorithms were not a thing at all. But on top of this, there's the acceleration in the amount of time people are committing to spending with these algorithms, in addition to the ever-increasing sophistication of these algorithms in their ability to provoke and even command behavior. And all of it is completely devoid of values. There are currently over 5 billion internet users on the planet who are spending as much time online, on average, as they do sleeping - every single day.[10] That's about 7 hours a day being inundated with the self-interested [values-less] discourse of algorithms and all of the biases, polarization and reductionist concepts these algorithms bring with them.

There's a lot of scientifically derived psychology that can be employed here to explain what happens to people with this level of exposure to certain things, views and systems. Mere exposure research, for instance, indicates that singular experiences, like just one experience with a good meal, can often generate preferences that last an entire lifetime.[11] That might seem like a silly and obvious example, but sustenance is a fundamental material concern - eating food is an act of self-interest. Mere exposure, of course, is bigger than this. Public relations and advertising professionals have been using its basic principles for generations to persuade the otherwise unpersuadable - at an exponentially lower clip than what we are currently experiencing.

It would probably be pretty easy to dismiss the observations I've presented here as just a forceful anti-consumerism argument, but that would be a superficial view. Anti-consumerism tends to deal with the material impacts consumerism has on people and the environment. That's certainly problematic, but I am interested here in the impact this overwhelming imposition of the self-interested system is having on human agency. How much of it are people able to absorb, for instance, before they are incapable of considering other ways of being, other discourses? I suspect a lot of EA adherents might read this and think that they are able to think beyond the avalanche of self-interested discourse, so others should be able to as well. But that argument forgets privilege and how the EA project's ambivalence towards systems change is not making it any easier for other people to have the leisure to contemplate other systems and ways, like EA adherents contemplate altruism. This is values lock-in, happening right now, and currently the EA project isn't doing anything explicit or obviously intentional about it, which makes longtermist concerns about values lock-in maybe moot - unless something's done about it, that is [systems change].

Why are you teaching your robot to be self-interested?

By the time humans get to a point where AGI determining our values for us is a near-term potential, it might very well be too late. We will likely have already locked in a values-less, self-interested system and, even more frighteningly, we will likely have locked in these values for the AGI we create as well. Real altruism will likely not be part of the equation. I am not an AI professional, but given my focus on our current algorithmically driven world, I pay pretty close attention. My observations tend to confirm what I have explained here and the conclusion I lean towards. For instance, reinforcement learning (RL) models are becoming more and more active in AI development, and pretty much all of these models are built, from the ground up, to emulate self-interest in agent motivation as well as in agent interaction with other agents. There are a few fairly recent instances where big groups like Google and Intel have been programming their RL agents towards cooperation preferences, but, importantly, only in the service of self-interest - which, as you know, is not altruistic, to say the least.
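As a hedged illustration of what I mean by self-interest being baked in from the ground up, here is a toy Python sketch of reward design. This is my own framing, not any particular lab's code, and the `care` parameter in the other-regarding version is an assumption of mine rather than a standard from the RL literature.

```python
def self_interested_reward(own_payoff: float, other_payoff: float) -> float:
    """Standard setup: the other agent's outcome carries zero weight."""
    return own_payoff

def other_regarding_reward(own_payoff: float, other_payoff: float,
                           care: float = 0.5) -> float:
    """An alternative in which the other agent's payoff is explicitly part of
    the reward, weighted by `care` (an assumed, illustrative parameter)."""
    return own_payoff + care * other_payoff

# A simple exchange in which helping costs me 1 but gives the other agent 3:
print(self_interested_reward(-1.0, 3.0))   # -1.0 -> a purely selfish agent never helps
print(other_regarding_reward(-1.0, 3.0))   #  0.5 -> an other-regarding agent does
```

Whatever learning happens downstream, an agent trained on the first reward will only ever "cooperate" when cooperation happens to raise its own payoff; the second builds other-regard into the objective itself.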

What’s an EA to do?

If you skipped the reading note, then you missed that this essay is an application of a broader critique of EA I made, Altruism is systems change, so why isn’t EA? You can probably guess from the title that I argue in that piece that EA as a project is currently pretty ambivalent about actual systems change (not to be confused with how EA defines systemic change) and that if it wants to meet its own objectives, it needs to shift course. This same recommendation applies here. If you’re going to prevent AGI from locking in the wrong set of values - including the absence thereof - then you need to start by addressing the lock-in of values now, where and when it is already happening. So, how do you do that? Here are some thoughts:

  • Inject reciprocal discourse into everything the EA project does. It's a natural fit, given that altruism is functionally an aspect of reciprocity.
  • Study algorithms and other forms of communication from an altruistic perspective. Promote the results.
  • As I mentioned, I am no AI expert, but I have looked at and do watch the AI space, and I am pretty unable to identify many AI projects that are building models that start with reciprocity (within which I and others, like a lot of economists and political economists, include altruism). Like the RL models I mentioned, they all start with self-interested modeling or agents and might progress to cooperation, but only to reciprocal behavior in the service of self-interest - 'emergent reciprocity', which is likely a false construct, as far as human development is concerned (see the toy sketch after this list). Maybe it's a naïve question, but I wonder why there is a lack of modeling driven first by other-regarding rather than self-regarding behaviors. And maybe this is happening, but I don't necessarily see it emphasized within the EA project. It seems to me that the vast majority of AI modeling currently underway, which the EA project also observes, is approaching agents from a homo economicus idea of human nature, which, as I argued and others are arguing, is quite flawed, at least when you're describing the only other advanced sentients we know of (humans).
  • AI academia also seems pretty lacking in perspective on reciprocity as foundational, rather than emergent. In an admittedly limited, but recent, search for academic articles dealing with reciprocity in AI development, I only managed to find a handful dealing with reciprocity as a basis for AI modeling and even fewer dealing with reciprocity as a value [system] within AI theory. A lot of it was dated and most of it was fairly obscure, if citation numbers are an indicator. Maybe this could change?
  • This last one might be off-putting given I am currently arguing for a forceful, critical EA treatment of discourse [systems change], which is already a lot of thinking and work. But broadening the EA project beyond giving and future considerations to focus on aspects of the world where there is more discourse and, therefore, more opportunity to change hearts and minds [systems change], might be necessary. In the above, I referenced algorithms, which operate primarily in markets. This is one example; there are others, but markets are where most human interaction takes place - especially in our hyper-self-interested, increasingly algorithmic system.
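As referenced above, here is a toy sketch of the difference between reciprocity as foundational and cooperation derived from self-interest, using an iterated prisoner's dilemma. The agents, payoffs and round count are illustrative assumptions of mine, not drawn from any published AI framework; the point is only that a reciprocity-first agent cooperates from the start and mirrors its partner, rather than arriving at cooperation (or not) by maximizing its own payoff.

```python
PAYOFFS = {  # (my move, your move) -> (my payoff, your payoff); C = cooperate, D = defect
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def reciprocal_agent(partner_history):
    """Reciprocity-first: cooperate initially, then mirror the partner's last move."""
    return partner_history[-1] if partner_history else "C"

def self_interested_agent(partner_history):
    """Always defect: the dominant move if only your own payoff counts."""
    return "D"

def play(agent_a, agent_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = agent_a(hist_b), agent_b(hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(reciprocal_agent, reciprocal_agent))       # (30, 30): mutual benefit from the start
print(play(reciprocal_agent, self_interested_agent))  # (9, 14): defection "wins" while both do worse
```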
     


 

  1. ^

    Hard science has generally stopped prioritizing reductionism in favor of concepts like emergence, holism and, of course, complex systems. Economics hasn't quite caught up, and a lot of people still tend to treat economic reductionism axiomatically, especially some classical economists, non-economists and non-political economists.

    It's also important to define what reductionism means in this essay and the general critique. Reductionism has a lot of varied meanings in different settings and their discourses. The use of reductionism in this critique is specific and limited to positivist and economic reductionism as they relate to economic or human exchange discourse, as well as the individual within these discursive settings. This use should not be conflated with Derek Parfit's reductionist view of personal identity. My position for this critique is that the social cannot be reduced in truth-defining ways and that reductionism is just a tool to further or aid understanding, not understanding in and of itself, in the social [discursive] context. This contention and the arguments presented here might raise questions and have potential implications about the nature of the individual outside of human exchange settings, especially since a primary subject here is self-interest. I would suggest, however, that there is potentially more alignment between Parfit's reductionism, which dissolves the notion of self, and what I am arguing than there is conflict (e.g., to what extent are the problems of the impersonal also the problems of the self-interested system [discourse]?). There is not, however, room to discuss all of this in this critique.

  2. ^

    See Part II in: MacAskill, William (2022), What We Owe The Future, Basic Books, New York.

  3. ^

    Schwartz, Barry (2022), How Google uses artificial intelligence in Google Search, Search Engine Land, retrieved from: https://searchengineland.com/how-google-uses-artificial-intelligence-in-google-search-379746

  4. ^

    Nayak, Pandu (2021), MUM: A new AI milestone for understanding information, Google, retrieved from: https://blog.google/products/search/introducing-mum/

  5. ^

    Umoja Noble, Safiya (2017), Algorithms of Oppression, New York University Press, New York.

  6. ^

    Ibid., p. 22.

  7. ^

    Kemp, Simon (2022), Digital 2022: April Global Statshot Report, DataReportal, retrieved from: https://datareportal.com/reports/digital-2022-april-global-statshot

  8. ^

    See, for example, Cinelli, Matteo, et al. (2021), The echo chamber effect on social media, Proceedings of the National Academy of Sciences, 118, 9.

  9. ^

    Bail, Christopher, et al. (2018), Exposure to opposing views on social media can increase political polarization, Proceedings of the National Academy of Sciences, 115, 37.

  10. ^

    Kemp, Simon (2022), Digital 2022: April Global Statshot Report, DataReportal.

  11. ^

    This is a fairly well understood aspect or result of mere exposure bias, but for a fairly classic example, see: Bornstein, Robert, et al. (1992), Stimulus recognition and the mere exposure effect, Journal of Personality and Social Psychology, 63, 4. For a food preference specific reference, see: Pliner, Patricia (1982), The Effects of Mere Exposure on Liking for Edible Substances, Appetite, 3, 3.

Comments

[anonymous]

Rereading this, I failed to draw a connection between time as a privilege and the lack of ability to understand how our ever-increasing exposure to algorithms is solidifying a values[less] system - in great part, because of a lack of time [leisure]. The point was not to knock privileged people, but to point out that even some of the most privileged among us are unaware of how we are being locked in to a values[less] system - so how can we expect less privileged people to be aware of this agency-reducing process? This is a parallel to the 'fight or flight response in the absence of reciprocity' metaphor from my initial critique.

If I were to rewrite this, I would update it to make this connection explicit, rather than implied, since this essay and the critique in which it is set are not a single document. I doubt the connection is apparent to anyone other than me, tbh.
