
This post is a contest entry for the Criticism and Red Teaming Contest.

EA as an AI safety case

Effective altruism is similar to the AI alignment problem in some sense: we're looking for what would be the greatest good for humanity, and we're searching for a way to safely and effectively implement this good. 

EA considers both of these tasks to be generally solved for the social activities of a public organization (otherwise we would not be doing anything at all), but at the same time completely unsolved for AI alignment.

However, EA runs into the same problems as AI safety: Goodharting, unforeseen consequences, wireheading (and the problem of internal conflicts). It seems surprising to me that the problems we are trying to solve for AI are treated as already solved for a human organization – an organization which, moreover, intends to solve them for AI.

Utilitarianism’s failure modes

One of these AI-misalignment-style failures of EA is the general acceptance of utilitarianism, as if we really knew what is good and how to measure it. Absolutizing utilitarianism as a moral principle is subject to Goodhart’s law: “When a measure becomes a target, it ceases to be a good measure.”

For example, if a king wants to know the welfare of his country, it is reasonable to survey what percentage of people are happy and to observe how this percentage changes depending on particular government decisions. However, it is a mistake to transfer this to an individual: a human may have a life goal that implies a large number of unpleasant moments – for example, sports, parenthood, climbing Everest or winning a war. If we absolutize the percentage-of-happy-people principle, then many state decisions begin to look absurd: an unnecessary increase in the population, pouring opium into the water, refusing to procreate, and declining to have an army.

Now I will list a few cases where utilitarianism fails:

The Trolley Problem in the fog

The trolley-in-the-fog problem is the following: I see the normal trolley, but the whole situation is covered in fog, and the five people I am going to save are at a greater distance, so I see them less clearly than the person I am going to kill by moving the lever. After I move the lever, it turns out that the five people were not real, but were just a bush on the tracks. The lesson here is that I should count not only the number of saved people, but also my uncertainty in this number, and such uncertainty is proportional to the distance to the people who are supposed to be saved. In many cases, such a fog discount could completely cancel out the expected gain in the number of saved people, especially if we account for typical human biases like overconfidence.
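A minimal sketch of this fog discount, assuming a made-up inverse-distance certainty function (none of the numbers come from the post; they only show how the discount can swallow the expected gain):

```python
# Illustrative fog discount: expected lives, with certainty about the far
# group decaying with distance (the inverse-distance form is an assumption).

def p_real(distance_m: float) -> float:
    """Assumed probability that what I see at this distance is real."""
    return 10.0 / (10.0 + distance_m)

expected_loss_if_pull = 1 * p_real(5)     # the one person I see clearly
expected_gain_if_pull = 5 * p_real(200)   # the five people deep in the fog

print(f"expected lives lost by pulling the lever:  {expected_loss_if_pull:.2f}")
print(f"expected lives saved by pulling the lever: {expected_gain_if_pull:.2f}")
# With these made-up numbers the fog discount eats the whole expected gain
# from the larger but far less certain group.
```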

The real solution to the trolley problem, as we know from memes, is either to derail the trolley by pulling the lever when only the first pair of wheels has passed the switch, or to find and stop the person who keeps designing such experiments.

Blurring the line between remote people and possible people

In the original trolley problem, all the people are real but stand on different tracks. In its real-life analogues, we often can’t observe the five people who will be saved, so it is a variant of the trolley-in-the-fog problem. Often they are not even born yet: if I invest in anti-malaria drug development, I am saving the lives of as-yet-unborn children.

Saving the unborn is great, but gradually EA blurs the line between remote people and possible people: if we have less pollution now, there will be fewer cancer deaths in future generations, and all future people are possible people, right?

But here the risk of illusions is high. A real person is real, but future possible people currently exist only in my imagination, and my imagination is likely to be affected by all kinds of biases, including selfish ones. As a result, I can start to think that I am saving thousands of people just by writing a post, and feel – I really had that feeling – a moral orgasm at my perceived extreme goodness.

An example of a bad application of trolley-problem-like thinking: a man is sentenced to death, but he says that he will ensure the birth of five new people through surrogacy if he is freed – should we release him? The problem is that exchanging real humans for a large number of possible people gives criminals carte blanche to do whatever they want, as long as it can be compensated later. And of course, the compensation can be postponed indefinitely and, in the end, will never happen.

The consequences of utilitarianism

In some sense, consequentialism contradicts utilitarianism. If everybody becomes utilitarian, they will constantly fall into the trap of trolley-in-the-fog-like problems, as most people can’t calculate the consequences of their actions.

From the utilitarian point of view, it would be better if most people were deontologists. Most people cannot correctly calculate the sum of the consequences of their actions – and therefore they will calculate expected utility in the wrong way, skewing the result towards their own benefit. Therefore, only leaders of countries or of charitable foundations should be utilitarians, and everyone will be better off if most people simply follow the rules.

The value of happiness is overrated

Another way to critique utilitarianism is to see that the value of happiness is overrated: happiness is only a measure of success. It is a signal for reinforcement learning. But learning also sometimes requires pain. There is nothing wrong with small and mild pain. Only prolonged and/or unbearable suffering is terrible, because it is destructive to the individual.

Covering the universe with happy observer-moments is the same failure mode as covering it with pictures of smiley faces, which was once suggested as an example of false AI friendliness.

Utilitarian EA treats the disvalue of pain as a linear function of its intensity (see the specks vs. torture debate); I view it as a step function.
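A toy illustration of the difference (the intensities, the number of specks and the threshold are made-up assumptions, not a claim about how EA actually aggregates):

```python
# Linear vs. step aggregation of pain in the specks-vs-torture setup.
# All numbers and the threshold are illustrative assumptions.

SPECK = 1e-6          # a barely noticeable dust speck
TORTURE = 1.0         # prolonged, unbearable suffering
N_SPECKS = 10**9      # one speck each for a billion people
THRESHOLD = 0.5       # assumed level at which suffering becomes destructive

def step(x: float) -> float:
    """Only count suffering above the destructive threshold."""
    return x if x >= THRESHOLD else 0.0

linear_total_specks = N_SPECKS * SPECK        # 1000.0
linear_total_torture = TORTURE                # 1.0

step_total_specks = N_SPECKS * step(SPECK)    # 0.0
step_total_torture = step(TORTURE)            # 1.0

print(linear_total_specks > linear_total_torture)  # True: specks "outweigh" torture
print(step_total_specks > step_total_torture)      # False: specks never add up to torture
```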

Linearity

One of EA’s failure modes is the linearity of utility functions. In real life, the personal utility of something is asymptotic: the more of something I get, the less valuable it is to me, so after a certain amount I have enough and turn to something else. This ensures a balance between different desires and needs.

Linearity of utility can produce undesired outcomes. For example, if ten elephants can be saved for a total of $10,000 but only one rhino for the same price, then the rhinoceros will go extinct, as all investment will go into saving elephants. As a result, biodiversity will decrease. But in a real-life refuge, having too many elephants is bad; they may even be hunted.
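A toy sketch of how linear versus diminishing marginal utility allocates repeated grants in the elephant/rhino example above (the number of grants and the 1/(1+n) form of diminishing returns are my assumptions):

```python
# Toy grant allocator: linear vs. diminishing (concave) marginal utility.
# Costs follow the example above; everything else is an illustrative assumption.

N_GRANTS = 20   # twenty $10,000 grants given out over the years

def run(marginal_utility):
    saved = {"elephant": 0, "rhino": 0}
    for _ in range(N_GRANTS):
        # each $10,000 grant buys either ten elephants or one rhino
        gain_elephants = sum(marginal_utility(saved["elephant"] + i) for i in range(10))
        gain_rhino = marginal_utility(saved["rhino"])
        if gain_elephants >= gain_rhino:
            saved["elephant"] += 10
        else:
            saved["rhino"] += 1
    return saved

linear = lambda n: 1.0             # every extra animal is worth the same
concave = lambda n: 1.0 / (1 + n)  # each extra animal of a species is worth less

print("linear: ", run(linear))     # every grant goes to elephants; rhinos go extinct
print("concave:", run(concave))    # both species end up being funded
```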

A paperclip maximiser is an example of an AI that falls into the trap of linearity: the more paperclips, the higher the utility.

Future evolution is good

By measuring good through the amount of pleasure, utilitarianism ignores human complexity and the need for future evolution. In biology, pleasure helps survival, and survival ensures the appearance of descendants and, eventually, the capability of the species to adapt and evolve. In other words, pain and pleasure are small working parts of evolution.

Longtermism, astronomical waste and surviving the end of the universe

The problem of the consequences of consequences: something good may have bad consequences, but those bad consequences may lead to an even greater good. It would be necessary to calculate the whole future in order to take into account all the consequences, and this is impossible. Here the butterfly effect appears – any action has endless consequences. In the end, we are either myopic, or, if we start to calculate consequences, we become longtermists.

I am a longtermist, but my view of longtermism is different. There is an idea that we should cover the whole universe with simulations full of happy human minds (escaping astronomical waste), and after we do so, we will “fulfil human endowment” and “achieve full human potential” and can then happily die in the heat death of the universe. My view is that finding ways to survive the end of the universe is more important and will eventually give us a chance to have even more happy minds. I am also skeptical that merely creating an astronomical number of human minds is good (except in the case where we run resurrection simulations).

The more humans there are, the lower the relative value of each one, and since humans are social and status-driven beings, knowing that you are very insignificant will be a heavy moral burden. Even now, more art is created than I can consume by many orders of magnitude, and this is embarrassing for the creators as well as for me. I think that if people live longer and have time to evolve into more complex and diverse beings, this will be less of a problem. Also, the thought that I could have unlimited potential for evolution but have to die to give way to other “happy minds” is painful, not only for me but for those minds too.

Where are the effects?

My friend asked me: where are the effects? While EA calls itself “effective”, we rarely see its effects, because the biggest effects are supposed to happen in the remote future, in remote countries, and only statistically. This creates a feedback problem: we never know whether we have saved some future generation. Weak feedback signals can easily be hacked by malicious egoistic agents who just want to be paid.

EA as arbitrage of the price of life in different countries

In some sense, we can view Effective Altruism as cost-of-life arbitrage. EA-as-giving only works if someone earns more money than they need for survival. And EA-as-giving works as long as there is a difference between the rich and the poor, and especially between rich and poor countries. That is why the most impressive examples of EA efficiency are about Africa, as saving a life there is cheaper. But in order to give, you need to earn, and you can earn a lot only under capitalism. Thus, effective altruism needs a system in which there is inequality and exploitation.

EA as a world government 

But at the same time, EA acts like a wannabe world government, since it takes care of all people and future generations. Because of this, it comes into conflict with the goals of local states. It is thus unsurprising that EA tries to go into politics, as governments distribute very large amounts of money.

The market is more efficient than a gift

EA promotes gratuitous aid, but such aid is less effective. Mutual aid will eventually win over gratuitous aid. If I constantly help certain animals, I spend my resources but get no resources in return. Ultimately, I won’t be able to help anymore. However, you can help those who can then help you. Such an exchange can go on for longer and ultimately generate more good. For example, a foundation gives a loan to a poor person, who then creates their own business and repays the loan, and the money can be lent to another person.

EA’s misalignment: we forget about death

In general, this has probably already been said elsewhere – but EA misses the main values of people. These are the need to live longer and not to die – and the secret repressed dream: the resurrection of the dead. At the same time, traditional religions are not afraid to make such promises, although only the super-technologies of the future could realize them.

That is, the focus of EA is misaligned with human values – the main need of people is not “happiness”, but not dying. A mortal being cannot be happy: the thought of death is a worm within. It would be better to redefine happiness as harmonious eternal development in a perfect world. EA largely ignores the badness of death.

The resurrection of the dead is good and could be cheap

EA ignores the importance of the resurrection of the dead. But there are two ways to increase the chance of resurrection cheaply. The first is plastination – preservation of the brain in a chemical solution – which is organizationally better than cryonics: there is no need for constant care, so it should be cheaper. The second is life-logging as an instrument for achieving digital immortality. Both could be done starting from a few thousand USD per person.

Several people now advocate for accepting the fight against aging as an EA cause area, so this is not new. Still, it should be mentioned that a relatively small life extension (a few years) could be achieved via simple interventions, and each year of life extension increases one’s chances of surviving until radical life extension technologies are developed, so the utility of each year is more than just one year.

EA pumps resources from near to far

EA pumps resources from near to far: to distant countries, to a distant future, to other beings. At the same time, the volume of the “far” is always greater than the volume of the near, which means the pumping will never stop, and therefore the good of the “neighbours” will never come. This provokes a muted protest from the general public, which already feels that it has been robbed by taxes and so on.

But sometimes helping a neighbour is cheaper than helping a distant person, because we have unique knowledge and opportunities in our inner circle. For example, a person is drowning, I am standing on the shore, I throw him a life buoy, and it costs me nothing. I think (though I am not sure) that I have personally prevented several accidents by telling cab drivers that a pedestrian was ahead, etc.

If all billionaires are so committed to the good, then why hasn’t the problem of homelessness in San Francisco been solved yet? If this problem is particularly complex and unsolvable, then maybe other people’s problems only seem simple?

The subject of help is more important than the help itself

We can help effectively either by improving the quality of care or by changing the subject of help. EA tries to find cheaper new subjects: people in poor countries, animals, and future generations. This is the opposite of the typical human type of commitment, where I care not about pain in the abstract, but about a particular person.

A rather separate idea: Insects are more likely to be copies of each other and thus have less moral value

The number of possible states of consciousness in insects is smaller than in humans, as insects presumably have a smaller field of attention. Therefore, they have less moral value, since their mental states are more likely to be copies of each other. If there are one hundred copies of one virus, we can count them as one virus.

Now let's take an ant. The number of possible states of consciousness in it is (most likely) much smaller than in a human. Due to combinatorial effects, it can still be large, but it is astronomically smaller than the number of human states of consciousness. That is, in total, a trillion trillion… trillion states of an ant are possible, and if we create a number of ants greater than this number, then some ants will only be copies of each other. While it is unlikely that we will ever create so many happy ants, any single ant represents a much larger share of all possible ants. Thus, if we want to save an equal share of all possible ants and of all possible humans, we have to save one ant and billions of billions of humans. This helps reinforce the intuition that humans have more moral value, even if animals also have moral value, and it will help us not fall into an “effectiveness trap” of preserving smaller and smaller animals in ever greater numbers.
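A back-of-the-envelope version of this share argument, with placeholder state counts (the actual numbers are unknown; the point is only the ratio):

```python
# "Share of possible minds" arithmetic. The state counts are placeholders
# chosen only to show the ratio; they are not estimates.

ANT_STATES = 10**12      # assumed number of distinct possible ant mind-states
HUMAN_STATES = 10**30    # assumed, astronomically larger, for humans

share_per_ant = 1 / ANT_STATES       # one ant covers this share of ant-space
share_per_human = 1 / HUMAN_STATES   # one human covers far less of human-space

# Humans needed to match the share of mind-space covered by a single ant:
print(f"{share_per_ant / share_per_human:.0e}")   # 1e+18, i.e. billions of billions
```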


Comments

Writing this in a purely personal capacity in my effort to comment more on forum posts as I think of responses:

This is just a general meta point, but to me this post is trying to take on wayyyy too many ideas and claims. I was really intrigued by some of them and would like to see more thorough and detailed arguments for them (i.e. the fog, where are the effects, arbitrage, and the ants). However, since this tried to make so many separate points, many claims were left unsubstantiated, which decreased my confidence in the post and in most single points within it. Similarly, none of the individual points felt fleshed out enough for me to engage with them here in the comments.

I am excited about creative critical critiques but generally want to caution against posting too many at once (unless it is framed as: here is a list of half-baked critiques, let me know which ones intrigue you and I will elaborate on them). In general, I would love to be able to point to any single claim in a post and be able to understand where it came from. However, that is not happening here. So, I'm downvoting this post but looking forward to future ones!

Actually, I wanted to say something like this: "here is a list of half-baked critiques, let me know which ones intrigue you and I will elaborate on them", but removed my introduction, as I thought it would be too personal. Below is what was cut:

"I consider myself an effective altruist: I strive for the benefit of the greatest number of people. I spent around 15 years of my life on topics which I consider EA.

At the same time, there is some difference between my understanding of EA and the “mainstream” EA: my view is that the real good is the prevention of human extinction, the victory over death, and the possibility of unlimited evolution for everyone. This understanding diverges in some aspects from the one generally accepted in EA, where more importance is given to the number of happy moments in human and animal life.

During my work, I have encountered several potential criticisms of EA. In the following, I will briefly characterize each of them."

While EA calls itself “effective”, we rarely see its effects, because the biggest effects are supposed to happen in the remote future, in remote countries, and only statistically.

EA pumps resources from near to far: to distant countries, to a distant future, to other beings. At the same time, the volume of the “far” is always greater than the volume of the near, which means the pumping will never stop, and therefore the good of the “neighbours” will never come. This provokes a muted protest from the general public, which already feels that it has been robbed by taxes and so on.

Generating legible utility is far more costly than generating illegible utility, because people compete to generate legible utility in order to jockey for status. If your goal is to generate utility, to hell with status, then the utility you generate will likely be illegible.

But sometimes helping a neighbour is cheaper than helping a distant person, because we have unique knowledge and opportunities in our inner circle.

If you help your neighbor, he is likely to feel grateful, elevating your status in the local community. Additionally, he would be more likely to help you out if you were ever down on your luck. I'm sure that nobody would ever try to rationalize this ulterior motive under the guise of altruism.

When I tell people that I have reminded a driver to look at a pedestrian ahead and probably saved that pedestrian, they generally react negatively, saying something like: the driver would have seen the pedestrian anyway, but my crazy reaction could have distracted him.

Also, I once barely managed to pull a girl back to safety from a street where an SUV was about to hit her – and she doesn't even call me on my birthday! So helping neighbours doesn't give status, in my experience.

true altruism vs. ulterior motive for social gain as you mention here, as well as legible vs. illegible above...I am less cynical than some people...I often receive from people only imagining they seek my good...and I do for others truly only seeking their good...usually...the side benefits that accrue occasionally in a community are an echo of goodness coming back to you...of course people have a spectrum of motivations, some seek the good, some the echo...but both are beneficial so who cares?  Good doing shouldn't get hung up on motivations, they are trivial...I think they are mostly a personal internal transaction...you may be happier inside if you are less self seeking at your core...but we all have our needs and are evolving. 

We're looking for what would be the greatest good for humanity, and we're searching for a way to safely and effectively implement this good. 

EA considers both of these tasks to be generally solved for the social activities of a public organization (otherwise we would not be doing anything at all), but at the same time completely unsolved for AI alignment.

I'm confused by what's meant here: perhaps I just don't understand what you mean by the clause "for the social activities of a public organization", but I don't think that "EA considers both of these tasks to be generally solved"?

Two things I am speaking about are: 

(1) what terminal moral value (good) is, and

(2) how we can increase it.

EA has some understanding of what (1) and (2) are, e.g. (1) = wellbeing and (2) = donations to effective charities.

But if we ask an AI safety researcher, he can't point to what the final goal of a friendly AI should be. The most he can say is that a future superintelligence will solve this task. Any attempt to define "good" will suffer from our incomplete understanding.

EA works both on AI safety, where good is undefined, and on non-AI-related issues, where good is defined. This looks contradictory: either we know what real good is and could use this knowledge in AI safety, or we don't know, and in that case we can't do anything useful.

In that case, I think there are some issues with equivocation and/or oversimplification in this comparison:

  1. EAs don’t “know what is good” in specific terms like “how do we rigorously define and measure the concept of ‘goodness’”; well-being is a broad metric which we tend to use because people understand each other, which is made easier by the fact that humans tend to have roughly similar goals and constraints. (There are other things to say here, but the more important point is next.)
  2. One of the major problems we face with AI safety is that even if we knew how to objectively define and measure good we aren’t sure how to encode that into a machine and ensure it does what we actually want it to do (as opposed to exploiting loopholes or other forms of reward hacking).

So the following statement doesn’t seem to be a valid criticism/contradiction:

This looks contradictory: either we know what real good is and could use this knowledge in AI safety, or we don't know, and in that case we can't do anything useful.

The problem with (1) is that it assumes the fuzzy set of well-being has a subset of "real goodness" inside it which we just don't know how to define correctly. But it could be that real goodness lies outside well-being. In my view, reaching radical life extension and death-reversal is more important than well-being, if the latter is understood as a comfortable healthy life.

The fact that an organisation is doing good presupposes that some concept of good exists within it. And we can't do good effectively without measuring it, which requires an even stricter model of good. In other words, altruism can't be effective if it avoids defining good.

Moreover, some choices about what is good could be pivotal acts both for organisations and for AIs: for example, should we work more on biosafety, on nuclear war prevention, or on digital immortality (data preservation)? Here again, we are ready to make such choices for an organisation, but not for an AI.

Of course I know that (2) is the main problem in AI alignment. But what I wanted to say here is that many problems we encounter in AI alignment also reappear in organisations, e.g. Goodharting. Without knowing how to solve them, we can't do good effectively.

In short, I don't find your arguments persuasive, and I think they're derived from some errors such as equivocation, weird definitions, etc.

But it could be that real goodness lies outside well-being. In my view, reaching radical life extension and death-reversal is more important than well-being, if the latter is understood as a comfortable healthy life.

First of all, I don't understand the conflict here—why would you want life extension/death reversal if not to improve wellbeing? Wellbeing is almost definitionally what makes life worth living; I think you simply may not be translating or understanding "wellbeing" correctly. Furthermore, you don't seem to offer any justification for that view: what could plausibly make life extension and death-reversal more valuable than wellbeing (given that wellbeing is still what determines the quality of life of the extended lives).

The fact that an organisation is doing good presupposes that some concept of good exists within it. And we can't do good effectively without measuring it, which requires an even stricter model of good. In other words, altruism can't be effective if it avoids defining good.

You can assert things as much as you'd like, but that doesn't justify the claims. Someone does not need to objectively, 100% confidently "know" what is "good" nor how to measure it if various rough principles, intuition, and partial analysis suffices. Maybe saving people from being tortured or killed isn't good—I can't mathematically or psychologically prove to you why it is good—but that doesn't mean I should be indifferent about pressing a button which prevents 100 people from being tortured until I can figure out how to rigorously prove what is "good."

Moreover, some choices about what is good could be pivotal acts both for organisations and for AIs: for example, should we work more on biosafety, on nuclear war prevention, or on digital immortality (data preservation)? Here again, we are ready to make such choices for an organisation, but not for an AI.

This almost feels like a non-sequitur that fails to explicitly make a point, but my impression is that it's saying "it's inconsistent/contradictory to think that we can decide what organizations should do but not be able to align AI." 1) This and the following paragraph still don't address my second point from my previous comment, and so you can't say "well, I know that (2) is a problem,  but I'm talking about the inconsistency"—a sufficient justification for the inconsistency is (2) all by itself; 2) The reason we can do this with organizations more comfortably is that mistakes are far more corrigible, whereas with sufficiently powerful AI systems, screwing up the alignment/goals may be the last meaningful mistake we ever make. 

But what I wanted to say here is that many problems we encounter in AI alignment also reappear in organisations, e.g. Goodharting. Without knowing how to solve them, we can't do good effectively.

I very slightly agree with the first point, but not the second point (in part for reasons described earlier). On the first point, yes, "alignment problems" of some sort often show up in organizations. However: 1) see my point above (mistakes in organizations are more corrigible); 2) aligning humans/organizations—with which we share some important psychological traits and have millennia of experience working with—is fairly different from aligning machines in terms of the challenges. So "solving" (or mostly solving) either one does not necessarily guarantee solutions to the other.

The transition from "good" to "wellbeing" seems rather innocent, but it opens the way to a rather popular line of reasoning: that we should care only about the number of happy observer-moments, without caring whose moments they are. Extrapolating, we stop caring about real humans and start caring about possible animals. In other words, it opens the way to a pure utilitarian-open-individualist bonanza, where the value of human life and individuality is lost and the badness of death is ignored. The last point is the most important for me, as I view irreversible mortality as the main human problem.

I wrote more about why death is bad in Fighting Aging as an Effective Altruism Cause: A Model of the Impact of the Clinical Trials of Simple Interventions – and decided not to repeat it in the main post, as the conditions of the contest require that only new material be published. But I recently found that a similar problem was raised in another entry, in the section "Defending person-affecting views".

The transition from "good" to "wellbeing" seems rather innocent, but it opens the way to a rather popular line of reasoning: that we should care only about the number of happy observer-moments, without caring whose moments they are. Extrapolating, we stop caring about real humans and start caring about possible animals. In other words, it opens the way to a pure utilitarian-open-individualist bonanza, where the value of human life and individuality is lost and the badness of death is ignored. The last point is the most important for me, as I view irreversible mortality as the main human problem.

To be totally honest, this really gives off vibes of "I personally don't want to die and I therefore don't like moral reasoning that even entertains the idea that humans (me) may not be the only thing we should care about." Gee, what a terrible world it might be if we "start caring about possible animals"! 

Of course, that's probably not what you're actually/consciously arguing, but the vibes are still there. It particularly feels like motivated reasoning  when you gesture to abstract, weakly-defined concepts like the "value of human life and individuality" and imply they should supersede concepts like wellbeing, which, when properly defined and when approaching questions from a utilitarian framework, should arguably subsume everything morally relevant. 

You seem to dispute the (fundamental concept? application?) of utilitarianism for a variety of reasons—some of which (e.g., your very first example regarding the fog of distance) I see as reflecting a remarkably shallow/motivated (mis)understanding of utilitarianism, to be honest. (For example, the fog case seems to not understand that utilitarian decision-making/analysis is compatible with decision-making under uncertainty.)

If you'd like to make a more compelling criticism that stems from rebuffing utilitarianism, I would strongly recommend learning more about the framework from people who at least decently understand and promote/use the concept, such as here: https://www.utilitarianism.net/objections-to-utilitarianism#general-ways-of-responding-to-objections-to-utiliarianism

I need to clarify my views: I want to save humans first, and after that save all animals, from those closest to humans to more remote ones. By "saving" I mean resurrection of the dead, of course. I am for the resurrection of mammoths and for cryonics for pets. Such a framework will eventually save everyone, so in the limit it converges with other approaches to saving animals.

But "saving humans first" gives us a leverage, because we will have more powerful civilisation which will have higher capacity to do more good. If humans will extinct now, animals will eventually extinct too when Sun will become a little brighter, around 600 mln. years from now. 

But the claim that I want to save only my life is factually false. 

I’m afraid you’ve totally lost me at this point. Saving mammoths?? Why??

And are you seriously suggesting that we can resurrect dead people whose brains have completely decayed? What?

And what is this about saving humans first? No, we don’t have to save every human first, we theoretically only need to save enough so that the process of (whatever you’re trying to accomplish?) can continue. If we are strictly welfare-maximizing without arbitrary speciesism, it may mean prioritizing saving some of the existing animals over every human currently (although this may be unlikely).

To be clear, I certainly understand that you aren’t saying you only care about saving your own life, but the post gives off those kinds of vibes nonetheless.

Unless you’re collecting data for an EA forum simulator (not IRB approved) I would consider disengaging in some situations. Some posts probably aren’t going to first place as a red team prize.

I am serious about the resurrection of the dead. There are several possible ways, including running a simulation of the whole history of mankind and filling the knowledge gaps with random noise which, thanks to Everett, will be correct in one of the branches. I explained this idea in a longer article: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection

What if we can develop future technology to read all the vibrations emanated from the earth from all of human history...the earliest ones will be farther out, the most recent ones near...then we can filter through them and recreate everything that ever happened on earth, effectively watching what happened in the past...and maybe even to the level of brain waves of each human, thus we could resurrect all previously dead humans by gathering their brain waves and everything they ever said...presumably once re-animated they could gain memory of things missed and reconstruct themselves further.  Of course we could do this with all extinct animals too. 

This really becomes a new version of heaven.  For the religious; what if this was G-d's plan, not to give us a heaven but for us to create one with the minds we have (or have been given) this being the resurrection...maybe G-d's not egoistic and doesn't care if we acknowledge the originating gift meaning atheism is just fine.  We do know love doesn't seek self benefit so that would fit well since "G-d is love". I like being both religious and atheist at the same time, which I am. 

I would like to thank turchin, the author, for inspiring this idea in me, for it is truly blowing my mind. Please let me know of other writings on this.

Erratum: "asymptomatic" -> "asymptotic".

Covid age error!  Corrected. 

turchin! You're just killing me with these ideas, I'm absolutely blown away and excited by what I'm reading here. I commented above on Harrison Durland's comment questioning resurrecting Mammoths and already dead human brains?? Is it that if you happened to die before EA/Longtermists develop the tech to preserve your brain for future re-animation then you're just screwed out of being in on the future? Or from an idea I developed in one of my stories for a different purpose - what if every single movement on the earth created waves which emanated outward into space, every motion, every sound, every word uttered, and even every brain wave...and it is an indelible set of waves forever expanding outwardly...and we develop a reader...and then a player...we could watch scenes of everything that ever happened. Eventually we could reconstruct every brain wave so we could rebuild and then re-animate all humans who ever lived. Wow. Is this the resurrection? Is this the merging of science and religion, atheism and spirituality?

To collect all that information we would need a superintelligent AI, and actually we don't need all the vibrations, but only the most relevant pieces of data - the data capable of predicting human behaviour. Such data could be collected from texts, photos, DNA and historical simulations - but it is better to invest in personal life-logging to increase one's chances of being resurrected.

Can you point me to more writing on this and tell me the history of it?

Check my site about it: http://digital-immortality-now.com/

Or my paper: Digital Immortality: Theory and Protocol for Indirect Mind Uploading

And there is a Facebook group about life-logging as life extension in which a few EAs participate: https://www.facebook.com/groups/1271481189729828

typo: Even now there is more art created when I can consume by many orders of magnitude (should be "then"?)

..."Even now there is more art created then I can consume by many orders of magnitude, and it is embarrassing for the creators as well as for me".   ...besides the typo, I don't actually agree with this sentence. For me it would be like saying, "There are too many people I don't know talking to each other"...I was never meant to know everyone, or to hear every conversation. Some art is global, much art is local. Your child's drawing on the refrigerator is what I call "familial art" it's value is mainly to the parents, to them it is precious, to everyone else it just looks like millions of other kid drawings, cute but not remarkable in any way. It's a hyper-local art. Much art is cultural only understood by the people of that culture. Just as that special form of humor you and your best friend have, it's only for the two of you. There can never be enough art to satisfy the human need for beauty, just as there can never be enough human conversations. 

Sure, it's a typo, thanks, will correct.


Insects are more likely to be copies of each other and thus have less moral value.

There are two city-states, Heteropolis and Homograd, with equal populations, equal average happiness, equal average lifespan, and equal GDP.

Heteropolis is multi-ethnic, ideologically-diverse, and hosts a flourishing artistic community. Homograd's inhabitants belong to one ethnic group, and are thoroughly indoctrinated into the state ideology from infancy. Pursuits that aren't materially productive, such as the arts, are regarded as decadent in Homograd, and are therefore virtually nonexistent.

Two questions for you:

  • Would it be more ethical to nuke Homograd than to nuke Heteropolis?
  • Imagine a trolley problem, with varying numbers of Homograders and Heteropolites tied to each track. Find a ratio that renders you indifferent as to which path the trolley takes. What is the moral exchange rate between Homograders and Heteropolites?

If in Homograd everyone were an exact copy of everyone else, the city would have much less moral value for me.

If in Homograd there were just one pair of exact copies, and all the other people were different, it would mean for me that its real population is N-1, so I would choose to nuke Homograd.

But note: I don't value diversity here aesthetically, but only through the chance that there will be more or fewer exact copies.
