You didn't mention the Long Reflection, which is another point of contact between EA and religion.  The Long Reflection is about figuring out which values are actually right, and I think it would be odd not to do a deep study of all the cultures available to us, including religious ones, to inform that.  Presumably EA is all about acting on the best values (when it does good, it does what is really good), so maybe it needs input from the Long Reflection to make big decisions.

I've wondered if it's easier to align AI to something simple rather than something complex (or if it's more like "aligning to anything at all is really hard, but adding complexity is relatively easy once you get there").  If simplicity is more practical, then training an AI to do something libertarian might be simpler than training it to pursue any other value.  The AI could protect "agency" (one version of that being "the ability of each human to move their body as they wish, and to secure their own decision-making ability").  Or it might turn out to be easier to program AIs to listen to humans, so that they end up under the rule of human political and economic structures, or some other way of aggregating human decision-making.  Under either a libertarian or a human-obeying AI, humans could pursue their religions mostly as they always have.

This is sort of a loose reply to your essay.  (The things I say about "EA" are just my impressions of the movement as a whole.)

I think EA does have aesthetics; it's just that the (probably not totally conscious) aesthetic value behind them is "low-keyness" or "minimalism".  The Forum and the logo seem simple and minimalistically warm, classy, and functional to me.

Your mention of Christianity focuses more on medieval-derived / Catholic elements.  Those lean more "thick" and "nationalistic" ("nationalistic" in the sense of building up a people group with a deeper emotional identity and shared history, maybe one that can motivate the strongest interpersonal and communitarian bonds).  But there are other versions of Christianity: more modern / Protestant / Puritan / desert.  Sometimes people are put off by the poor aesthetics of Protestant Christianity, but at some times and in some contexts people have preferred Protestantism over Catholicism despite its relative aesthetic poverty.  One thing the Puritan (and to an extent Protestant) and desert Christianities have in common is an emphasis on self-discipline, work, and frugality.  Self-discipline, work, and frugality seem to be a big part of being an EA, or at least of EA as it has been up to now.  So maybe in that sense EA (consciously or not) has exactly the aesthetic it should have.

I think a lack of aesthetics helps a movement be less "thick" and "nationalistic", and avoiding politics is an EA goal.  (EA might like to affect politics, but it wants to avoid political identity at the same time.)  If you have a "nice looking flag", you might "kill and die" for it.  The more developed your identity, the more you feel you have to engage in "wars" (at least flame wars) over it.  I think EA is conflict-averse and wants to avoid politics (maybe it sometimes wants to change politics without being politically committed, or to change politics in the least "stereotypically political", least "politicized" way possible).  EA favors normative uncertainty and being agnostic about what the good is.  So EAs might not want more-developed aesthetics if those aesthetics come with commitments.

I think the EA movement as it currently exists is doing (more or less) the right thing aesthetically.  But the foundational ideas of EA (the things that change people's lives so that they become altruistic in orientation, sense that there is work for them to do and that they have to do it "effectively", or perhaps try to expand their moral circles) might be worth exporting to other cultures: perhaps to a secular culture that is the "thick" version of EA, or to existing, "thicker" cultures, like the various Christian, Muslim, Buddhist, Hindu, etc. cultures.  A "thick EA" might innovate aesthetically and create a unique (secular, I assume) utopian vision in addition to the numerous other aesthetic/futuristic visions that already exist.  But "thick EA" would be a different thing from the existing "thin EA".

I hadn't heard of When the Wind Blows before.  From the trailer, I would say Testament may be darker, although a lot of that has to do with my not responding to animation (or at least When the Wind Blows' animation) as strongly as to live-action.  (And from the Wikipedia summary, they sound pretty similar.)

I would recommend Testament as a reference for people making X-risk movies.  It's about people dying out from radiation after a nuclear war, told from the perspective of a mother with kids.  I would describe it as emotionally serious, and it presents a woman's and an "ordinary person's" perspective.  It could be remade if someone wanted to, or it could just be a good influence on other movies.

If EA has a lot of extra money, could that be spent on incentivizing AI safety research?  Maybe offer a really big bounty for solving some subproblem that's really worth solving (like finding a way to read and understand neural networks directly instead of treating them as black boxes).

Could EA (and fellow travelers) become the market for an AI safety industry?

I wonder if there are other situations where a person has a "main job" (being a scientist, for instance) and is then presented with a "morally urgent situation" that comes up (say, realizing that a colleague is probably a fraud and that you should do something about it).  The traditional example is being on your way to your established job and seeing someone beaten up on the side of the road whom you could take care of.  This "side problem" can be left to someone else (who might take responsibility, or might not), and if taken on, it may well become an open-ended, energy-draining project with unpredictable outcomes for the person who takes it on.  Are there other kinds of "morally urgent side problems" that come up, and are there better or worse ways to decide whether to engage?

The plausibility of this depends on exactly what the culture of the elite is.  (In general, I would be interested in knowing what all the different elite cultures in the world actually are.)  I can imagine some tendency toward thinking of the poor / "low-merit" as superfluous, but I can also imagine superrich people not being that extremely elitist and thinking "why not? The world is big, let the undeserving live", or even something more humane than that.

But also, despite whatever humaneness there might be in the elite, I can see there being Molochian pressures to discard humans.  Can Moloch be stopped?  (This seems like it would be a very important thing to accomplish, if tractable.)  If we could solve international competition (competition between the elite cultures who are in charge of things), then nations could choose not to have the most advanced economies they possibly could, and thus could have a more "pro-slack" mentality.

Maybe AGI will solve international competition?  I think a relatively simple, safe alignment for an AGI would be one where it serves humans -- but which humans?  Each individual?  Or the elites who currently represent them?  If the elites, then it wouldn't automatically stop Moloch.  But otherwise it might.

(Or the AGI could respect the autonomy of humans and let them have whatever values they want, including international competition, which may plausibly be humanity's "revealed preference".)

This is kind of like my comment at the other post, but it's what I could think of as feedback here.

--

I liked your point IV, that inefficiency might not go away.  One reason it might not is that humans (even digital ones) would have something like free will, or caprice, or random preferences, in the same way that they do now.  Human values, as they evolve, may not behave according to our concept of "reasonable rational values"; in human history there have been impulses toward both the rational and the irrational.  So future people might for some reason prefer something like "authentic" beef from a real / biological cow (rather than digital-world simulated beef), or wish to make some kind of sacrifice of "atoms" for some weird far-future religion or quasi-religion that evolves.

--

I don't know if my view is a mainstream one in longtermism, but I tend to think that civilization is inherently prone to fragility, and that it is uncertain we will ever have faster-than-light travel or communications.  (I haven't thought a lot about these things, so maybe someone can show me a better way to see them.)  If we don't have FTL, then the different planets we colonize will be far enough apart to develop divergent cultures, and they will generally be unable to be helped by others in case of trouble.  Maybe the trouble would be something like an asteroid strike.  Or maybe it would be an endogenous cultural problem, like a power struggle among digital humans rippling out into the operation of the colony.

If this "trouble" caused a breakdown in civilization on some remote planet, it might impair their ability to do high tech things (like produce cultured meat).  If there is some risk of this happening, they would probably try to have some kind of backup system.  The backup system could be flesh-and-blood humans (more resilient in a physical environment than digital beings, even ones wedded to advanced robotics), along with a natural ecosystem and some kind of agriculture.  They would have to keep the backup ecosystem and humans going throughout their history, and then if "trouble" came, the backup ecosystem and society might take over.  Maybe for a while, hoping to return to high-tech digital human society, or maybe permanently, if they feel like it.

At that point, whether they redevelop factory farming depends entirely on the culture of the backup society staying true to "no factory farming".  If they do redevelop it, that would become part of the far future's "burden of suffering" (or whatever term is better than that).

I guess one way to prevent this kind of thing from happening (maybe what longtermists already suggest) is to simply assume that some planets will break down and try to re-colonize them if that happens, instead of expecting them to be able to deal with their own problems.

I guess if there isn't such a thing as FTL, our ability to colonize space will be greatly limited, and so the sheer quantity of possible suffering will be a lot lower (as will whatever good sentience gets out of existence).  But if, say, we colonize only 100 planets over the remainder of our existence (under no-FTL) and 5% of them re-develop factory farming, that's still five times as many factory-farming planets as the one we have today.
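(Spelling out that back-of-the-envelope arithmetic, using the purely illustrative figures above of 100 colonized planets and a 5% rate of re-developing factory farming:

$$100 \times 0.05 = 5 \text{ planets with factory farming} = 5 \times \text{today's single such planet, Earth.}$$

The numbers are just placeholders, but the point stands for any colony count and relapse rate whose product exceeds one.)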
