James_Banks

Comments

Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter

This is sort of a loose reply to your essay.  (The things I say about "EA" are just my impressions of the movement as a whole.)

I think that EA has aesthetics; it's just that the (probably not totally conscious) aesthetic value behind them is "lowkeyness" or "minimalism".  The Forum and logo seem simple and minimalistically warm, classy, and functional to me.

Your mention of Christianity focuses more on medieval-derived / Catholic elements.  Those lean more "thick" and "nationalistic" ("nationalistic" in the sense of "building up a people group that has a deeper emotional identity and shared history", maybe one which can motivate the strongest interpersonal and communitarian bonds).  But there are other versions of Christianity: more modern / Protestant / Puritan / desert.  Sometimes people are put off by the poor aesthetics of Protestant Christianity, but in some times and contexts, people have preferred Protestantism over Catholicism despite its relative aesthetic poverty.  I think one thing the Puritan (and to an extent Protestant) and desert Christianities have in common is an emphasis on self-discipline, work, and frugality.  Self-discipline, work, and frugality seem to be a big part of being an EA, or at least of EA as it has been up to now.  So maybe in that sense, EA (consciously or not) has exactly the aesthetic it should have.

I think aesthetic lack helps a movement be less "thick" and "nationalistic", and avoiding politics is an EA goal.  (EA might like to affect politics, but avoid political identity at the same time.)  If you have a "nice-looking flag", you might "kill and die" for it.  The more developed your identity, the more you feel you have to engage in "wars" (at least flame wars) over it.  I think EA is conflict-averse and wants to avoid politics (maybe it sometimes wants to change politics without being politically committed, or to change politics in the least "stereotypically political", least "politicized" way possible?).  EA favors normative uncertainty and being agnostic about what the good is.  So EAs might not want to have more-developed aesthetics, if those aesthetics come with commitments.

I think the EA movement as it is is doing (more or less) the right thing aesthetically.  But the foundational ideas of EA (the things that change people's lives so that they are altruistic in orientation, give them a sense that there is work for them to do and that they have to do it "effectively", or maybe cause them to try to expand their moral circles) are ones that perhaps ought to be exported to other cultures: to a secular culture that is the "thick" version of EA, or to existing more-"thick" cultures, like the various Christian, Muslim, Buddhist, Hindu, etc. cultures.  A "thick EA" might innovate aesthetically and create a unique (secular, I assume) utopian vision in addition to the numerous other aesthetic/futuristic visions that exist.  But "thick EA" would be a different thing from the existing "thin EA".

13 ideas for new Existential Risk Movies & TV Shows – what are your ideas?

I hadn't heard of When the Wind Blows before.  From the trailer, I would say Testament may be darker, although a lot of that has to do with my not responding to animation (or When the Wind Blows' animation) as strongly as to live-action.  (From the Wikipedia summary, though, it sounds pretty similar.)

13 ideas for new Existential Risk Movies & TV Shows – what are your ideas?

I would recommend Testament as a reference for people making X-risk movies.  It's about people dying out from radiation after a nuclear war, told from the perspective of a mom with kids.  I would describe it as emotionally serious, and it presents a woman's and "ordinary person's" perspective.  It could be remade if someone wanted to, or it could just be a good influence on other movies.

Why should we care about existential risk?

Existential risk might be worth talking about because of normative uncertainty.  Not all EAs are necessarily hedonists, and perhaps the ones who are shouldn't be, for reasons to be discovered later.  So if we don't know what "value" is a priori (or, as a movement, EA doesn't "know" what "value" is), we might want to keep our options open, and if everyone is dead, we can't figure out what "value" really is or ought to be.

James_Banks's Shortform

If EA has a lot of extra money, could that be spent on incentivizing AI safety research?  Maybe offer a really big bounty for solving some subproblem that's really worth solving (like if somehow we could read and understand neural networks directly instead of treating them as black boxes).

Could EA (and fellow travelers) become the market for an AI safety industry?

Liars

I wonder if there are other situations where a person has a "main job" (being a scientist, for instance) and is then presented with a "morally urgent situation" that comes up (realizing your colleague is probably a fraud and that you should do something about it).  The traditional example is being on your way to your established job and seeing someone beaten up on the side of the road whom you could take care of.  This "side problem" can be left to someone else (who might take responsibility, or not), and if taken on, it may well be an open-ended and energy-draining project with unpredictable outcomes for the person deciding whether to take it on.  Are there other kinds of "morally urgent side problems that come up", and are there better or worse ways to deal with the decision whether to engage?

A Primer on God, Liberalism and the End of History

The plausibility of this depends on exactly what the culture of the elite is.  (In general, I would be interested in knowing what all the different elite cultures in the world actually are.)  I can imagine there being some tendency toward thinking of the poor / "low-merit" as being superfluous, but I can also imagine superrich people not being that extremely elitist, thinking "why not? The world is big, let the undeserving live", or even things which are more humane than that.

But also, despite whatever humaneness there might be in the elite, I can see there being Molochian pressures to discard humans.  Can Moloch be stopped?  (This seems like it would be a very important thing to accomplish, if tractable.)  If we could solve international competition (competition between the elite cultures who are in charge of things), then nations could choose not to have the most advanced economies they possibly could, and thus could have a more "pro-slack" mentality.

Maybe AGI will solve international competition?  I think a relatively simple, safe alignment for an AGI would be one in which it was the servant of humans -- but which ones?  Each individual?  Or the elites who currently represent them?  If the elites, then it wouldn't automatically stop Moloch.  But otherwise it might.

(Or the AGI could respect the autonomy of humans and let them have whatever values they want, including international competition, which may plausibly be humanity's "revealed preference".)

Why the expected numbers of farmed animals in the far future might be huge

This is kind of like my comment at the other post, but it's the feedback I could think of here.

--

I liked your point IV, that inefficiency might not go away.  One reason it might not is that humans (even digital ones) would have something like free will, or caprice, or random preferences, in the same way that they do now.  Human values may not behave according to our concept of "reasonable rational values" as they evolve over time; in human history, there have been impulses toward both the rational and the irrational.  So future humans might for some reason prefer something like "authentic" beef from a real / biological cow (rather than digital-world simulated beef), or wish to make some kind of sacrifice of "atoms" for some weird far-future religion or quasi-religion that evolves.

--

I don't know if my view is a mainstream one in longtermism, but I tend to think that civilization is inherently prone to fragility, and that it is uncertain we will ever have faster-than-light travel or communications.  (I haven't thought a lot about these things, so maybe someone can show me a better way to see this.)  If we don't have FTL, then the different planets we colonize will be far enough apart to develop divergent cultures, and they will generally be unable to be helped by others in case of trouble.  Maybe the trouble would be something like an asteroid strike.  Or maybe it would be an endogenous cultural problem, like a power struggle among digital humans rippling out into the operation of the colony.

If this "trouble" caused a breakdown in civilization on some remote planet, it might impair their ability to do high tech things (like produce cultured meat).  If there is some risk of this happening, they would probably try to have some kind of backup system.  The backup system could be flesh-and-blood humans (more resilient in a physical environment than digital beings, even ones wedded to advanced robotics), along with a natural ecosystem and some kind of agriculture.  They would have to keep the backup ecosystem and humans going throughout their history, and then if "trouble" came, the backup ecosystem and society might take over.  Maybe for a while, hoping to return to high-tech digital human society, or maybe permanently, if they feel like it.

At that point, whether factory farming is redeveloped depends entirely on the culture of the backup society staying true to "no factory farming".  If they do redevelop factory farming, that would be part of the far future's "burden of suffering" (or whatever term is better than that).

I guess one way to prevent this kind of thing from happening (maybe what longtermists already suggest), is to simply assume that some planets will break down, and try to re-colonize them if that happens, instead of expecting them to be able to deal with their own problems.

I guess if there isn't such a thing as FTL, our ability to colonize space will be greatly limited, and so the sheer quantity of suffering possible will be a lot lower (as well as whatever good sentience gets out of existence).  But if, say, we only colonize 100 planets over the remainder of our existence (under no-FTL), and 5% of them re-develop factory farming, that's still five planets with factory farming -- five times as many as Earth today.

Who is protecting animals in the long-term future?

This isn't a very direct response to your questions, but it is relevant, and it makes a case for why there might be a risk of factory farming in the long-term future.  (This doesn't address the scenarios from your second question.) [Edit: it does have an attempt at answering your third question at the end.]

--

It may be possible that if plant-based meat substitutes are cheap enough and taste like (smell like, have the mouthfeel of, etc.) animal-derived meat, then it won't make economic sense to keep animals for that purpose.

That's the hopeful take, and I'm guessing maybe a more mainstream take.

If life is always cheaper in the long run for producing meat substitutes (if the best genetic engineering can always produce life that out-competes the best non-life lab techniques), would it have to be sentient life, or could it be some kind of bacteria or something like that?  It doesn't seem to me that sentience is helpful in making animal protein; it probably just imposes some cost.

(Another hopeful take.)

A less hopeful take: one advantage that life has over non-life, and where sentience might be an advantage, is that it can be let loose in an environment unsupervised and then rounded up for slaughter.  So we could imagine "pioneers" on a lifeless planet letting loose some kind of future animal as part of terraforming, then rounding them up and slaughtering them.  This is not the same as factory farming, but if the slaughtering (or rounding-up) process is excessively painful, that is something to be concerned about.

My guess is that one obstacle to humans being kind to animals (or being generous in any other way) has to do with whether they are in "personal survival mode".  Utilitarian altruists might be in a "global survival mode" and care about X-risk.  But when times get hard for people personally, they tend to become more "personal survival mode" people.  Maybe being a pioneer on a lifeless planet is a hard thing that can go wrong (for the pioneers), and the cultures formed by that founding experience will have a hard time being fully generous.

Global survival mode might be compatible with caring about animal welfare.  But personal survival mode is probably more effective at solving personal problems than global survival mode (or there is a decent reason to think it could be), even though global survival mode implies that you should care about your own well-being as part of the whole.  This is because personal survival mode is more desperate and efficient, and so more focused and driven toward the outcome of personal survival.  Maybe global survival mode is sufficient for human survival, but it would make sense that personal survival mode could outcompete it and seem attractive when times get hard.

Basically, we can imagine space colonization as a furtherance of our highest levels of civilization, with all the colonists selected for their civilized values before being sent out.  But maybe each colony would be somewhat fragile and isolated, and could restart at, or devolve to, a lower level of civilization, reviving whatever less-civilized values we feel we have grown past.  Maybe from that, factory farming could re-emerge.

If we can't break the speed of light, it seems likely to me that space colonies (at least if made of humans) will undergo their own cultural evolution and become somewhat estranged from us and from each other (because it will be too hard to stay in touch), and that will risk the re-emergence of values we don't like from human history.

How much of cultural evolution is more or less an automatic response to economic development, and how much is path-dependent?  If there is path-dependency, we would want to seed each new space colony with colonists who 1) think globally (or maybe "cosmically" is a better term at this scale), with an expanded moral circle, or more importantly, a tendency to expand their moral circles; 2) are not intimidated by their own deaths; 3) maybe have other safeguards against personal survival mode; 4) but are still effective enough at surviving.  And we would try to institutionalize those tendencies in an ongoing colonial culture (so that the colonists can survive, but without going into personal survival mode).  For references for that seeded culture, maybe we would look to past human civilizations which produced people who were more global than they had to be given their economic circumstances, or who were notably global even in a relatively "disestablished" (chaotic, undeveloped, dysfunctional, insecure) or stressed state or environment.

(That's a guess at an answer to your third question.)

Book Review: Deontology by Jeremy Bentham

I don't think your dialogue seems creepy, but I would put it in the childish/childlike category.  The more mature way to love is to value someone in who they are (so you are loving them, a unique personal being, the wholeness of who they are, rather than the fact that they offer you something else) and to be willing to pay a real cost for them.

I use the terms "mature" and "childish/childlike" because (while children are sometimes more genuinely loving than adults) I think there is a natural tendency, as you grow older, to lose some of your taste for the flavors, sounds, feelings of excitement, and so on that you liked as a child, and to be forced to pay for people, and to come to love them more deeply (more genuinely) because of it.

"Person X gives me great pleasure, a good thing" and "Person X is happy, another good thing" -- Is Person X substitutable for an even greater pleasure?  Like, would you vaporize Person X (even without causing them pain), so that you could get high/experience tranquility if that gave you greater pleasure? Or from a more altruistic or all-things-considered perspective, if that would cause there to be more pleasure in the world as a whole? If you wouldn't, then I think there's something other than extreme  hedonism going on.

I do think that you can love people in the very act of enjoying them (something I hadn't realized when I wrote the comment you replied to).  I am not sure if that is always the case when someone enjoys someone else, though.  The case I would now make for loving someone just because you enjoy them would be something like this: 

  1. "love" of a person is "valuing a person in a personal way, as what they are, a person"; 
  2. you can value consciously and by a choice of will; 
  3. or, you can value unconsciously/involuntarily by being receptive to enhancement from them.  Your body (or something like your body) is in an attitude of receiving good from them.  ("Receptivity to enhancement" is Joseph Godfrey's definition of trust from Trust of  People, Words, and God.)
  4. being receptive to enhancement (trusting) is (or could be) your body saying "I ask you to benefit me with real benefit, there is value in you with which to bring me value, you help me with a real need I have, a real need that I have is when there's something I really lack (when there's a lack of value in my  eyes), you are valuable in bringing me value, you are valuable".
  5. if the receptivity that is a valuing is receptive to a "you" that to it is a person (unique, personal, unsubstitutable), then you value that person in who they are, and you love them

It's possible that creepy people enjoy other people in a way that denies that they are persons, denying the other persons' unique personhood.  Or they only enjoy without trusting (or while trusting only in a minimal way).  Fungibility implies control over your situation and a certain level of indifference about how to dispose of things.  (Vulnerability (deeper trust) inhibits fungibility.)  The person who is enjoyed has become a fungible "hedonic unit" to the creepy person.

(Creepy hedonic love: a spider with a fly wrapped in silk, a fly which is now a meal.  Non-creepy hedonic love: a calf nursing from a cow, a mutuality.)

A person could be consciously or officially a thoroughgoing hedonist, but subconsciously enjoy people in a non-creepy way.

I think maturity is like a medicine that helps protect against the tendency of the childish/childlike to sometimes become creepy.
