You didn't mention the Long Reflection, which is another point of contact between EA and religion. The Long Reflection is about figuring out what values are actually right, and I think it would be odd not to do a deep study of all the cultures available to us to inform that, including religious ones. Presumably, EA is all about acting on the best values (when it does good, it does what is really good), so maybe it needs input from the Long Reflection to make big decisions.
I've wondered if it's easier to align AI to something simple rather than complex (or if it's more like "aligning things at all is really hard, but adding complexity is relatively easy once you get there"). If simplicity is more practical, then training an AI to do something libertarian might be simpler than to pursue any other value. The AI could protect "agency" (one version of that being "ability of each human to move their bodies as they wish, and the ability to secure their own decision-making ability"). Or, it might turn out to be ea...
This is sort of a loose reply to your essay. (The things I say about "EA" are just my impressions of the movement as a whole.)
I think that EA does have aesthetics; it's just that the (probably not totally conscious) aesthetic value behind them is "lowkeyness" or "minimalism". The Forum and logo seem simple and minimalistically warm, classy, and functional to me.
Your mention of Christianity focuses more on medieval-derived / Catholic elements. Those lean more "thick" and "nationalistic". ("Nationalistic" like "building up a people group ...
I hadn't heard of When the Wind Blows before. From the trailer, I would say Testament may be darker, although a lot of that has to do with me not responding to animation (or When the Wind Blows' animation) as strongly as to live-action. (And then from the Wikipedia summary, it sounds pretty similar.)
I would recommend Testament as a reference for people making X-risk movies. It's about people dying out from radiation after a nuclear war, from the perspective of a mom with kids. I would describe it as emotionally serious, and it presents a woman's and an "ordinary person's" perspective. I guess it could be remade if someone wanted to, or it could just be a good influence on other movies.
Existential risk might be worth talking about because of normative uncertainty. Not all EAs are necessarily hedonists, and perhaps the ones who are shouldn't be, for reasons to be discovered later. So, if we don't know what "value" is, or, as a movement, EA doesn't "know" what "value" is, a priori, we might want to keep our options open, and if everyone is dead, then we can't figure out what "value" really is or ought to be.
If EA has a lot of extra money, could that be spent on incentivizing AI safety research? Maybe offer a really big bounty for solving some subproblem that's really worth solving. (Like if somehow we could read and understand neural networks directly instead of them being black boxes.)
Could EA (and fellow travelers) become the market for an AI safety industry?
I wonder if there are other situations where a person has a "main job" (being a scientist, for instance) and is then presented with a "morally urgent situation" that comes up (realizing your colleague is probably a fraud and you should do something about it). The traditional example is being on your way to your established job and seeing someone beaten up on the side of the road whom you could take care of. This "side problem" can be left to someone else (who might take responsibility, or not) and if taken on, may well be an open-ended and ener...
The plausibility of this depends on exactly what the culture of the elite is. (In general, I would be interested in knowing what all the different elite cultures in the world actually are.) I can imagine there being some tendency toward thinking of the poor / "low-merit" as superfluous, but I can also imagine superrich people not being that extremely elitist and thinking "why not? The world is big; let the undeserving live," or even things which are more humane than that.
But also, despite whatever humaneness there might be in...
This is kind of like my comment at the other post, but it's what I could think of as feedback here.
--
I liked your point IV, that inefficiency might not go away. One reason it might not is because humans (even digital ones) would have something like free will, or caprice, or random preferences, in the same way that they do now. Human values may not behave according to our concept of "reasonable rational values" over time, as they evolve. In human history, there have been impulses toward the rational and the irrational. So the...
This isn't a very direct response to your questions, but is relevant, and is a case for why there might be a risk of factory farming in the long-term future. (This doesn't address the scenarios from your second question.) [Edit: it does have an attempt at answering your third question at the end.]
--
It may be possible that if plant-based meat substitutes are cheap enough and taste like (smell like, have mouth feel of, etc.) animal-derived meat, then it won't make economic sense to keep animals for that purpose.
That's the hopeful take, and I'm guessing...
This is SUPER interesting. And it's amazing that you have put so much thought into this exact issue!
Also, I love that everybody who responded is named James! :-)
I don't think your dialogue seems creepy, but I would put it in the childish/childlike category. The more mature way to love is to value someone for who they are (so you are loving them, a unique personal being, the wholeness of who they are, rather than the fact that they offer you something else) and to be willing to pay a real cost for them.
I use the terms "mature" and "childish/childlike" because (while children are sometimes more genuinely loving than adults), I think there is a natural tendency to lose some of your taste for the flavors, ...
Would it be possible for some kind of third party to give feedback on applications? That way people could get feedback even if hiring organizations find it too costly. Someone familiar with how EA organizations think / with hiring processes specifically, or some kind of career coach, could say "You are in the nth percentile of EAs I counsel. It's likely/unlikely that if you are rejected it's because you're unqualified overall." or "Here are your general strengths and weaknesses as someone applying to this position, ...
Suppose there is some kind of new moral truth, but only one person knows it. (Arguably, there will always be a first person. New moral truth might be the adoption of a moral realism, the more rigorous application of reason in moral affairs, an expansion of the moral circle, an intensification of what we owe the beings in the moral circle, or a redefinition of what "harm" means.)
This person may well adopt an affectively expensive point of view, which won't make any sense to their peers (or may make all too much sense). Their peers m...
The $100 an item market sounds like fair trade. So you might compete with fair trade and try to explain why your approach is better.
The $50,000 an item market sounds harder but more interesting. I'm not sure I would ever buy a $50,000 hoodie or mug, no matter how much money I had or how nice the designs on them were. But I could see myself (if I was rolling in money and cared about my personal appearance) buying a tailored suit for $50,000, and explaining that it only cost $200 to make (or whatever it really does) and the rest went to cha...
This kind of pursuit is something I am interested in, and I'm glad to see you pursue it.
One thing you could look for, if you want, is the "psychological constitution" being written by a text. People are psychological beings and the ideas they hold or try to practice shape their overall psychological makeup, affecting how they feel about things and act. So, in the Bhagavad-Gita, we are told that it is good to be detached from the fruits of action, but to act anyway. What effect would that idea have if EAs took it (to the extent that they h...
One possibility that maybe you didn't close off (unless I missed it) is "death by feature creep" (more likely "decline by feature creep"). It's somewhat related to the slow-rolling catastrophe, but with the assumption that AI (or systems of agents including AI, also involving humans) might be trying to optimize for stability and thus regulate each other, as well as trying to maximize some growth variable (innovation, profit).
Our inter-agent (social, regulatory, economic, political) systems were built by the application of human intelligen...
One thought that re-occurs to me is that there could be two, related EA movements, which draw from each other. No official barrier to participating in both (like being on LessWrong and EA Forum at the same time). Possible to be a leader in both at the same time (if you have time/energy for it). One of them emphasizes the "effective" in "effective altruists", the other the "altruists". The first more like current EA, the second more focused on increasing the (lasting) altruism of the greatest number of people. Human reso...
"King Emeric's gift has thus played an important role in enabling us to live the monastic life, and it is a fitting sign of gratitude that we have been offering the Holy Sacrifice for him annually for the past 815 years."
(source: https://sancrucensis.wordpress.com/2019/07/10/king-emeric-of-hungary/)
It seems to me like longtermists could learn something from people like this. (Maintaining a point of view for 800 years, both keeping the values aligned enough to do this and being around to be able to.)
(Also a short blog post by me occasioned by these m...
Moral realism can be useful in letting us know what kind of things should be considered moral.
For instance, if you ground morality in God, you might say: Which God? Well, if we know which one, we might know his/her/its preferences, and that inflects our morality. Also, if God partially cashes out to "the foundation of trustworthiness, through love", then we will approach knowing and obligation themselves (as psychological realities) in a different way (less obsessive? less militant? or, perhaps, less rigorously responsible?).
Sharon Hewitt Rawlette (i...
I can see the appeal in having one ontological world. What is that world, exactly? Is it that which can be proven scientifically (in the sense of, through the scientific method used in natural science)? I think what can be proven scientifically is perhaps what we are most sure is real or true. But things that we are less certain of being real can still exist, as part of the same ontological world. The uncertainty is in us, not in the world. One simplistic definition of natural science is that it is simply rigorous empiri...
Here are some ideas:
The rich have too much money relative to the poor:
Taking money versus eliciting money.
Taking via
Eliciting via
Change of culture
Targeting elite money
--
Revolutions are risky and can lead to worse governments.
Taxation might work better. (Closing tax haven loopholes.) Building political will for higher taxes on wealthy. There are people in the US who don't want the...
1. I don't know much about probability and statistics, so forgive me if this sounds completely naive (I'd be interested in reading more on this problem, if it's as simple for you as saying "go read X").
Having said that, though, I may have an objection to fanaticism, or something in the neighborhood of it:
(The following is long, sorry about that. Maybe I should have written it up already as a normal post. A one sentence abstract could be: "Social media algorithms could be dangerous as a part of the overall process of leading people to 'consent' to being lesser forms of themselves to further elite/AI/state goals, perhaps threatening the destruction of humanity's longterm potential.")
It seems plausible to me that something like algorithmic behavior modification (social media algorithms are algorithms designed to modify human behav...
OK, this person on the EA subreddit uses a kind of meditation to reduce irrational/ineffective guilt.
I like the idea of coming up with some kind of practice to retrain yourself to be more altruistic. There should be some version of that idea that works, and maybe exposing yourself to stories / imagery / etc. about people / animals who can be helped would be part of that.
One possibility is that such images could become naturally compelling for people (and thus would tend to be addictive or obsession-producing, because of their awful compellingness) -- for such people, this practice is probably bad, sometimes (often?) a net bad. But for other people, the...
Also, this makes me curious: have things changed any since 2007? Does the promotion of 1 still seem as necessary? What role has the letter (or similar ideas/sentiments) played in whatever has happened with charities and funders over the last 13 years?
I think there's a split between 1) "I personally will listen to brutal advice because I'm not going to let my feelings get in the way of things being better" and 2) "I will give brutal advice because other people's feelings shouldn't get in the way of things being better". Maybe Holden wanted people to internalize 1 at the risk of engaging in 2. 2 may have been his way of promoting 1, a way of invalidating the feelings of his readers, who would go on to then be 1 people.
I'm pretty sure that there's a wa...
It looks like some people downvoted you, and my guess is that it may have to do with the title of the post. It's a strong claim, but also not as informative as it could be; it doesn't mention anything to do with climate change or GHGs, for instance.
Similarly, one could be concerned that the rapid economic growth that AI is expected to bring about could cause a lot of GHG emissions unless somehow we (or they) figure out how to use clean energy instead.
Here's a related quote from Eccentrics by David Weeks and Jamie James (pp. 67–68) (I think it's Weeks speaking in the following quote):
My own concept of creativity is that it is effective, empathic problem-solving. The part that empathy plays in this formulation is that it represents a transaction between the individual and the problem. (I am using the word "problem" loosely, as did Ghiselin: for an artist, the problem might be how to depict an apple.) The creative person displaces his point of view into the problem, investing it...
Thinking back on books that have had a big effect on me, I think they were things which spoke to something already in me, maybe something genetic, to a large extent. It's like I was programmed from birth to have certain life movements, and so I could immediately recognize what I read as the truth when it came to me -- "that's what I was always wanting to say, but didn't know how!" I think that probably explains HP:MOR to a large extent (but I haven't read HP:MOR).
My guess is that a large part of Yudkowsky's motivation ...
Interesting. A point I could get out of this is: "don't take your own ideology too seriously, especially when the whole point of your ideology is to make yourself happy."
An extreme hedonism (a really faithful one) is likely to produce outcomes like:
"I love you."
"You mean, I give you pleasure?"
"Well, yeah! Duh!"
Which is a funny thing to say, kind of childish or childlike. (Or one could make the exchange be creepy: "Yeah, you mean nothing more to me than the pleasure you give me.")
Do people really ex...
I can see a scenario where BCI totalitarianism sounds like a pretty good thing from a hedonic utilitarian point of view:
People are usually more effective workers when they're happy. So a pragmatic totalitarian government (like Brave New World) rather than a sadistic one or sadistic/pragmatic (1984, maybe) would want its people to be happy all the time, and would stimulate whatever in the brain makes them happy. To suppress dissent it would just delete thoughts and feelings in that direction as painlessly as possible. Competing governments would hav...
In any case, both that quoted statement of yours and my tweaked version of it seem very different from the claim "if we don't currently know how to align/control AIs, it's inevitable there'll eventually be significantly non-aligned AIs someday"?
Yes, I agree that there's a difference.
I wrote up a longer reply to your first comment (the one marked "Answer"), but then I looked up your AI safety doc and realized that I might do better to read through the readings in that first.
Yeah, I wasn't being totally clear with respect to what I was really thinking in that context. I was thinking "from the point of view of people who have just been devastated by some not-exactly superintelligent but still pretty smart AI that wasn't adequately controlled, people who want to make that never happen again, what would they assume is the prudent approach to whether there will be more non-aligned AI someday?", figuring that they would think "Assume that if there are more, it is inevitable that there will be some non-alig...
A few things this makes me think of:
explore vs. exploit: For the first part of your life (the first 37%?), you gather information; then for the last part, you use that information, maximizing and optimizing according to it. Humans have definite lifespans, but movements don't. Perhaps a movement's lifespan depends somewhat on how much exploration it continues to do.
Christianity: I think maybe the only thing all professed Christians have in common is attraction to Jesus, who is vaguely or definitely understood. You could think of Christianity as a movement...
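On the 37% figure above: it comes from the optimal-stopping ("secretary") problem, where rejecting the first ~37% (1/e) of options and then taking the first one better than everything seen so far maximizes the chance of picking the single best option. A minimal simulation sketch (the function name and parameters are my own, for illustration) shows that this cutoff beats exploring much less or much more:

```python
import random

def success_rate(n, cutoff, trials=20000, seed=0):
    """Estimate P(we select the single best of n candidates) when we
    reject the first `cutoff` candidates, then take the first candidate
    better than everything seen so far."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))
        rng.shuffle(ranks)  # ranks[i] = quality of i-th candidate; n-1 is best
        best_seen = max(ranks[:cutoff], default=-1)
        chosen = ranks[-1]  # forced to take the last one if nothing beats best_seen
        for r in ranks[cutoff:]:
            if r > best_seen:
                chosen = r
                break
        wins += (chosen == n - 1)
    return wins / trials

# A ~37% exploration phase outperforms both a short and a long one.
n = 100
print(success_rate(n, 5), success_rate(n, 37), success_rate(n, 90))
```

The ~37% cutoff succeeds roughly 37% of the time, which is the classic result; the point for movements is that the optimal strategy still spends a large fraction of its horizon purely exploring.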
A few free ideas occasioned by this:
1. The fact that this is a government paper makes me think of "people coming together to write a mission statement." To an extent, values are agreed upon by society, and it's good to bear that in mind. (Working with widespread values instead of against them, accepting that to an extent values are socially constructed (or aren't, but the crowd could be objectively right and you wrong) and adjusting to what's popular instead of using a lot of energy to try to change things.)
2. My first reaction ...
I'm basically an outsider to EA, but "from afar", I would guess that some of the values of EA are 1) against politicization, 2) for working and building rather than fighting and exposing ("exposing" being "saying the unhealthy truth for truth's sake", I guess), 3) for knowing and self-improvement (your point), 4) concern for effectiveness (Gordon's point). And of course, the value of altruism.
These seem like they are relatively safe to promote (unless I'm missing something).
Altruism is composed of 1) other...
I may not have understood all of what you said, but I was left with a few thoughts after finishing this.
1. Creating Bob to have values: if Bob is created to be able to understand that he was created to have values, and to be able to then, himself, reject those values and choose his own, then I say he is probably more free than if he wasn't. But, having chosen his own values, he now has to live in society, a society possibly largely determined by an AI. If society is out of tune with him, he will have limited ability to live out his values, ...