All of James_Banks's Comments + Replies

Blameworthiness for Avoidable Psychological Harms

Suppose there is some kind of new moral truth, but only one person knows it. (Arguably, there will always be a first person. New moral truth might be the adoption of a moral realism, the more rigorous application of reason in moral affairs, an expansion of the moral circle, an intensification of what we owe the beings in the moral circle, or a redefinition of what "harm" means.)

This person may well adopt an affectively expensive point of view, which won't make any sense to their peers (or may make all too much sense).  Their peers m... (read more)

Cullen_OKeefe (7mo): These are all great considerations! However, I think that it's perfectly consistent with my framework to analyze the total costs of avoiding a harm, including harms to society from discouraging true beliefs or chilling the reasoned exchange of ideas. So in the case you imagine, there's a big societal moral cost from the peers' reactions, which they therefore have good reason to try to minimize. This generalizes to the case where we don't know whose moral ideas are true by "penalizing" (or at least failing to indulge) psychological frameworks that impede moral discourse and reasoning (perhaps this is one way of understanding the First Amendment).

Would you buy from an altruistic shop?

The $100-an-item market sounds like fair trade. So you might compete with fair trade and try to explain why your approach is better.

The $50,000-an-item market sounds harder but more interesting. I'm not sure I would ever buy a $50,000 hoodie or mug, no matter how much money I had or how nice the designs on them were. But I could see myself (if I were rolling in money and cared about my personal appearance) buying a tailored suit for $50,000, and explaining that it only cost $200 to make (or whatever it really does) and the rest went to cha... (read more)

arielpontes (7mo): Honestly, at this point the $50,000 item is more like a joke, something to make a point. I'd have a listing there to raise awareness of the fact that people really are spending that kind of money on stupid stuff all the time, which I think is a moral scandal. Of course, I'd be happy if somebody actually bought it! In any case, although people usually don't buy t-shirts or hoodies for $50,000, they buy them for a few hundred dollars all the time. This shop idea is largely inspired by this Patriot Act episode:

Religious Texts and EA: What Can We Learn and What Can We Inform?

This kind of pursuit is something I am interested in, and I'm glad to see you pursue it.

One thing you could look for, if you want, is the "psychological constitution" being written by a text.  People are psychological beings and the ideas they hold or try to practice shape their overall psychological makeup, affecting how they feel about things and act.  So, in the Bhagavad-Gita, we are told that it is good to be detached from the fruits of action, but to act anyway.  What effect would that idea have if EAs took it (to the extent that they h... (read more)

Some thoughts on risks from narrow, non-agentic AI

One possibility that maybe you didn't close off (unless I missed it) is "death by feature creep" (more likely "decline by feature creep").  It's somewhat related to the slow-rolling catastrophe, but with the assumption that AI (or systems of agents including AI,  also involving humans) might be trying to optimize for stability and thus regulate each other, as well as trying to maximize some growth variable (innovation, profit).

Our inter-agent (social, regulatory, economic, political) systems were built by the application of human intelligen... (read more)

Being Inclusive

One thought that recurs to me is that there could be two related EA movements, which draw from each other. There would be no official barrier to participating in both (like being on LessWrong and the EA Forum at the same time), and it would be possible to be a leader in both at the same time (if you have the time/energy for it). One would emphasize the "effective" in "effective altruists", the other the "altruists". The first would be more like current EA; the second would be more focused on increasing the (lasting) altruism of the greatest number of people. Human reso... (read more)

James_Banks's Shortform

"King Emeric's gift has thus played an important role in enabling us to live the monastic life, and it is a fitting sign of gratitude that we have been offering the Holy Sacrifice for him annually for the past 815 years."

(source: https://sancrucensis.wordpress.com/2019/07/10/king-emeric-of-hungary/ )

It seems to me like longtermists could learn something from people like this. (Maintaining a point of view for 800 years requires both keeping the values aligned enough to do this and surviving long enough to be able to.)

(Also a short blog post by me occasioned by these m... (read more)

The despair of normative realism bot

Moral realism can be useful in letting us know what kinds of things should be considered moral.

For instance, if you ground morality in God, you might say: Which God? Well, if we know which one, we might know his/her/its preferences, and that inflects our morality.  Also, if God partially cashes out to "the foundation of trustworthiness, through love", then we will approach knowing and obligation themselves (as psychological realities) in a different way (less obsessive? less militant? or, perhaps, less rigorously responsible?).

Sharon Hewitt Rawlette (i... (read more)

The despair of normative realism bot

I can see the appeal in having one ontological world.  What is that world, exactly?  Is it that which can be proven scientifically (in the sense of, through the scientific method used in natural science)?  I think what can be proven scientifically is perhaps what we are most sure is real or true.  But things that we are less certain of being real can still exist, as part of the same ontological world.  The uncertainty is in us, not in the world.  One simplistic definition of natural science is that it is simply rigorous empiri... (read more)

What types of charity will be the most effective for creating a more equal society?

Here are some ideas:

The rich have too much money relative to the poor:

Taking money versus eliciting money.

Taking via

  • revolution
  • taxation

Eliciting via

  • shame, pressure, guilt
  • persuasion, psychological skill
  • friendship

Change of culture

  • culture in general
  • elite culture

Targeting elite money

  • used to be stewards of investments
  • used for personal spending

--

Revolutions are risky and can lead to worse governments.

Taxation might work better: closing tax-haven loopholes, building political will for higher taxes on the wealthy. There are people in the US who don't want the... (read more)

Expected value theory is fanatical, but that's a good thing

1. I don't know much about probability and statistics, so forgive me if this sounds completely naive (I'd be interested in reading more on this problem, if it's as simple for you as saying "go read X").

Having said that, though, I may have an objection to fanaticism, or something in the neighborhood of it:

  • Let's say there is a suite of short-term-payoff, high-certainty bets for making things better.
  • And also a suite of long-term-payoff, low-certainty bets for making things better. (Things that promise "super-great futur
... (read more)
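
(For concreteness, here is a toy sketch of the comparison I have in mind; the numbers are all made up for illustration:)

```python
# Toy expected-value comparison (all numbers invented for illustration):
# a near-certain, modest short-term bet vs. a tiny-probability, huge-payoff bet.

def expected_value(p, payoff):
    # Expected value of a bet: success probability p times the payoff if it succeeds.
    return p * payoff

short_term = expected_value(p=0.9, payoff=100)   # 90.0 units of good
long_shot = expected_value(p=1e-6, payoff=1e9)   # 1000.0 units of good

# The fanatical long shot dominates in expectation,
# even though it fails 999,999 times out of a million.
print(short_term, long_shot)
```
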
Are social media algorithms an existential risk?

(The following is long, sorry about that. Maybe I should have written it up already as a normal post. A one-sentence abstract could be: "Social media algorithms could be dangerous as part of the overall process of leading people to 'consent' to being lesser forms of themselves to further elite/AI/state goals, perhaps threatening the destruction of humanity's longterm potential.")

It seems plausible to me that something like algorithmic behavior modification (social media algorithms are algorithms designed to modify human behav... (read more)

Deliberate Consumption of Emotional Content to Increase Altruistic Motivation

I like the idea of coming up with some kind of practice to retrain yourself to be more altruistic. There should be some version of that idea that works, and maybe exposing yourself to stories / imagery / etc. about people / animals who can be helped would be part of that.

One possibility is that such images could become naturally compelling for people (and thus would tend to be addictive or obsession-producing, because of their awful compellingness) -- for such people, this practice is probably bad, sometimes (often?) a net bad. But for other people, the... (read more)

James_Banks (1y): OK, this person on the EA subreddit [https://old.reddit.com/r/EffectiveAltruism/comments/iro2d5/how_do_you_handle_the_guilt_of_walking_past/g50y5qw/] uses a kind of meditation to reduce irrational/ineffective guilt.

I think there's a split between 1) "I personally will listen to brutal advice because I'm not going to let my feelings get in the way of things being better" and 2) "I will give brutal advice because other people's feelings shouldn't get in the way of things being better". Maybe Holden wanted people to internalize 1 at the risk of engaging in 2. 2 may have been his way of promoting 1, a way of invalidating the feelings of his readers, who would then go on to be 1 people.

I'm pretty sure that there's a wa... (read more)

James_Banks (1y): Also, this makes me curious: have things changed at all since 2007? Does the promotion of 1 still seem as necessary? What role has the letter (or similar ideas/sentiments) played in whatever has happened with charities and funders over the last 13 years?

Here's what Should be Prioritized as the Main Threat of AI

It looks like some people downvoted you, and my guess is that it may have to do with the title of the post. It makes a strong claim, but it's also not as informative as it could be; it doesn't mention anything to do with climate change or GHGs, for instance.

Here's what Should be Prioritized as the Main Threat of AI

Similarly, one could be concerned that the rapid economic growth that AI is expected to bring about could cause a lot of GHG emissions, unless we (or the AIs) somehow figure out how to use clean energy instead.

When can Writing Fiction Change the World?

Here's a related quote from Eccentrics by David Weeks and Jamie James (pp. 67-68). (I think it's Weeks speaking in the following quote:)

My own concept of creativity is that it is effective, empathic problem-solving. The part that empathy plays in this formulation is that it represents a transaction between the individual and the problem. (I am using the word "problem" loosely, as did Ghiselin: for an artist, the problem might be how to depict an apple.) The creative person displaces his point of view into the problem, investing it
... (read more)

When can Writing Fiction Change the World?

Thinking back on books that have had a big effect on me, I think they were things which spoke to something already in me, maybe something genetic, to a large extent. It's like I was programmed from birth to have certain life movements, and so I could immediately recognize what I read as the truth when it came to me -- "that's what I was always wanting to say, but didn't know how!" I think that probably explains HP:MOR to a large extent (but I haven't read HP:MOR).

My guess is that a large part of Yudkowsky's motivation ... (read more)

James_Banks (1y): Here's a related quote from Eccentrics by David Weeks and Jamie James (pp. 67-68; I think it's Weeks speaking in the quote). This makes me think: "You become the problem, and then at high stakes are forced to solve yourself, because now it's a life or death situation for you."

Book Review: Deontology by Jeremy Bentham

Interesting. A point I could get out of this is: "don't take your own ideology too seriously, especially when the whole point of your ideology is to make yourself happy."

An extreme hedonism (a really faithful one) is likely to produce outcomes like:

"I love you."

"You mean, I give you pleasure?"

"Well, yeah! Duh!"

Which is a funny thing to say, kind of childish or childlike. (Or one could make the exchange creepy: "Yeah, you mean nothing more to me than the pleasure you give me.")

Do people really ex... (read more)

A New X-Risk Factor: Brain-Computer Interfaces

I can see a scenario where BCI totalitarianism sounds like a pretty good thing from a hedonic utilitarian point of view:

People are usually more effective workers when they're happy. So a pragmatic totalitarian government (like the one in Brave New World), rather than a sadistic or sadistic/pragmatic one (1984, maybe), would want its people to be happy all the time, and would stimulate whatever in the brain makes them happy. To suppress dissent, it would just delete thoughts and feelings in that direction as painlessly as possible. Competing governments would hav... (read more)

What do we do if AI doesn't take over the world, but still causes a significant global problem?

In any case, both that quoted statement of yours and my tweaked version of it seem very different from the claim "if we don't currently know how to align/control AIs, it's inevitable there'll eventually be significantly non-aligned AIs someday"?

Yes, I agree that there's a difference.

I wrote up a longer reply to your first comment (the one marked "Answer"), but then I looked up your AI safety doc and realized that I might do better to read through the readings in that first.

What do we do if AI doesn't take over the world, but still causes a significant global problem?

Yeah, I wasn't being totally clear about what I was really thinking in that context. I was thinking, "from the point of view of people who have just been devastated by some not-exactly-superintelligent but still pretty smart AI that wasn't adequately controlled -- people who want to make that never happen again -- what would they assume is the prudent approach to whether there will be more non-aligned AI someday?", figuring that they would think, "Assume that if there are more, it is inevitable that there will be some non-alig... (read more)

MichaelA (1y): I at least approximately agree with that statement. I think there'd still be some reasons to think there won't someday be significantly non-aligned AIs. For example, a general argument like: "People really really want to not get killed or subjugated or deprived of things they care about, and typically also want that for other people to some extent, so they'll work hard to prevent things that would cause those bad things. And they've often (though not always) succeeded in the past." (Some discussions of this sort of argument can be found in the section on "Should we expect people to handle AI safety and governance issues adequately without longtermist intervention?" [https://docs.google.com/document/d/1zGp9qFrMqeZBvMRSvtpAwcGjXWU5VQ5HmbmmjlxEtak/edit#heading=h.wt1vcsupipqw] in Crucial questions [https://forum.effectivealtruism.org/posts/wicAtfihz2JmPRgez/crucial-questions-for-longtermists].)

But I don't think those arguments make significantly non-aligned AIs implausible, let alone impossible. (Those are both vague words. I could maybe operationalise that as something like a 0.1-50% chance remaining.) And I think that that's all that's required (on this front) in order for the rest of your ideas in this post to be relevant.

In any case, both that quoted statement of yours and my tweaked version of it seem very different from the claim "if we don't currently know how to align/control AIs, it's inevitable there'll eventually be significantly non-aligned AIs someday"?

Objections to Value-Alignment between Effective Altruists

A few things this makes me think of:

explore vs. exploit: For the first part of your life (the first 37%?), you gather information; then for the last part, you use that information, maximizing and optimizing according to it. (The 37% figure comes from the optimal-stopping "secretary problem"; see the sketch just below.) Humans have definite lifespans, but movements don't. Perhaps a movement's lifespan depends somewhat on how much exploration it continues to do.
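
(Hypothetical sketch, with parameters of my own choosing, of the optimal-stopping rule behind the 37% figure:)

```python
import math
import random

# Simulate the classic "secretary problem": observe the first n/e (~37%) of
# candidates without committing, then take the first one better than all seen.

def secretary_trial(n=100):
    candidates = [random.random() for _ in range(n)]
    cutoff = round(n / math.e)                   # ~37% of the pool
    best_seen = max(candidates[:cutoff])
    for value in candidates[cutoff:]:
        if value > best_seen:
            return value == max(candidates)      # did we pick the overall best?
    return candidates[-1] == max(candidates)     # forced to take the last one

trials = 10_000
print(sum(secretary_trial() for _ in range(trials)) / trials)  # ~0.37, i.e. 1/e
```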

Christianity: I think maybe the only thing all professed Christians have in common is attraction to Jesus, who is vaguely or definitely understood. You could think of Christianity as a movement... (read more)

jkmh (1y): This is an interesting perspective. It makes me wonder if/how there could be decently defined sub-groups that EAs can easily identify, e.g. "long-termists interested in things like AI" vs. "short-termists who place significantly more weight on current living things", or "human-centered" vs. "those who place significant weight on non-human lives". Like within Christianity, specific values/interpretations can/should be diverse, which leads to sub-groups. But there is sort of a "meta-value" that all sub-groups hold, which is that we should use our resources to do the most good that we can. It is vague enough to be interpreted in many ways, but specific enough to keep the community organized.

I think the fact that I could come up with (vaguely-defined) examples of sub-groups indicates that, in some way, the EA community already has sub-communities. I agree with the original post that there is a risk of too much value-alignment that could lead to stagnation or other negative consequences. However, in my 2 years of reading/learning about EA, I've never thought that EAs were unaware or overly confident in their beliefs, i.e. it seems to me that EAs are self-critical and self-aware enough to consider many viewpoints. I personally never felt that just because I don't want (nor can I imagine) an AI singleton that brings stability to humanity meant that I wasn't an EA.

What values would EA want to promote?

A few free ideas occasioned by this:

1. The fact that this is a government paper makes me think of "people coming together to write a mission statement." To an extent, values are agreed-upon by society, and it's good to bear that in mind. (Working with widespread values instead of against them, accepting that to an extent values are socially-constructed (or aren't, but the crowd could be objectively right and you wrong) and adjusting to what's popular instead of using a lot of energy to try to change things.)

2. My first reaction ... (read more)

What values would EA want to promote?

I'm basically an outsider to EA, but "from afar", I would guess that some of the values of EA are 1) against politicization, 2) for working and building rather than fighting and exposing ("exposing" being "saying the unhealthy truth for truth's sake", I guess), 3) for knowing and self-improvement (your point), 4) concern for effectiveness (Gordon's point). And of course, the value of altruism.

These seem like they are relatively safe to promote (unless I'm missing something).

Altruism is composed of 1) other... (read more)