James_Banks


Blameworthiness for Avoidable Psychological Harms

Suppose there is some kind of new moral truth, but only one person knows it.  (Arguably, there will always be a first person.  New moral truth might be the adoption of moral realism, the more rigorous application of reason in moral affairs, an expansion of the moral circle, an intensification of what we owe the beings in the moral circle, or a redefinition of what "harm" means.)

This person may well adopt an affectively expensive point of view, which won't make any sense to their peers (or may make all too much sense).  Their peers may have their feelings hurt by this new moral truth, and retaliate against them.  The person with the new moral truth may endure an almost self-destructive life pattern due to the moral truth's dissonance with the status quo.  Other peers will object to this, pressure that person to give up their moral truth, and wear away at them to try to "save" them.  In the process of resisting the "caring peer", the new-moral-truth person does things that hurt the "caring peer"'s feelings.

There are at least two ideologies at play here.  (The new one and the old one, or the old ones if there are more than one.)  So we're looking at a battle between ideologies, played out on the field of accounting for personal harm.  Which ideology does a norm of honoring the least-cost principle favor?  Wouldn't all the harm that gets traded back and forth simply not happen if the new-moral-truth person just hadn't adopted their new ideology in the first place?  So the "court" (popular opinion? an actual court?) that enforces the least-cost principle would probably interpret things according to the status quo's point of view and enforce adherence to the status quo.  But if there is such a thing as moral truth, then we are better off hearing it, even if it's unpopular.

Perhaps the least-cost principle is good, but there should be some provision in a "court" for considering whether ideologies are true and thus inherently require a certain set of emotional reactions.

Would you buy from an altruistic shop?

The $100-an-item market sounds like fair trade.  So you might compete with fair trade and try to explain why your approach is better.

The $50,000-an-item market sounds harder but more interesting.  I'm not sure I would ever buy a $50,000 hoodie or mug, no matter how much money I had or how nice the designs on them were.  But I could see myself (if I was rolling in money and cared about my personal appearance) buying a tailored suit for $50,000, and explaining that it only cost $200 to make (or whatever it really does) and the rest went to charity.  You might have to establish your brand in a conventional way (tailored suits, fancy dresses, runway shows, etc.) and be compelling artistically, as well as have the ethical angle.  My guess is that you would need both to compete at that level.

Religious Texts and EA: What Can We Learn and What Can We Inform?

This kind of pursuit is something I am interested in, and I'm glad to see you pursue it.

One thing you could look for, if you want, is the "psychological constitution" being written by a text.  People are psychological beings, and the ideas they hold or try to practice shape their overall psychological makeup, affecting how they feel about things and act.  So, in the Bhagavad-Gita, we are told that it is good to be detached from the fruits of action, but to act anyway.  What effect would that idea have if EAs took it (to the extent that they haven't already)?  Or a whole population?  (Similarly with its advice to meditate.)  EAs already psychologically relate to the fruits of their actions in some way.  The theistic religions can blend relationship with ideals or truth itself with relationship with a person.  What difference would that blending make to EAs or the population at large?  I would guess it would produce a different kind of knowing -- maybe not changing object-level beliefs (although it could), but changing the psychology of believing (holding an ideal as a relationship to a person or a loyalty to a person rather than an impersonal law, for instance).

Some thoughts on risks from narrow, non-agentic AI

One possibility that maybe you didn't close off (unless I missed it) is "death by feature creep" (more likely "decline by feature creep").  It's somewhat related to the slow-rolling catastrophe, but with the assumption that AI (or systems of agents including both AIs and humans) might be trying to optimize for stability and thus regulate each other, as well as trying to maximize some growth variable (innovation, profit).

Our inter-agent (social, regulatory, economic, political) systems were built by the application of human intelligence, to the point that human intelligence can't comprehend the whole, making it hard to solve systemic problems.  So in one possible scenario, humans plus narrow AI might simplify the system at first, but then keep adding features to the system of civilization until it is unwieldy again.  (Maybe a superintelligent AGI could figure it out?  But if it started adding its own features, then maybe not even it would understand what had evolved.)  Complexity can come from competitive pressures, but also from technological innovations.  Each innovation stresses the system until the system can assimilate it more or less safely, by means of new regulation (social media messes up politics unless / until we can break or manage some of its power).

Then, if some kind of feedback loop leading toward civilizational decline begins, general intelligences (humans, if humans are the only general intelligences) might be even less capable of figuring out how to reverse course than they currently are.  In a way, this could be narrow AI as just another important technology, marginally complicating the world.  But also, we might use narrow AI as tools in AI / AI-plus-humans governance (or perhaps in understanding innovation), and they might be capable of understanding things that we cannot (often things that AIs themselves made up), creating a dependency that could contribute in a unique way to a decline.

(Maybe "understand" is the wrong word to apply to narrow AI but "process in a way sufficiently opaque to humans" works and is as bad.)

Being Inclusive

One thought that recurs to me is that there could be two related EA movements, which draw from each other.  There would be no official barrier to participating in both (like being on LessWrong and the EA Forum at the same time), and it would be possible to be a leader in both at the same time (if you have the time/energy for it).  One of them emphasizes the "effective" in "effective altruists", the other the "altruists".  The first would be more like current EA; the second would be more focused on increasing the (lasting) altruism of the greatest number of people -- human-resource focused.

Just about anyone could contribute to the second one, I would think.  It could be a pool of people from which to recruit for the first one, and both movements would share ideas and culture (to an appropriate degree).

James_Banks's Shortform

"King Emeric's gift has thus played an important role in enabling us to live the monastic life, and it is a fitting sign of gratitude that we have been offering the Holy Sacrifice for him annually for the past 815 years."

(source: https://sancrucensis.wordpress.com/2019/07/10/king-emeric-of-hungary/ )

It seems to me like longtermists could learn something from people like this.  (They have maintained a point of view for over 800 years, both keeping their values aligned enough to do this and surviving long enough to be able to.)

(Also a short blog post by me occasioned by these monks about "being orthogonal to history" https://formulalessness.blogspot.com/2019/07/orthogonal-to-history.html )

The despair of normative realism bot

Moral realism can be useful in letting us know what kind of things should be considered moral.

For instance, if you ground morality in God, you might say: Which God? Well, if we know which one, we might know his/her/its preferences, and that inflects our morality.  Also, if God partially cashes out to "the foundation of trustworthiness, through love", then we will approach knowing and obligation themselves (as psychological realities) in a different way (less obsessive? less militant? or, perhaps, less rigorously responsible?).

Sharon Hewitt Rawlette (in The Feeling of Value) grounds her moral realism in "normative qualia", which for her is something like "the component of pain that feels unacceptable" (or its opposite in pleasure), which leads her to hedonic utilitarianism.  Not to preference satisfaction or anything else, but specifically to hedonism.

I think both of the above are best grounded in a "naturalism" (a "one-ontological-world-ism" from my other comment), rather than in anything Enochian or Parfitian.  

The despair of normative realism bot

I can see the appeal in having one ontological world.  What is that world, exactly?  Is it that which can be proven scientifically (in the sense of, through the scientific method used in natural science)?  I think what can be proven scientifically is perhaps what we are most sure is real or true.  But things that we are less certain of being real can still exist, as part of the same ontological world.  The uncertainty is in us, not in the world.  One simplistic definition of natural science is that it is simply rigorous empiricism.  The rigor isn't how we are metaphysically connected with things; rather, it's the empirical that connects us, the experiences contacting or occurring to observers.  The rigor simply helps us interpret our experiences.

We can have random experiences that don't add up to anything.  But whatever experiences give rise to our concept "morality" -- which we do seem to be able to discuss with some success with other people, and have done so in different time periods -- may be rooted in a natural reality (one which is not part of the deliverances of "natural science" as "natural" is commonly understood, but which is part of "natural science" if by "natural" we mean "part of the one ontological world").  Morality is something we try hard to make a science of (hence the field of ethics), but which to some extent eludes us.  That doesn't mean that there isn't something natural there; it may just be something we have so far not figured out.

What types of charity will be the most effective for creating a more equal society?

Here are some ideas:

The rich have too much money relative to the poor:

Taking money versus eliciting money.

Taking via

  • revolution
  • taxation

Eliciting via

  • shame, pressure, guilt
  • persuasion, psychological skill
  • friendship

Change of culture

  • culture in general
  • elite culture

Targeting elite money

  • money used to steward investments
  • money used for personal spending

--

Revolutions are risky and can lead to worse governments.

Taxation might work better (closing tax haven loopholes, building political will for higher taxes on the wealthy). There are people in the US who don't want higher taxes on the wealthy even though it would materially benefit them (a culture change opportunity).

Eliciting could be more effective. Social justice culture (which is OK with shame, pressure, and guilt) has philanthropic charities, such as the Guerrilla Foundation and Resource Generation (not exactly aligned with EA, but already established -- you could donate or join now).

Eliciting via persuasion or psychological tactics sounds like it would appeal to some people to try to do.

Eliciting via friendship: what if a person, or movement, was very good friends with both rich and poor people? Then they could represent the legitimate interests of both to each other in a trustworthy way. I'm not sure anyone is trying this route. Maybe the Giving Pledge counts?

Change of culture. What are the roots of the altruistic mindset? What would help people have, or prepare people to have, values of altruists (a list of such for EA or EA-compatible people; there could be other lists)? Can this be something that gets "in the water" of culture at large? Can culture at large reach into elite culture, or does there have to be a special intervention to get values into elite culture? This sounds more like a project for a movement or set of movements than for a discrete charity.

Elite people have money that they spend on themselves personally -- easy to imagine they could just spend $30,000 a year on themselves and no more, give the balance to charity. But they also have money tied up in investments. Not so easy to ask them to liquidate those investments. If they are still in charge of those investments, then there is an inequality of power, since they can make decisions that affect many people without really understanding the situation of those people. Maybe nationalize industries? But then there can still be an inequality of power between governments and citizens.

If there can be a good flow between citizens and governments, whereby the citizens' voices are heard by the government, then could there be a similar thing between citizens and unelected elite? Probably somebody needs to be in charge of complex and powerful infrastructure, inevitably leading to potential for inequalities of power. Do the elite have an effective norm of listening to non-elite?

--

You might also consider the effect of AI and genetic engineering, or other technologies, on the problem of creating a more equal society. AI will either be basically under human control, or not. If it is, the humans who control it will be yet another elite. If it isn't, then we have to live with whatever society it comes up with. We can hope that maybe AI will enforce norms that we all really want deep down but couldn't enforce ourselves, like equality.

On the other hand, maybe, given the ability to change our own nature using genetic engineering, we (perhaps with the help of the elite) will choose to no longer care about inequality, only a basic sense of happiness which will be attainable by the emerging status quo.

Expected value theory is fanatical, but that's a good thing

1. I don't know much about probability and statistics, so forgive me if this sounds completely naive (I'd be interested in reading more on this problem, if it's as simple for you as saying "go read X").

Having said that, though, I may have an objection to fanaticism, or something in the neighborhood of it:

  • Let's say there is a suite of short-term-payoff, high-certainty bets for making things better.
  • And also a suite of long-term-payoff, low-certainty bets for making things better. (Things that promise "super-great futures".)

You could throw a lot of resources at the low-certainty bets, and if the certainty is low enough, you could get to the end of time and say "we got nothing for all that". If the individual bets are low-certainty enough, even if you had a lot of them in your suite, you would still have a very high probability of getting nothing for your troubles. (The state of coming up empty-handed.)
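To make that concrete, here is a rough sketch with my own illustrative numbers (not anything from the paper): if the bets are independent and each succeeds with probability p, the chance of the whole suite coming up empty-handed is (1 - p)^N.

```python
# Illustrative only: probability that a suite of N independent long-shot bets
# all fail, i.e. the chance of coming up empty-handed at the end of time.
# The numbers of bets and success probabilities below are made up for the example.
for n_bets, p_success in [(10, 1e-6), (1000, 1e-6), (1000, 1e-3)]:
    p_empty_handed = (1 - p_success) ** n_bets
    print(f"{n_bets} bets at p = {p_success}: P(empty-handed) ≈ {p_empty_handed:.4f}")
```

So a suite of sufficiently unlikely bets can stay near-certain to deliver nothing, but once the individual probabilities aren't too tiny relative to the number of bets, the chance of coming up empty drops quickly -- which is the shape of the "defeat that defeater" reply a couple of paragraphs down.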

That investment could have come at the cost of pursuing the short-term, high certainty suite.

So you might feel regret at the end of time for not having pursued the safer bets, and with that in mind, it might be intuitively rational to pursue safe bets, even with less expected value. You could say "I should pursue high EV things just because they're high EV", and this "avoid coming up empty-handed" consideration might be a defeater for that.

You can defeat that defeater with "no, actually the likelihood of all these high-EV bets failing is low enough that the high-EV suite is worth pursuing."

2. It might be equally rational to pursue safety as it is to pursue high EV; it's just that the safety person and the high-EV person have different values.

3. I think in the real world, people hold something like a mixed portfolio, like Taleb's advice of "expose yourself to high-risk, high-reward investments/experiences/etc., and also low-risk, low-reward." And how they do that shows, practically speaking, how much they value super-great futures versus not coming up empty-handed. Do you think your paper, if it got its full audience, would do something like "get some people to shift their resources a little more toward high-risk, high-reward investments"? Or do you think it would have a more radical effect? (A big shift toward high-risk, high-reward? A real bullet-biting, where people do the bare minimum to survive and invest all other resources into pursuing super-high-reward futures?)
