
c.trout

50 karma · Joined Oct 2022 · www.ctrout.art/

Bio

Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: naturalist realism + virtue ethics.

Thinks longtermism rests on a false premise – that for every moral agent and every value-bearing location, an agent's spatio-temporal distance from a given value-bearing location does not factor into the value borne at that location (e.g. no matter how close or far a person is from you, that person should matter the same to you).

Thinks we should spend a lot more resources trying to delay HLMI – make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet "luddite" so long as this is understood to describe someone who:

  • suspects that, on net, technological progress yields diminishing marginal human flourishing
  • OR believes workers have a right to organize to defend their interests (you know – what the original Luddites were doing)
  • OR suspects that, with regard to AI, the Luddite fallacy may not be a fallacy: AI really could lead to widespread permanent technological unemployment, and that might not be a good thing
  • OR, considering the common-sense observation that societies can only adapt so quickly, suspects that excessive rates of technological change can lead to social harms, independent of how the technology is used.
    • Assuming some baseline number of good and bad actors, we always need norms, regulations, etc. to ensure new tech is overall a net benefit for society. But excessively rapid change makes the optimal norms/regulations excessively fast-moving targets. On a deeper note, social bonds could start to fray under excessively rapid change: think of different generations or groups who adopted a given new tech at different times/rates being unable to connect with one another, their experiences varying too greatly. Think of teachers being unable to connect with, guide, or prepare their students effectively, given that their own experience is already outdated/invalidated and will only become more so by the time their students are adults.

Subscribes to Crocker's Rules: unvarnished critical (but constructive) feedback is welcome.

Comments (15)

c.trout · 6mo · 23

Thank you very much for your perspective! I recently wrote about something closely related to this "emotions problem" but hadn't considered how the EA community offered a home for neurodivergent folks. I have now added a disclaimer making sure we 'normies' remember to keep you in mind!

c.trout · 6mo · 10

No worries about the strong response – I misjudged how my words would be interpreted. I'm glad we sorted that out.

Regarding overthinking ethical stuff and SBF: 
Unfortunately I fear you've missed my point. First of all, I wasn't really talking about any fraud/negligence that he may have committed. As I said in the 2nd paragraph:

Regarding his ignorance and his intentions, he might be telling the truth. Suppose he is: suppose he never condoned doing sketchy things as a means he could justify by some expected greater good. Where then is the borderline moral nihilism coming from? Note that it's saying "all the right shibboleths" that he spoke of as mere means to an end, not the doing of sketchy things. 

My subject was his attitude/comments towards ethics. Second, my diagnosis was not that:

SBF's problems with ethics came from careful debate in business ethics and then missing a decimal point in the relevant calculations.

My point was that getting too comfortable approaching ethics as a careful calculation can be dangerous in the first place – no matter how accurate the calculation is. It's not about missing some decimal points. Please reread this section if you're interested. I updated the end of it with a reference to a clear, falsifiable claim.

c.trout · 6mo · 10

Fair enough!

But also: if the EA community will only correct the flaws in itself that it can measure, then... good luck. Seems short-sighted to me.

I may not have the data to back up my hypothesis, but it's also not as if I pulled this out of thin air. And I'm not the first to find this hypothesis plausible.

c.trout · 6mo · 10

I claim that there is a healthy amount of moral calculation one should do, but that doing too much of it has harmful side-effects. I claim, for these reasons, that Consequentialism (and the culture surrounding it) tends to result in abuse of moral calculation more than VE does. I don't expect abuse to arise in the majority of people who engage with/follow Consequentialism – just in more of them than among those who engage with/follow VE. I also claim, for reasons at the end of this section, that abuse will be more prevalent among those who engage with rationalism than among those who don't.

If I'm right about this flaw in the community culture around here, and this flaw in any way contributed to SBF talking the way he did, shouldn't the community consider taking some steps to curb that problematic tendency?

c.trout · 6mo · 10

It seems like your model is that you assume most people start with a sort of organic, healthy gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.

Moral calculation (and faking it 'til you make it) can be helpful in becoming more virtuous, but only to a limited extent – you can push it too far. And anyway, it's not the only way to become a better person. I think more helpful is what I mentioned at the end of my post:

Encourage your friends to call out your vices. (In turn, steer your friends away from vice and try to be a good role model for the impressionable). Engage with good books, movies, plays etc. Virtue ethicists note that art has a great potential for exercising and training moral awareness...

If you want to see how the psych literature bears on a related topic (romantic relationships instead of ethics in general), see Eva Illouz's Why Love Hurts: A Sociological Explanation (2012), Chapter 3. Search for the heading "The New Architecture of Romantic Choice or the Disorganization of the Will" (p. 90 in my edition) if you want to skip right to it. You might be able to read the entire section through the Google Books preview. I recommend the whole book, though, if you're interested.

c.trout · 6mo · 32

Yikes! Thank you for letting me know! Clearly a very poor choice of words: that was not at all my intent!

To be clear, I agree with EAs on many many issues. I just fear they suffer from "overthinking ethical stuff too often" if you will.

c.trout · 6mo · 10

...outsourcing our charity evaluations to specialists. I don’t have to decide if bednets or direct donations is better: GiveWell does it for me with their wonderful spreadsheets.

And I don’t have to consider every moment whether deontology or consequentialism is better: the EA movement and my identity as an EA does a lot of that work for me. It also licenses me to defer to habit almost 100% of the time

These are good things, and you're right to point them out! I certainly don't expect to find that every EA is a walking utility calculator – I expect that to be extremely rare. I also don't expect to find internal moral disharmony in every EA, though I expect it to be much less rare than walking utility calculators.

I just want to add one thing, just to be sure everything is clear. I'm glad you see how "a conscious chain of ethical reasoning held in the mind is not what should be motivating our actions" (i.e. we should not be walking utility calculators). But that was just my starting point. Ultimately I want to claim that, whether you're in a "heat of the moment" situation or not, getting too used to applying a calculating maximizer's mindset in realms typically governed by affect can result in the following:

  1. Worst case extreme scenario: you become a walking utility calculator, and are perfectly at peace with yourself about being one. You could be accused of being cold, calculating, uncaring.
  2. More likely scenario: you start adopting a calculating maximizer's mindset when you shouldn't (e.g. when trying to decide whether to go see a sick friend or not) even though you know you shouldn't, or you didn't mean to adopt that mindset. You could be accused of being inadvertently cold and calculating – someone who, sadly, tends to overthink things.
    1. In such situations, because you've adopted that mindset, you dampen your positive affective attachment to the decision you make (or to the object at the center of that decision), even though you started with strong affect toward that decision/object. E.g. when you first heard your friend was in the hospital, you got a pit in your stomach, but it eventually wore off as you weighed the pros and cons of going to see them versus doing something else with your time (as you began, perhaps, comparing friends to decide whom to spend time with). Whatever you end up deciding to do, you feel ambivalent about it.
    2. Any cognitive dissonance you might have (e.g. an internal monologue like "Why am I thinking so hard about this? I should have just gone with my gut"), and the struggle to resolve that dissonance, only worsen 2.a.
  3. Either way: in general, considerations that once engendered an emotional response now start leaving you cold (or colder). This in turn can result in:
    1. A more general struggle to motivate oneself to do what one believes one should do.
    2. Seeing ethics as "just a game."

Was that clear? Since it's getting clearer for me, I fear it wasn't clear in the post... It seems it needed to go through one more draft!

c.trout · 6mo · 10

I meant strong relative to "internal moral disharmony." But also, am I to understand people are reading the label of "schizophrenia" as an accusation? It's a disorder that one gets through no choice of one's own: you can't be blamed for having it. Hypocrisy, as I understand it, is something we have control over and therefore are responsible for avoiding or getting rid of in ourselves.

At most, Stocker is blaming Consequentialism and DE for inducing moral schizophrenia. But it's the theory that's at fault, not the person who suffers from it!

c.trout · 6mo · 30

Thanks for the suggestion. I ended up going with "internal moral disharmony" since it's innocuous and accurate enough. I think "hypocrisy" is too strong and too narrow: it's a species of internal moral disharmony (closely related to the "extreme case" in Stocker's terms), one which seems to imply no feelings of remorse or frustration with oneself regarding the disharmony. I wanted to focus on the more "moderate case" in which the disharmony is not too strong, one feels a cognitive dissonance, and one attempts to resolve the disharmony so as not to be a hypocrite.

c.trout · 6mo · 10

Regarding the term "moral schizophrenia":
As I said to AllAmericanBreakfast, I wholeheartedly agree the term is outdated and inaccurate! Hence the scare quotes and the caveat I put in the heading of the same name. But obviously I underestimated how bad the term was, since everyone is telling me to change it. I'm open to suggestions! EDIT: I replaced it with "internal moral disharmony." Kind of a mouthful, but good enough for a blog post.

Regarding predictions:
You're right, that wasn't a very exact prediction (mostly because internal moral disharmony is going to be hard to measure). Here is a falsifiable claim that I stand by and that, if true, is evidence of internal moral disharmony:

I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one's decision (or to the object at the center of one's decision) in such scenarios.

More specifically, I predict that, above a certain threshold of engagement with the community, increased engagement with the LW/EA community correlates with an increase in the maximizer's mindset, an increase in cognitive dissonance, and a decrease in positive affective attachment in the aforementioned scenarios.

The hypothesis for why I think this correlation exists is mostly at the end of here and here.

But more generally, must a criticism of/concern for the EA community come in the form of a prediction? I'm really just trying to point out a hazard for those who go in for Rationalism/Consequentialism. If everyone has avoided it, that's great! But there seems to be evidence that some have failed to avoid it, and that we might want to take further precautions. SBF was very much one of EA's own: his comments therefore merit some EA introspection. I'm just throwing in my two cents.

Regarding actual EAs:
I would be happy to learn that few EAs actually have "one thought too many"! But I do know it's a thing, that some have suffered it (personally I've struggled with it at times, and it's literally in Mill's autobiography). More generally, the ills of adopting a maximizer's mindset too often are well documented (see references in footnotes). I thought it was in the community's interest to raise awareness about it. I'm certainly not trying to demonize anyone: if someone in this community does suffer it, my first suspect would be the culture surrounding (and theory of) Consequentialism, not some particular weakness on the individual's part.

Regarding dry discussion on topics of incredible magnitude:
That's fair. I'm not saying being dry and calculating is always wrong. I'm just saying one should be careful about getting too comfortable with that mindset lest one start slipping into it when one shouldn't. That seems like something rationalists need to be especially mindful of.
