60 karma · Joined Oct 2022 · www.ctrout.art/


Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: neo-Aristotelian naturalist realism + virtue ethics.

Unvarnished critical (but constructive) feedback is welcome.


[Out-of-date-but-still-sorta-representative-of-my-thoughts hot takes below]

Thinks longtermism rests on a false premise – some sort of total impartiality

Thinks we should spend a lot more resources trying to delay HLMI – make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet "luddite" so long as this is understood to describe someone who:

  • suspects that on net, technological progress yields diminishing returns in human flourishing.
  • OR believes workers have a right to organize to defend their interests (you know – what the original Luddites were doing). Fighting to uphold higher working standards is to be on the front lines fighting against Moloch (see e.g. Fleming's vanishing economy dilemma and how decreased working hours offers a simple solution).
  • OR suspects that, with regard to AI, the Luddite fallacy may not be a fallacy: AI really could lead to widespread permanent technological unemployment, and that might not be a good thing.
  • OR, considering the common-sensey thought that societies have a maximum rate of adaptation, suspects excessive rates of technological change can lead to harms, independent of how the technology is used. (This thought is more speculative/less researched – would love to hear evidence for or against.)


Updated research on the Easterlin Paradox here. Free working draft here. Nice audio/visual overview from one of the authors here. Good discussion on the EA forum here.

Thank you very much for your perspective! I recently wrote about something closely related to this "emotions problem" but hadn't considered how the EA community offered a home for neurodivergent folks. I have now added a disclaimer making sure we 'normies' remember to keep you in mind!

No worries about the strong response – I misjudged how my words would be interpreted. I'm glad we sorted that out.

Regarding overthinking ethical stuff and SBF: 
Unfortunately I fear you've missed my point. First of all, I wasn't really talking about any fraud/negligence that he may have committed. As I said in the 2nd paragraph:

Regarding his ignorance and his intentions, he might be telling the truth. Suppose he is: suppose he never condoned doing sketchy things as a means he could justify by some expected greater good. Where then is the borderline moral nihilism coming from? Note that it's saying "all the right shibboleths" that he spoke of as mere means to an end, not the doing of sketchy things. 

My subject was his attitude/comments towards ethics. Second, my diagnosis was not that:

SBF's problems with ethics came from careful debate in business ethics and then missing a decimal point in the relevant calculations.

My point was that getting too comfortable approaching ethics like a careful calculation can be dangerous in the first place – no matter how accurate the calculation is. It's not about missing some decimal points. Please reread this section if you're interested. I updated the end of it with a reference to a clear falsifiable claim.

Fair enough!

But also: if the EA community will only correct the flaws in itself that it can measure then... good luck. Seems short-sighted to me.

I may not have the data to back up my hypothesis, but it's also not as if I pulled this out of thin air. And I'm not the first to find this hypothesis plausible.

I claim that there is a healthy amount of moral calculation one should do, but doing too much of it has harmful side-effects. I claim, for these reasons, that Consequentialism (and the culture surrounding it) tends to result in abuse of moral calculation more so than VE. I don't expect abuse to arise in the majority of people who engage with/follow Consequentialism or something – just more than among those who engage with/follow VE. I also claim, for reasons at the end of this section, that abuse will be more prevalent among those who engage with rationalism than those who don't.

If I'm right about this flaw in the community culture around here, and this flaw in any way contributed to SBF talking the way he did, shouldn't the community consider taking some steps to curb that problematic tendency?

It seems like your model is that you assume most people start with a sort of organic, healthy gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.

Moral calculation (and faking it 'til you make it) can be helpful in becoming more virtuous, but to a limited extent – you can push it too far. And anyway, it's not the only way to become a better person. I think more helpful is what I mentioned at the end of my post:

Encourage your friends to call out your vices. (In turn, steer your friends away from vice and try to be a good role model for the impressionable). Engage with good books, movies, plays etc. Virtue ethicists note that art has a great potential for exercising and training moral awareness...

If you want to see how the psych literature intersects with a related topic (romantic relationships instead of ethics in general), see Eva Illouz's Why Love Hurts: A Sociological Explanation (2012), Chapter 3. Search for the heading "The New Architecture of Romantic Choice or the Disorganization of the Will" (p. 90 in my edition) if you want to skip right to it. You might be able to read the entire section through Google Books preview? I recommend the book though, if you're interested.

Yikes! Thank you for letting me know! Clearly a very poor choice of words: that was not at all my intent!

To be clear, I agree with EAs on many many issues. I just fear they suffer from "overthinking ethical stuff too often" if you will.

...outsourcing our charity evaluations to specialists. I don’t have to decide if bednets or direct donations is better: GiveWell does it for me with their wonderful spreadsheets.

And I don’t have to consider every moment whether deontology or consequentialism is better: the EA movement and my identity as an EA does a lot of that work for me. It also licenses me to defer to habit almost 100% of the time.

These are good things, and you're right to point them out! I certainly don't expect to find that every EA is a walking utility calculator – I expect that to be extremely rare. I also don't expect to find internal moral disharmony in every EA, though I expect it to be much less rare than walking utility calculators.

I just want to add one thing, just to be sure everything is clear. I'm glad you see how "a conscious chain of ethical reasoning held in the mind is not what should be motivating our actions" (i.e. we should not be walking utility calculators). But that was just my starting point. Ultimately I want to claim that, whether you're in a "heat of the moment" situation or not, getting too used to applying a calculating maximizer's mindset in realms typically governed by affect can result in the following:

  1. Worst case extreme scenario: you become a walking utility calculator, and are perfectly at peace with yourself about being one. You could be accused of being cold, calculating, uncaring.
  2. More likely scenario: you start adopting a calculating maximizer's mindset when you shouldn't (e.g. when trying to decide whether to go see a sick friend or not) even though you know you shouldn't, or you didn't mean to adopt that mindset. You could be accused of being inadvertently cold and calculating – someone who, sadly, tends to overthink things.
    1. In such situations, because you've adopted that mindset, you will dampen your positive affective attachment to the decision you make (or the object at the center of that decision), even though you started with strong affect toward that decision/object. E.g. when you first heard your friend was in the hospital, you got a pit in your stomach, but it eventually wore away as you evaluated the pros and cons of going to see them or doing something else with your time (as you began comparing friends maybe, to decide who to spend time with). Whatever you do end up deciding to do, you feel ambivalent about it.
    2. Any cognitive dissonance you might have (e.g. your internal monologue sounds like this: "Why am I thinking so hard about this? I should have just gone with my gut"), and the struggle to resolve that dissonance, only worsens 2.a.
  3. Either way: in general, considerations that once engendered an emotional response now start leaving you cold (or colder). This in turn can result in:
    1. A more general struggle to motivate oneself to do what one believes one should do.
    2. Seeing ethics as "just a game."

Was that clear? Since it's getting clearer for me, I fear it wasn't clear in the post... It seems it needed to go through one more draft!

I meant strong relative to "internal moral disharmony." But also, am I to understand people are reading the label of "schizophrenia" as an accusation? It's a disorder that one gets through no choice of one's own: you can't be blamed for having it. Hypocrisy, as I understand it, is something we have control over and therefore are responsible for avoiding or getting rid of in ourselves.

At most Stocker is blaming Consequentialism and DE for being moral schizophrenia inducing. But it's the theory that's at fault, not the person who suffers it!

Thanks for the suggestion. I ended up going with "internal moral disharmony" since it's innocuous and accurate enough. I think "hypocrisy" is too strong and too narrow: it's a species of internal moral disharmony (closely related to the "extreme case" in Stocker's terms), one which seems to imply no feelings of remorse or frustration with oneself regarding the disharmony. I wanted to focus on the more "moderate case" in which the disharmony is not too strong, one feels a cognitive dissonance, and one attempts to resolve the disharmony so as not to be a hypocrite.
