Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: naturalist realism + virtue ethics.
Thinks longtermism rests on a false premise – that for every moral agent and every value-bearing location, the agent's spatio-temporal distance from that location does not factor into the value borne there (e.g. no matter how close or far a person is from you, that person should matter the same to you).
Thinks we should spend a lot more resources trying to delay HLMI – make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet "luddite" so long as this is understood to describe someone who:
Subscribes to Crocker's Rules: unvarnished critical (but constructive) feedback is welcome.
No worries about the strong response – I misjudged how my words would be interpreted. I'm glad we sorted that out.
Regarding overthinking ethical stuff and SBF:
Unfortunately I fear you've missed my point. First of all, I wasn't really talking about any fraud/negligence that he may have committed. As I said in the 2nd paragraph:
Regarding his ignorance and his intentions, he might be telling the truth. Suppose he is: suppose he never condoned doing sketchy things as a means he could justify by some expected greater good. Where then is the borderline moral nihilism coming from? Note that it's saying "all the right shibboleths" that he spoke of as mere means to an end, not the doing of sketchy things.
My subject was his attitude/comments towards ethics. Second, my diagnosis was not that:
SBF's problems with ethics came from careful debate in business ethics and then missing a decimal point in the relevant calculations.
My point was that getting too comfortable approaching ethics as a careful calculation can be dangerous in the first place – no matter how accurate the calculation is. It's not about missing some decimal points. Please reread this section if you're interested. I updated the end of it with a reference to a clear, falsifiable claim.
Fair enough!
But also: if the EA community will only correct the flaws in itself that it can measure, then... good luck. That seems short-sighted to me.
I may not have the data to back up my hypothesis, but it's also not as if I pulled this out of thin air. And I'm not the first to find this hypothesis plausible.
I claim that there is a healthy amount of moral calculation one should do, but that doing too much of it has harmful side-effects. I claim, for these reasons, that Consequentialism (and the culture surrounding it) tends to result in the abuse of moral calculation more than VE does. I don't expect abuse to arise in the majority of people who engage with/follow Consequentialism – just in more of them than among those who engage with/follow VE. I also claim, for reasons given at the end of this section, that abuse will be more prevalent among those who engage with rationalism than among those who don't.
If I'm right about this flaw in the community culture around here, and this flaw in any way contributed to SBF talking the way he did, shouldn't the community consider taking some steps to curb that problematic tendency?
It seems like your model assumes most people start with a sort of organic, healthy, gut-level caring and sense of fellow-feeling, which moral calculation tends to distort.
Moral calculation (and faking it 'til you make it) can be helpful in becoming more virtuous, but only to a limited extent – you can push it too far. And anyway, it's not the only way to become a better person. I think what I mentioned at the end of my post is more helpful:
Encourage your friends to call out your vices. (In turn, steer your friends away from vice and try to be a good role model for the impressionable.) Engage with good books, movies, plays, etc. Virtue ethicists note that art has a great potential for exercising and training moral awareness...
If you want to see how the psych literature bears on a related topic (romantic relationships instead of ethics in general), see Eva Illouz's Why Love Hurts: A Sociological Explanation (2012), Chapter 3. Search for the heading "The New Architecture of Romantic Choice or the Disorganization of the Will" (p. 90 in my edition) if you want to skip right to it. You might be able to read the entire section through the Google Books preview. I recommend the book, though, if you're interested.
...outsourcing our charity evaluations to specialists. I don’t have to decide if bednets or direct donations is better: GiveWell does it for me with their wonderful spreadsheets.
And I don’t have to consider every moment whether deontology or consequentialism is better: the EA movement and my identity as an EA does a lot of that work for me. It also licenses me to defer to habit almost 100% of the time.
These are good things, and you're right to point them out! I certainly don't expect to find that every EA is a walking utility calculator – I expect that to be extremely rare. I also don't expect to find internal moral disharmony in every EA, though I expect it to be much less rare than walking utility calculators.
I just want to add one thing, to be sure everything is clear. I'm glad you see how "a conscious chain of ethical reasoning held in the mind is not what should be motivating our actions" (i.e. we should not be walking utility calculators). But that was just my starting point. Ultimately I want to claim that, whether you're in a "heat of the moment" situation or not, getting too used to applying a calculating maximizer's mindset in realms typically governed by affect can result in the following:
Was that clear? Since it's only now getting clearer to me, I fear it wasn't clear in the post... It seems it needed to go through one more draft!
I meant "strong" relative to "internal moral disharmony." But also, am I to understand that people are reading the label of "schizophrenia" as an accusation? It's a disorder that one gets through no choice of one's own: you can't be blamed for having it. Hypocrisy, as I understand it, is something we have control over and are therefore responsible for avoiding or getting rid of in ourselves.
At most, Stocker is blaming Consequentialism and DE for being moral-schizophrenia-inducing. But it's the theory that's at fault, not the person who suffers it!
Thanks for the suggestion. I ended up going with "internal moral disharmony" since it's innocuous and accurate enough. I think "hypocrisy" is too strong and too narrow: it's a species of internal moral disharmony (closely related to the "extreme case" in Stocker's terms), one which seems to imply no feelings of remorse or frustration with oneself regarding the disharmony. I wanted to focus on the more "moderate case" in which the disharmony is not too strong, one feels a cognitive dissonance, and one attempts to resolve the disharmony so as not to be a hypocrite.
Regarding the term "moral schizophrenia":
As I said to AllAmericanBreakfast, I wholeheartedly agree the term is outdated and inaccurate! Hence the scare quotes and the caveat I put in the heading of the same name. But obviously I underestimated how bad the term was, since everyone is telling me to change it. I'm open to suggestions! EDIT: I replaced it with "internal moral disharmony." Kind of a mouthful, but good enough for a blog post.
Regarding predictions:
You're right, that wasn't a very exact prediction (mostly because internal moral disharmony is going to be hard to measure). Here is a falsifiable claim that I stand by and that, if true, would be evidence of internal moral disharmony:
I claim that one's level of engagement with the LW/EA rationalist community can weakly predict the degree to which one adopts a maximizer's mindset when confronted with moral/normative scenarios in life, the degree to which one suffers cognitive dissonance in such scenarios, and the degree to which one expresses positive affective attachment to one's decision (or the object at the center of one's decision) in such scenarios.
More specifically, I predict that, above a certain threshold of engagement with the community, increased engagement with the LW/EA community correlates with an increase in the maximizer's mindset, an increase in cognitive dissonance, and a decrease in positive affective attachment in the aforementioned scenarios.
The hypothesis for why I think this correlation exists is mostly at the end of here and here.
But more generally, must a criticism of/concern for the EA community come in the form of a prediction? I'm really just trying to point out a hazard for those who go in for Rationalism/Consequentialism. If everyone has avoided it, that's great! But there seems to be evidence that some have failed to avoid it, and so we might want to take further precautions. SBF was very much one of EA's own: his comments therefore merit some EA introspection. I'm just throwing in my two cents.
Regarding actual EAs:
I would be happy to learn that few EAs actually have thoughts too many! But I do know it's a thing and that some have suffered it (personally I've struggled with it at times, and it's literally in Mill's autobiography). More generally, the ills of adopting a maximizer's mindset too often are well documented (see the references in the footnotes). I thought it was in the community's interest to raise awareness about it. I'm certainly not trying to demonize anyone: if someone in this community does suffer it, my first suspect would be Consequentialism (the theory and the culture surrounding it), not some particular weakness on the individual's part.
Regarding dry discussion on topics of incredible magnitude:
That's fair. I'm not saying being dry and calculating is always wrong. I'm just saying one should be careful about getting too comfortable with that mindset lest one start slipping into it when one shouldn't. That seems like something rationalists need to be especially mindful of.
Thank you very much for your perspective! I recently wrote about something closely related to this "emotions problem" but hadn't considered how the EA community offers a home for neurodivergent folks. I have now added a disclaimer making sure we "normies" remember to keep you in mind!