

Sorry, I hate it when people comment on something that has already been addressed.

FWIW, though, I had read the paper the day it was posted on the GPI fb page. At that time, I didn't feel like my point about "there is no objective axiology" fit into your discussion.

I feel like even though you discuss views that are "purely deontic" instead of "axiological," some assumptions from the axiology-based framework still underlie your conclusion about how to reason about such views. Specifically, when explaining why a view says it would be wrong to create only Amy but not Bobby, you didn't say anything that engages with the idea that "there is no objective axiology about creating new people/beings."

That said, re-reading the sections you point to, I think it's correct that I'd need to give some kind of answer to your dilemmas, and what I'm advocating for seems most relevant to this paragraph:

5.2.3. Intermediate wide views

Given the defects of permissive and restrictive views, we might seek an intermediate wide view: a wide view that is sometimes permissive and sometimes restrictive. Perhaps (for example) wide views should say that there’s something wrong with creating Amy and then later declining to create Bobby in Two-Shot Non-Identity if and only if you foresee at the time of creating Amy that you will later have the opportunity to create Bobby. Or perhaps our wide view should say that there’s something wrong with creating Amy and then later declining to create Bobby if and only if you intend at the time of creating Amy to later decline to create Bobby.

At the very least, I owe you an explanation of what I would say here.

I would indeed advocate for what you call the "intermediate wide view," but I'd motivate this view a bit differently.

All else equal, IMO, the problem with creating Amy and then not creating Bobby is that these specific choices, in combination, and assuming it would have been low-effort to choose differently (i.e., to create Bobby instead), indicate that you didn't consider the interests of possible people/beings even to a minimum degree. Considering them to a minimum degree would mean being willing to at least take low-effort actions to ensure your choices aren't objectionable from their perspective (the perspective of possible people/beings). Adding someone at +1 when you could've easily added someone else at +100 just seems careless. If Amy and Bobby sat behind a veil of ignorance, not knowing which of them would be created at +1 or +100 (if anyone gets created at all), the one view they would never advocate for is "only create the +1 person." If they favor anti-natalist views, they'd advocate for creating no one. If they favor totalist views, they'd advocate for creating both. If one favors anti-natalism and the other favors totalism, they might compromise on creating only the +100 person. So most options here really are defensible, but you don't want to do the one thing that shows you weren't trying at all.

So, it would be bad to only create the +1 person, but it's not "99 units bad" in some objective sense. It's not always the dominant concern, and it seems less problematic if we dial up the degree of effort needed to choose differently, or if there are externalities like "by creating Amy at +1 instead of Bobby at +100, you create a lot of value for existing people." I don't remember if it was Parfit or Singer who first gave the example of delaying pregnancy for a short number of days (or maybe it was three months?) to avoid your future child suffering from a serious illness. There, it seems objectionable not to wait mainly because of how easy it would be to wait. (Quite a few people, when trying to have children, try for years, so a few months is not that significant.)

So, if you're 20 and contemplating having a child at happiness level 1, knowing that 15 years later they'll invent embryo-selection therapy that makes new babies happier and guarantees happiness level 100, having the child at 20 is a little selfish, but it's not like "wait 15 years," when you really want a child, is a low-effort accommodation. (Also, I personally think having children is under pretty much all circumstances "a little selfish," at least in the sense of "you could spend your resources on EA instead." But that's okay. Lots of things people choose are a bit selfish.) I think it would be commendable to wait, but not mandatory. (And as Michael St. Jules points out, not waiting is the issue here; once that's happened, it's done, and when you contemplate having a second child 15 years later, it's a new decision and it no longer matters what you did earlier.)

And although intentions are often relevant to questions of blameworthiness, I’m doubtful whether they are ever relevant to questions of permissibility. Certainly, it would be a surprising downside of wide views if they were committed to that controversial claim.

The intentions are relevant here in the sense of: You should always act with the intention of at least taking low-effort ways to consider the interests of possible people/beings. It's morally frivolous if someone has children on a whim, especially if that leads to them making worse choices for these children than they could otherwise have easily made. But it's okay if the well-being of their future children was at least an important factor in their decision, even if it wasn't the decisive factor. Basically, "if you bring a child into existence and it's not the happiest child you could have, you better have a good reason for why you did things that way, but it's conceivable for there to be good reasons, and then it's okay."

I feel like you're trying to conflate "wrong or heartless" (or "heartless-and-prejudiced," as I called it elsewhere) with "socially provocative" or "causes outrage to a subset of readers."

That feels like misdirection.

I see two different issues here:

(1) Are some ideas that cause social backlash still valuable?

(2) Are some ideas shitty and worth condemning?

My answer is yes to both.

When someone expresses a view that belongs in (2), pointing at the existence of (1) isn't a good defense.

You may be saying that we should be humble and can't tell the difference, but I think we can. Moral relativism sucks.

FWIW, if I thought we couldn't tell the difference, it wouldn't be obvious to me that we should go for "condemn pretty much nothing" rather than "condemn everything that causes controversy." Both extremes seem equally bad.

I see that you're not quite advocating for "condemn nothing" because you write this bit:

perhaps with some caveats (e.g. that they are the sort of view that a person might honestly come by, as opposed to something invented simply maliciously.)

It depends on what you mean exactly, but I think this may not go far enough. Some people don't cult-founder-style invent new beliefs with an ulterior motive (like making money), but the beliefs they "honestly" come to may still be hateful and prejudiced. Also, some people might be aware that there's a lot of misanthropy and wanting-to-feel-superior in their thinking, but manipulatively pretend to be interested only in "truth-seeking," especially when talking to impressionable members of the rationality community, where you get lots of social credit for signalling truth-seeking virtues.

To get to the heart of things, do you think Hanania's views are no worse than the examples you give? If so, I would expect people to say that he's not actually racist.

However, if they are worse, then I'd say let's drop the cultural relativism and condemn them.

It seems to me that there's no disagreement among people familiar with Hanania that his views were worse in the past. That's a red flag. Some people say he's changed his views. I'm not against giving people second chances per se, but it seems suspicious to me that someone who admits to having had really shitty racist views in the past now continues to focus on issues where they – even according to other discussion participants here who defend him – still seem racist. Like, why isn't he trying to educate people on how not to fall victim to a hateful ideology, since he has personal experience with that? It's hard to come away with "ah, now the motivation is compassion and wanting the best for everyone, when previously it was something dark." (I'm not saying such changes of heart are impossible, but I don't view it as likely, given what other commenters are saying.)

Anyway, to comment on your examples:

Singer faced most of the heat for his views on preimplantation diagnostics and disability before EA became a movement. Still, I'd bet that, if EAs had been around back then, many EAs, and especially the ones I most admire and agree with, would've come to his defense.

I just skimmed that eugenics article you link to and it seems fine to me, or even good. Also, most of the pushback there from EA forum participants is about the strategy of still using the word "eugenics" instead of using a different word, so many people don't seem to disagree much with the substance of the article.

In Bostrom's case, I don't think anyone thinks that Bostrom's comments from long ago were a good thing, but there's a difference between them being awkward and tone-deaf, vs them being hateful or hate-inspired. (And it's more forgivable for people to be awkward and tone-deaf when they're young.)

Lastly, on Scott Alexander's example, whether intelligence differences are at least partly genetic is an empirical question, not a moral one. It might well be influenced by someone having hateful moral views, so it matters where a person's interest in that sort of issue is coming from. Does it come from a place of hate or wanting to seem superior, or does it come from a desire for truth-seeking and believing that knowing what's the case makes it easier to help? (And: Does the person make any actual efforts to help disadvantaged groups?) As Scott Alexander points out himself:

Somebody who believes that Mexicans are more criminal than white people might just be collecting crime stats, but we’re suspicious that they might use this to justify an irrational hatred toward Mexicans and desire to discriminate against them. So it’s potentially racist, regardless of whether you attribute it to genetics or culture.

So, all these examples (I think Zach Davis's writing is more "rationality community" than EA, and I'm not really familiar with it, so I won't comment on it) seem fine to me. 

When I said,

None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it).

This wasn't about, "Can we find some random people (who we otherwise wouldn't listen to when it comes to other topics) who will be outraged."

Instead, I meant that we can look at people's views at the object level and decide whether they're coming from a place of compassion for everyone and equal consideration of interests, or whether they're coming from a darker place.

And someone can have wrong views that aren't hateful:

Many of my extended family members consider the idea that abortion is permissible to be hateful and wrong. I consider their views, in addition to many of their other religious views, to be hateful and wrong.

I'm not sure if you're using "hateful" here as a weird synonym to "wrong," or whether your extended relatives have similarities to the Westboro Baptist Church.

Normally, I think of people who favor abortion bans as merely misguided (they're often literally misguided about empirical questions, or they seem unable to move away from rigid-category thinking and fail to see the need for different reasoning about non-typical examples/edge cases).

When I speak of "hateful," it's something more. I then mean that the ideology has an affinity for appealing to people's darker motivations. I think ideologies like that are properly dangerous, as we've seen historically. (And it applies to, e.g., Communism just as well as to racism.)

I agree with you that conferences do very little "vetting" (and I find this okay), but I think the little vetting that they do and should do includes "don't bring in people who are mouthpieces for ideologies that appeal to people's dark instincts." (And also things like "don't bring in people who are known to cause harm to others," whether through sexually predatory behavior or a tendency to form mini-cults around themselves.)


If even some of the people defending this person start with "yes, he's pretty racist," that makes me think David Mathers is totally right.

Regarding cata's comment:

But I think that the modern idea that it's good policy to "shun" people who express wrong (or heartless, or whatever) views is totally wrong, and is especially inappropriate for EA in practice, the impact of which has largely been due to unusual people with unusual views.

Why move from "wrong or heartless" to "unusual people with unusual views"? None of the people who were important to EA historically have had hateful or heartless-and-prejudiced views (or, if someone had them secretly, at least they didn't openly express it). It would also be directly opposed to EA core principles (compassion, equal consideration of interests).

Whether someone speaks at Manifest (or is on a blogroll, or whatever) should be about whether they are going to give an interesting talk to Manifest, not because of their general moral character.

I think sufficiently shitty character should be disqualifying. I agree with you insofar as, if someone has ideas that seem worth discussing, I can imagine a stance of "we're talking to this person in a moderated setting to hear their ideas," but I'd importantly caveat it by making sure to also expose their shittiness. In other words, I think platforming a person who promotes a dangerous ideology (or, to give a different example, someone with a tendency to form mini-cults around them that predictably harm some of the people they come into contact with) isn't necessarily wrong, but it comes with a specific responsibility. What would be wrong is implicitly conveying that the person you're platforming is vetted/normal/harmless when they actually seem dangerous. If someone seems dangerous, make sure that, if you do decide to platform them (presumably because you think they also have some good/important things to say), others won't come away with the impression that you don't think they're dangerous.

We can't use the argument that it is better from an impartial view to focus on existing-and-sure-to-exist people/beings because of the classic 'future could be super-long' argument.

I'd say the two are tied contenders for "what's best from an impartial view." 

I believe the impartial view is under-defined in cases of population ethics, and both of these views are defensible in the sense that some morally motivated people would continue to endorse them even under an idealized reflection procedure.

For fixed population contexts, the "impartial stance" is arguably better defined and we want equal consideration of [existing] interests, which gives us some form of preference utilitarianism. However, once we go beyond the fixed-population context, I think it's just not clear how to extend those principles, and Narveson's slogan isn't necessarily a worse justification than "the future could be super-long/big."

The parent comment here explains ambitious morality vs minimal morality.

My post also makes some other points, such as giving new inspiration to person-affecting views.

For a summary of that, see here.

In my post Population Ethics Without [An Objective] Axiology, I argued that person-affecting views are underappreciated among effective altruists.

Here’s my best attempt at a short version of my argument:

  • The standard critiques of person-affecting views are right in pointing out how person-affecting views don’t give satisfying answers to “what’s best for possible people/beings.”
  • However, they are wrong in thinking that this is a problem.
  • It’s only within the axiology-focused approach (common in EA and utilitarian-tradition academic philosophy) that a theory of population ethics must tell us what’s best for both possible people/beings and for existing (or sure-to-exist) people/beings simultaneously.
  • Instead, I think it’s okay for EAs who find Narveson’s slogan compelling to reason as follows:
    (1) I care primarily about what’s best for existing (and sure-to-exist) people/beings.
    (2) When it comes to creating or not creating people/beings whose existence depends on my actions, all I care about is following some minimal notion of “don’t be a jerk.” That is, I wouldn’t want to do anything that disregards the interests of such possible people/beings according to all plausible axiological accounts, but I’m okay with otherwise just not focusing on possible people/beings all that much.
  • We can think of this stance as analogous to: 
    • The utilitarian parent: “I care primarily about doing what’s best for humanity at large, but I wouldn’t want to neglect my children to such a strong degree that all defensible notions of how to be a decent parent state that I fucked up.”
  • Just like the utilitarian parent has to choose between two separate values (their own children vs humanity at large), the person with person-affecting life goals has to choose between two values as well (existing-and-sure-to-exist people/beings vs possible people/beings).
    • The person with person-affecting life goals: “I care primarily about doing what’s best for existing and sure-to-exist people/beings, but I wouldn’t want to neglect the interests of possible people/beings to such a strong degree that all defensible notions of how to be a decent person towards them state that I fucked up.” 
  • Note that it's not like only advocates of person-affecting morality have to make such a choice. Analogously: 
    • The person with totalist/strong longtermist life goals: “I care primarily about doing what’s best according to my totalist axiology (i.e., future generations whose existence is optional), but I wouldn’t want to neglect the interests of existing people to such a strong degree that all defensible notions of how to be a decent person towards them state that I fucked up.”
  • Anyway, for the person with person-affecting life goals, when it comes to cases like whether it's permissible for them to create individual new people, or bundles of people (one at welfare level 100, the other at 1), or similar cases spread out over time, etc., it seems okay that there isn't a single theory that fulfills both of the following conditions: 
    (1) The theory has the 'person-affecting' properties (e.g., it is the sort of theory that people who find Narveson's slogan compelling would want).
    (2) The theory gives us precise, coherent, non-contradictory guidelines on what's best for newly created people/beings. 
  • Instead, I'd say what we want is to drop (2), and come up with an alternative theory that fulfills only (1) and (3):
    (1) The theory has the 'person-affecting' properties (e.g., it is the sort of theory that people who find Narveson's slogan compelling would want).
    (3) The theory contains some minimal guidelines of the form "don't be a jerk" that tell us what NOT to do when it comes to creating new people/beings. The things it allows us to do are acceptable, even though it's true that someone who cares maximally about possible people/beings on a specific axiological notion of caring [but remember that there's no universally compelling solution here!] could have done "better". (I put "better" in quotation marks because it's not better in an objectivist moral realist way, just "better" in a sense where we introduce a premise that our actions' effects on possible people/beings are super important.)

What I'm envisioning under (3) is quite similar to how common-sense morality thinks about the ethics of having children. IMO, common-sense morality would say that: 

  • People are free to decide against becoming parents.
  • People who become parents are responsible towards their children. 
  • It's not okay to have a child and then completely abandon them, or to decide to have an unhappy child if you could've chosen a happier child at basically no cost.
  • If the parents can handle it, it's okay for parents to have 8+ children, even if this lowers the resources available per child.
  • The responsibility towards one's children isn't absolute (e.g., if the children are okay, parents aren't prohibited from donating to charity even though the money could further support their children).

The point being: The ethics of having children is more about "here's how not to do it" rather than "here's the only acceptable best way to do it."


The longer version of the argument is in my post. My view there relies on a few important premises:

  • Moral anti-realism
  • Adopting a different ethical ontology from “something has intrinsic value”

I can say a bit more about these here.

As I write in the post: 

I see the axiology-focused approach, the view that “something has intrinsic value,” as an assumption in people’s ethical ontology.

The way I’m using it here, someone’s “ontology” consists of the concepts they use for thinking about a domain – how they conceptualize their option space. By proposing a framework for population ethics, I’m (implicitly) offering answers to questions like “What are we trying to figure out?”, “What makes for a good solution?” and “What are the concepts we want to use to reason successfully about this domain?”

Discussions about changing one’s reasoning framework can be challenging because people are accustomed to hearing object-level arguments and interpreting them within their preferred ontology.

For instance, when first encountering utilitarianism, someone who thinks about ethics primarily in terms of “there are fundamental rights; ethics is about the particular content of those rights” would be turned off. Utilitarianism doesn’t respect “fundamental rights,” so it’ll seem crazy to them. However, asking, “How does utilitarianism address the all-important issue of [concept that doesn’t exist within the utilitarian ontology]” begs the question. To give utilitarianism a fair hearing, someone with a rights-based ontology would have to ponder a more nuanced set of questions.

So, let it be noted that I’m arguing for a change to our reasoning frameworks. To get the most out of this post, I encourage readers with the “axiology-focused” ontology to try to fully inhabit[8] my alternative framework, even if that initially means reasoning in a way that could seem strange.

To get a better sense of what I mean by the framework that I'm arguing against, see here:

Before explaining what’s different about my proposal, I’ll describe what I understand to be the standard approach it seeks to replace, which I call “axiology-focused.”


The axiology-focused approach goes as follows. First, there’s the search for an axiology, a theory of (intrinsic) value. (E.g., the axiology may state that good experiences are what’s valuable.) Then, there’s further discussion on whether ethics contains other independent parts or whether everything derives from that axiology. For instance, a consequentialist may frame their disagreement with deontology as follows. “Consequentialism is the view that making the world a better place is all that matters, while deontologists think that other things (e.g., rights, duties) matter more.” Similarly, someone could frame population-ethical disagreements as follows. “Some philosophers think that all that matters is more value in the world and less disvalue (“totalism”). Others hold that further considerations also matter – for instance, it seems odd to compare someone’s existence to never having been born, so we can discuss what it means to benefit a person in such contexts.”

In both examples, the discussion takes for granted that there’s something that’s valuable in itself. The still-open questions come afterward, after “here’s what’s valuable.”

In my view, the axiology-focused approach prematurely directs moral discourse toward particular answers. I want to outline what it could look like to “do population ethics” without an objective axiology or the assumption that “something has intrinsic value.”

To be clear, there’s a loose, subjective meaning of “axiology” where anyone who takes systematizing stances[1] on moral issues implicitly “has an axiology.” This subjective sense isn’t what I’m arguing against. Instead, I’m arguing against the stronger claim that there exists a “true theory of value” based on which some things are “objectively good” (good regardless of circumstance, independently of people’s interests/goals).[2]

(This doesn’t leave me with “anything goes.” In my sequence on moral anti-realism, I argued that rejecting moral realism doesn’t deserve any of the connotations people typically associate with “nihilism.” See also the endnote that follows this sentence.[3])

Note also that when I criticize the concept of “intrinsic value,” this isn’t about whether good things can outweigh bad things. Within my framework, one can still express beliefs like “specific states of the world are worthy of taking serious effort (and even risks, if necessary) to bring about.” Instead, I’m arguing against the idea that good things are good because of “intrinsic value.”

So, the above quote described the framework I want to push back against.

The alternative ethical ontology I’m proposing is 'anti-realist' in the sense of: There’s no such thing as “intrinsic value.”

Instead, I view ethics as being largely about interests/goals. 

From that "ethics is about interests/goals" perspective, population ethics seems clearly under-defined. First off, it's under-defined how many new people/beings there will be (with interests and goals). And secondly, it's under-defined which interests/goals new people/beings will have. (This depends on who you choose to create!)

With these building blocks, I can now sketch the summary of my overall population-ethical reasoning framework (this summary is copied from my post but lightly adapted):

  • Ethics is about interests/goals.
  • Nothing is intrinsically valuable, but various things can be conditionally valuable if grounded in someone’s interests/goals.
  • The rule “focus on interests/goals” has comparatively clear implications in fixed population contexts. The minimal morality of “don’t be a jerk” means we shouldn’t violate others’ interests/goals (and perhaps even help them where it’s easy and our comparative advantage). The ambitious morality of “do the most moral/altruistic thing” coincides with something like preference utilitarianism.
  • On creating new people/beings, “focus on interests/goals” no longer gives unambiguous results:[4]
    • The number of interests/goals isn’t fixed
    • The types of interests/goals aren’t fixed
  • This leaves population ethics under-defined with two different perspectives: that of existing or sure-to-exist people/beings (what they want from the future) and that of possible people/beings (what they want from their potential creators).
  • Without an objective axiology, any attempt to unify these perspectives involves subjective judgment calls. (In other words: It likely won't be possible to unify these perspectives in a way that's satisfying to everyone.)
  • People with the motivation to dedicate (some of) their life to “doing the most moral/altruistic thing” will want clear guidance on what to do/pursue. To get this, they must adopt personal (but defensible), population-ethically-complete specifications of the target concept of “doing the most moral/altruistic thing.” (Or they could incorporate a compromise, as in a moral parliament between different plausible specifications.) 
  • Just like the concept “athletic fitness” has several defensible interpretations (e.g., the difference between a 100m sprinter and a marathon runner), so (I argue) does “doing the most moral/altruistic thing.”
  • In particular, there’s a tradeoff where cashing out this target concept primarily according to the perspective of other existing people leaves less room for altruism on the second perspective (that of newly created people/beings) and vice versa.
  • Accordingly, people can think of “population ethics” in several different (equally defensible)[5] ways:
    • Subjectivist person-affecting views: I pay attention to creating new people/beings only to the minimal degree of “don’t be a jerk” while focusing my caring budget on helping existing (and sure-to-exist) people/beings.
    • Subjectivist totalism: I count appeals from possible people/beings just as much as existing (or sure-to-exist) people/beings. On the question “Which appeals do I prioritize?” my view is, “Ones that see themselves as benefiting from being given a happy existence.”
    • Subjectivist anti-natalism: I count appeals from possible people/beings just as much as existing (or sure-to-exist) people/beings. On the question “Which appeals do I prioritize?” my view is, “Ones that don’t mind non-existence but care to avoid a negative existence.”
  • The above descriptions (non-exhaustively) represent “morality-inspired” views about what to do with the future. The minimal morality of “don’t be a jerk” still applies to each perspective and recommends cooperating with those who endorse different specifications of ambitious morality.
  • One arguably interesting feature of my framework is that it makes standard objections against person-affecting views no longer seem (as) problematic. A common opinion among effective altruists is that person-affecting views are difficult to make work.[6] In particular, the objection is that they give unacceptable answers to “What’s best for new people/beings.”[7] My framework highlights that maybe person-affecting views aren’t meant to answer that question. Instead, I’d argue that someone with a person-affecting view has answered a relevant earlier question so that “What’s best for new people/beings” no longer holds priority. Specifically, to the question “What’s the most moral/altruistic thing?,” they answered “Benefitting existing (or sure-to-exist) people/beings.” In that light, under-definedness around creating new people/beings is to be expected – it’s what happens when there’s a tradeoff between two possible values (here: the perspective of existing/sure-to-exist people and that of possible people) and someone decides that one option matters more than the other.

So is it basically saying that many people follow different types of utilitarianism (I'm assuming this means the "ambitious moralities")

Yes to this part. ("Many people" maybe not in the world at large, but especially in EA circles where people try to orient their lives around altruism.)

Also, I'm here speaking of "utilitarianism as a personal goal" rather than "utilitarianism as the single true morality that everyone has to adopt." 

This distinction is important. Usually, when people speak about utilitarianism, or when they write criticisms of utilitarianism, they assume that utilitarians believe that everyone ought to be a utilitarian, and that utilitarianism is the answer to all questions in morality. By contrast, "utilitarianism as a personal morality" is just saying "Personally, I want to devote my life to making the world a better place according to the axiology behind my utilitarianism, but it's a separate question how I relate to other people who pursue different goals in their life."

And this is where minimal morality comes in: Minimal morality is answering that separate question with "I will respect other people's life goals." 

So, minimal morality is a separate thing from ambitious morality. (I guess the naming is unfortunate here since it sounds like ambitious morality is just "more on top of" minimal morality. Instead, I see them as separate things. The reason I named them the way I did is that minimal morality is relevant to everyone as a constraint on how not to go through life as a jerk, while ambitious morality is something only a handful of particularly morally motivated people are interested in. Of course, per Singer's drowning child argument, maybe more people should be interested in ambitious morality than is the case.)

but judging which one is better is quite negligible since all the types usually share important moral similarities (I'm assuming this means "minimal morality")?

Not exactly.

"Judging which one is better" isn't necessarily negligible, but it's a personal choice, meaning there's no uniquely compelling answer that will appeal to everyone.

You may ask "Why do people even endorse a well-specified axiology at all if they know it won't be convincing to everyone? Why not just go with the 'minimum core of morality' that everyone endorses, even if this were to leave lots of things vague and under-defined?"

I've written a dialogue about this in a previous post:

Critic: Why would moral anti-realists bother to form well-specified moral views? If they know that their motivation to act morally points in an arbitrary direction, shouldn’t they remain indifferent about the more contested aspects of morality? It seems that it’s part of the meaning of “morality” that this sort of arbitrariness shouldn’t happen.

Me: Empirically, many anti-realists do bother to form well-specified moral views. We see many examples among effective altruists who self-identify as moral anti-realists. That seems to be what people’s motivation often does in these circumstances.

Critic: Point taken, but I’m saying maybe they shouldn’t? At the very least, I don’t understand why they do it.

Me: You said that it’s “part of the meaning of morality” that arbitrariness “shouldn’t happen.” That captures the way moral non-naturalists think of morality. But in the moral naturalism picture, it seems perfectly coherent to consider that morality might be under-defined (or “indefinable”). If there are several defensible ways to systematize a target concept like “altruism/doing good impartially,” you can be indifferent between all those ways or favor one of them. Both options seem possible.

Critic: I understand being indifferent in the light of indefinability. If the true morality is under-defined, so be it. That part seems clear. What I don’t understand is favoring one of the options. Can you explain to me the thinking of someone who self-identifies as a moral anti-realist yet has moral convictions in domains where they think that other philosophically sophisticated reasoners won’t come to share them?

Me: I suspect that your beliefs about morality are too primed by moral realist ways of thinking. If you internalized moral anti-realism more, your intuitions about how morality needs to function could change.

Consider the concept of “athletic fitness.” Suppose many people grew up with a deep-seated need to study it to become ideally athletically fit. At some point in their studies, they discover that there are multiple options to cash out athletic fitness, e.g., the difference between marathon running vs. 100m-sprints. They may feel drawn to one of those options, or they may be indifferent.

Likewise, imagine that you became interested in moral philosophy after reading some moral arguments, such as Singer’s drowning child argument in Famine, Affluence and Morality. You developed the motivation to act morally as it became clear to you that, e.g., spending money on poverty reduction ranks “morally better” (in a sense that you care about) than spending money on a luxury watch. You continue to study morality. You become interested in contested subdomains of morality, like theories of well-being or population ethics. You experience some inner pressure to form opinions in those areas because when you think about various options and their implications, your mind goes, “Wow, these considerations matter.” As you learn more about metaethics and the option space for how to reason about morality, you begin to think that moral anti-realism is most likely true. In other words, you come to believe that there are likely different systematizations of “altruism/doing good impartially” that individual philosophically sophisticated reasoners will deem defensible. At this point, there are two options for how you might feel: either you’ll be undecided between theories, or you find that a specific moral view deeply appeals to you.

In the story I just described, your motivation to act morally comes from things that are very “emotionally and epistemically close” to you, such as the features of Peter Singer’s drowning child argument. Your moral motivation doesn’t come from conceptual analysis about “morality” as an irreducibly normative concept. (Some people do think that way, but this isn’t the story here!) It also doesn’t come from wanting other philosophical reasoners to necessarily share your motivation. Because we’re discussing a naturalist picture of morality, morality tangibly connects to your motivations. You want to act morally not “because it’s moral,” but because it relates to concrete things like helping people, etc. Once you find yourself with a moral conviction about something tangible, you don’t care whether others would form it as well.

I mean, you would care if you thought others not sharing your particular conviction was evidence that you’re making a mistake. If moral realism was true, it would be evidence of that. However, if anti-realism is indeed correct, then it wouldn’t have to weaken your conviction.

Critic: Why do some people form convictions and not others?

Me: It no longer feels like a choice when you see the option space clearly. You either find yourself having strong opinions on what to value (or how to morally reason), or you don’t.

So, some people may feel too uncertain to choose right away, while others will be drawn to a particular personal/subjective answer to "What utilitarian axiology do I want to use as my target criterion for making the world a better place?"

Different types of utilitarianism can give quite opposing recommendations for how to act, so I wouldn't say the similarities are insignificant or that there's no reason to pay attention to there being differences.

However, I think people's attitudes to their personal moral views should be different if they see their moral views as subjective/personal, as opposed to objective/absolutist.

For instance, let's say I favor a tranquilist axiology that's associated with negative utilitarianism. If I thought negative utilitarianism was the single correct moral theory that everyone would adopt if only they were smart and philosophically sophisticated enough, I might think it's okay to destroy the world. However, since I believe that different morally-motivated people can legitimately come to quite different conclusions about how they want to do "the most moral/altruistic thing," there's a sense in which I only use my tranquilist convictions to "cast a vote" in favor of my desired future, but wouldn't unilaterally act on it in ways that are bad for other people's morality.

This is a bit like the difference between Democrats and Republicans in the US. If Democrats thought "being a Democrat" is the right answer to everything and Republicans are wrong in a deep sense, they might be tempted to poison the tea of their Republican neighbor on election day. However, the identities "Democrat" or "Republican" are not all that matters! In addition, people should have an identity of "It's important to follow the overarching process of having a democracy." 

"It's important to follow the overarching process of having a democracy" is here analogous to recognizing the importance of minimal morality.

I realized the same thing and have been thinking about writing a much shorter, simplified account of this way of thinking about population ethics. Unfortunately, I haven't gotten around to writing that.

I think the flowchart in the middle of the post is not a terrible summary to start at, except that it doesn't say anything about what "minimal morality" is in the framework.

Basically, the flowchart shows how there are several defensible "ambitious moralities" (axiological frameworks such as the ones in various types of utilitarianism, which specify how someone scores their actions toward the goal of "doing the most moral/altruistic thing"). Different people might be drawn to different ambitious moralities, or they may remain uncertain or undecided between options.

The reason people pursuing different ambitious moralities don't get into tensions with each other over their differences is that ambitious moralities aren't all that matters. Most people who are moral anti-realists also endorse something like minimal morality (though they may use different terminology!), which is about "not being a jerk." Acting as though your anti-realist ambitious-morality moral views justify overriding everyone else's moral views (or their personal self-oriented goals) would be acting like a jerk.

I don't want to spend too much time on this so won't answer to all points, but I wanted to point you to some examples for this bit about evasiveness by saying things like, "I don't know what this is referring to": 

I'd be interested to hear examples (genuinely)

See the transcript here: the word "referring" occurs 30 times, and at least a couple of those occurrences strike me as the weasel-like suspicious behavior of someone whose approach to answering questions is "never admit to anything unless you learn that they already have the evidence." So, he always answers first with "not sure / don't know what you're referring to / don't remember" and only admits to things when shown evidence. 

This behavior is strikingly abnormal and different from how a person who doesn't have anything to hide would behave. 

(Edit – and again, it seems to me like it's different from autistic literal-mindedness! Literally answering the question would mean combing your memory and answering without regard for what the prosecution is referring to. It would also mean saying confidently "no" if you're sure you never said something.)

Someone trustworthy would answer questions immediately, sometimes admitting to things that the prosecution may not already know.

Some examples:

Q. You also marketed FTX as a safe crypto exchange compared to your competitors, didn’t you?

A. With FTX US I think that may be the case. I am not sure about FTX International.

Q. Did you or did you not market FTX International as safe compared to other crypto exchanges?

A. I don’t specifically remember that. I am not sure.

MS. SASSOON: If we could pull up Government Exhibit 900A. The government offers Government Exhibit 900A.

THE COURT: Hearing no objection, it’s received. (Government Exhibit 900A received in evidence)

MR. COHEN: I thought it was in already.

THE COURT: No harm, no foul.

MS. SASSOON: I believe the full video is in. This is just a screenshot. Mr. Bianco, if you could publish that, please. We can go ahead and take that down.

Q. You publicly described FTX as the most regulated crypto exchange by far, didn’t you?

A. I think that’s right.

Q. And you also acted like you cared about customer protections, right?

A. I think I did care about them, yes.

Q. And you made public statements to that effect, didn’t you?

A. I probably did.

Q. I didn’t hear you.

A. I probably did.

Q. Yes or no, do you recall making statements that you cared about customer protections?

A. Yes.

Q. In fact, over and over again in public forums you described FTX platform as safe, correct?

A. I am not sure specifically what that is referring to. I may have.

Q. Yes or no, do you recall making numerous public statements to the effect that the FTX platform was safe?

A. I recall with respect to FTX US. It may be true with respect to FTX International, but I don’t specifically recall, no.

Q. You were CEO of FTX International, yes?

A. Yes.

Q. Sitting here today, you cannot recall one way or the other whether you made public statements that FTX was a safe platform?

A. I am not sure exactly what you are referring to. I made a lot of public statements.

Q. Yes or no, do you recall making public statements that FTX was a safe platform?

A. I can’t think of a specific one off the top of my head.

Q. Generally, do you recall in substance making statements that FTX was a safe platform?

MR. COHEN: Objection.

THE COURT: Overruled.

A. Some things that were sort of like that, yes. I am not sure exactly what you are referring to. But I am not saying —

THE COURT: Mr. Bankman-Fried, the issue is not what she is referring to. Please answer the question.

Q. Putting aside what I’m referring to, I’m asking whether you recall making statements as CEO of FTX that in substance stated that the FTX platform was safe.

A. I remember things around specific parts of the FTX platform that were related to that. I don’t remember a general statement to that effect. I am not sure there wasn’t one.

Q. In media interviews isn’t it true that you insisted on that FTX had protections for retail customers?

A. Yup.

Q. You told your customers that users’ funds and safety come first, didn’t you?

A. Something to that effect, yes.

Q. And you also made statements that you would always allow withdrawals, didn’t you?

A. Yup.

MS. SASSOON: If we could pull up what’s marked as Government Exhibit 829. The government offers Government Exhibit 829.

MR. COHEN: No objection.

THE COURT: Received. (Government Exhibit 829 received in evidence)

MS. SASSOON: Mr. Bianco, can you publish that.

Q. Mr. Bankman-Fried, can you read the first line of your tweet from August 9, 2021.

A. Sure. And, as always, our users’ funds and safety come first.

Q. Beneath that do you see where it says, we will always allow withdrawals except in cases of suspected money laundering/theft/etc.?

A. Yup.

MS. SASSOON: We can take that down.


Q. You also claimed that FTX had a conservative approach to managing risk, didn’t you?

A. On -- I’m not sure exactly what that was referring to.

Q. You don’t recall saying that?

A. I don’t remember the context.

Q. Do you recall saying that in any context?

A. I’m not confident.


Q. So is it your testimony that as CEO of FTX, after this catastrophic event, you did not learn the details of the code change that you directed?

A. That’s correct. I trusted Gary and Nishad.

Q. You testified on direct that FTX had an AWS database, correct?

A. Yup.

Q. And you described its content, right? For example, it stored customer account information?

A. Yup, that’s right.

Q. And it had information about trades?

A. Yup.

Q. And customer balances?

A. Yup.

Q. And as CEO, you had access to the database, correct?

A. Nope.

Q. Your testimony is that you did not have the ability to access the database?

A. I never did. To my knowledge, I didn’t have access to it.

Q. I’m asking you whether you had authorization to search the database.

A. I have no idea whether someone had created an account in my name that in theory was designed for me. If so, I never used it.

Q. And so it’s your testimony that until October 2022, you never looked in the database.

A. That’s correct. And even as of then, I never looked in the AWS database.

Q. After FTX declared bankruptcy, isn’t it true that one of the first things you did was try to restore your administrative access to the AWS database?

A. That’s not how I would put it.

Q. Isn’t it true that in the weeks following the bankruptcy, you asked to have your access to the AWS database restored?

A. Not -- I was not specifically looking for my personal access to the AWS database.

Q. Isn’t it true you were requesting AWS access?

A. I was requesting it on behalf of the joint provisional liquidators in the Bahamas.

Q. So yes or no: You made requests to restore access to the AWS database?

A. I’m not sure exactly what you’re referring to here.

THE COURT: Look, could you just answer the question instead of trying to ask the questioner what she’s referring to?

THE WITNESS: Okay.

A. No.

Q. Isn’t it true that you made to-do lists after FTX’s collapse that included things like “try to get AWS access”?

A. Probably.

Q. And so isn’t it true that you were trying to get AWS access after FTX declared bankruptcy?

A. Yes.

To me, the focus on "What this is referring to" is illuminating because it shows how SBF is laser-focused on what the prosecution has on him. What's strikingly absent is a desire to try hard at remembering so he can tell as much of the truth as possible.

I downvoted the question.

I'd have found it okay if the question had explicitly asked for just good summaries of the trial coverage or the sentencing report.

(E.g., there's the Twitter handle Inner City Press, which tweeted transcript summaries of every day of the trial, the Carl Reilly YouTube channel for daily summaries of the trial, and the more recent sentencing report that someone here linked to.)

Instead, the question came across as though there's maybe a mystery here for which we need the collective smarts and wisdom of the EA forum.

There are people who do trial coverage for a living and who've focused on this case. EAs are no longer best-positioned to opine on this, so it's a bit weird to imply that this is an issue EAs should discuss (as though it's the early days of Covid or the immediate aftermath of FTX, when the EA forum arguably had some interesting alpha).

It's also distracting.

I think part of what made me dislike this question is that the OP admits, on the one hand, that they struggled to find good info on Google, but then they still give their own summary of what they've found so far. Why give these half-baked takes if you're a not-yet-well-informed person who has struggled to find good summaries? It feels like "discussion baiting." 

Now, if someone did think that SBF/FTX didn't do anything illegal, I think that could be worth discussing, but it should start with a high-quality post where someone demonstrates that they've done their homework and have good reasons for disagreeing with those who have followed the trial coverage and concluded, like the jury, that SBF/FTX engaged in fraud. 
