Lukas_Gloor

Comments

How much does performance differ between people?

To give an example of what would go into research taste, consider the issue of reference class tennis (rationalist jargon for arguments about whether a given analogy has merit, or for two people throwing widely different analogies at each other in an argument). That issue comes up a lot, especially in preparadigmatic branches of science. Some people may have good intuitions about this sort of thing, while others may be hopelessly bad at it. Since arguments of that form feel notoriously intractable to outsiders, it would make sense if "being good at reference class tennis" were a skill that's hard to evaluate.

How much does performance differ between people?

The awakening of slumbering papers may be fundamentally unpredictable in part because science itself must advance before the implications of the discovery can unfold.

Except to the authors themselves, who may often have an inkling that their paper is important. E.g., I think Rosenblatt was incredibly excited/convinced about the insights in that sleeping beauty paper. (Small chance my memory is wrong about this, or that he changed his mind at some point.) 

I don't think this is just a nitpicky comment on the passage you quoted. I find it plausible that there's some hard-to-study quantity around 'research taste' that predicts impact quite well. It'd be hard to study because the hypothesis is that only very few people have it. To tell who has it, you kind of need to have it a bit yourself. But one decent way to measure it is to ask people who are universally regarded as 'having it' to comment on who else they think also has it. (I know this process would lead to unfair network effects and may produce false negatives and so on, but I'm advancing a descriptive observation here; I'm not advocating for a specific system for evaluating individuals.)

Related: I remember a comment (which I can't find anymore) by Liv Boeree or some other poker player familiar with EA. The commenter explained that monetary results aren't the greatest metric for assessing the skill of top poker players. Instead, it's best to go with assessments by expert peers. (I think this holds mostly for large-field tournaments, not online cash games.)

Moral Anti-Realism Sequence #4: Why the Moral Realism Wager Fails

I will probably rename this post eventually to "Why the Irreducible Normativity Wager Fails." I now think there are three separate wagers related to moral realism:

  • An infinitely strong wager to act as though Irreducible Normativity applies
  • An infinitely strong wager to act as though normative qualia exist (this can be viewed as a subcategory of the Irreducible Normativity wager) 
  • A conditionally strong wager to expect moral convergence
    • I will argue that this is not per se a wager for "moral realism" but actually equivalent to a wager for valuing moral reflection under anti-realism; the degree to which it applies depends on one's prior intuitions and normative convictions. 

I don't find the first two wagers convincing. The last wager definitely works in my view, but since it's only conditionally strong, it doesn't quite work the way people think it does. I will devote future posts to wagers 2 and 3 in the list above; this post only covers the first wager.

Can I have impact if I’m average?

I fully agree! It's certainly possible to have a lot of impact if your skills are average! And any amount of impact matters by definition. I suspect that it doesn't always seem that way because people tend to try to have impact only in the more established, direct ways, or because some average-skilled people don't want to acknowledge that others are more suited for certain projects. I like the framework introduced by Ryan Carey and Tegan McCaslin here. One of the steps is "Get humble: Amplify others’ impact from a more junior role."

I also like to think of EA (and life in general) as a video game with varying difficulty levels. If your skills are only average (or you suffer from mental health issues more than others do), you're playing at a higher difficulty level and can't expect to earn the same amount of (non-adjusted) points. Upwards comparisons don't make sense for that reason!

Lukas_Gloor's Shortform

Thanks for bringing up this option! I don't agree with this framing for two reasons: 

  • As I point out in my sequence's first post, some ways in which "moral facts exist" are underwhelming. 
  • I don't think moral indeterminacy necessarily means that there's convergence of expert judgments. At least, the way in which I think morality is underdetermined explicitly predicts expert divergence. Morality is "real" in the sense that experts will converge up to a certain point; beyond that, some experts will have underdetermined moral values while others will have made choices within what's allowed by indeterminacy. Among those who made choices, not all choices will be the same.

I think what I describe in the second bullet point will seem counterintuitive to many people, because they assume that if morality is underdetermined, your views on morality should be underdetermined, too. But that doesn't follow! I understand why people have that intuition, but it doesn't hold up when you look at it closely. I've been working on spelling out why.

Why Research into Wild Animal Suffering Concerns me

It's maybe worth noting that there's an asymmetry: For people who think wild-animal lives are net positive, there are many things that contain even more sentient value than rainforest. By contrast, if you think wild-animal lives are net negative, only a few things contain more sentient disvalue than rainforest. (Of course, compared to expected future sentience, biological life only makes up a tiny portion, so rainforest is unlikely to be a priority from a longtermist perspective.)

I understand the worries described in the OP (apart from the "let's better not find out" part). I think it's important for EAs in the WAS reduction movement to proactively counter simplistic memes and advocate interventions that don't cause great harm according to some very popular moral perspectives. I think that's a moral responsibility for animal advocates with suffering-focused views. (And as we see in other replies here, this sounds like it's already common practice!)

At the same time, I feel like the discourse on this topic can sometimes be a bit disingenuous: people whose actions otherwise don't indicate much concern for the moral importance of the action-omission distinction (esp. when it comes to non-persons) suddenly employ rhetorical tactics that make it sound like "wrongly thinking animal lives are negative" is a worse mistake than "wrongly thinking they are positive".

I also think this issue is thorny because, IMO, there's no clear answer. There are moral judgment calls to make that count for at least as much as empirical discoveries.
 

What is a book that genuinely changed your life for the better?

I also read Animorphs! I saw this tweet about it recently that was pretty funny. 
 

What is a book that genuinely changed your life for the better?

The Ancestor's Tale got me hooked on trying to understand the world. It was the perfect book for me at the time I read it (2008) because my English wasn't that good yet and I would plausibly have been too overwhelmed by The Selfish Gene right away. And it was just way too cool to have this backwards evolutionary journey to go through. Apart from the next item on this list, I can't remember another book that I was so eager to read once I saw what it was about. I really wish I could have that feeling again!

Practical Ethics was life-changing for the obvious reasons and also because it got me far enough into ethics to develop the ambition to solve all the questions Singer left open.

Atonement was maybe the fiction book that influenced me the most. I had to re-read it for an English exam, and it got me thinking about the typical mind fallacy and how people can perceive/interpret the same situation in very different ways.

Fiction books I read when I was younger must have affected me in various ways, but I can't point to any specific effect with confidence.

What is a "Kantian Constructivist view of the kind Christine Korsgaard favours"?

I'm not sure I remember this the right way, but here's an attempt: 

"Constructivism" can refer to a family of normative-ethical views according to which objectively right moral facts are whatever would be the output of some constructive function, such as an imagined social contract or the Kantian realm of ends. "Constructivism" can also refer to a non-realist metaethical view that moral language doesn't refer to moral facts that exist in an outright objective sense, but are instead "construed" intersubjectively via some constructive function. 

So, a normative-ethical constructivist uses constructive functions to find the objectively right moral facts, while a metaethical constructivist uses constructive functions to explain why we talk as though there are moral facts of some kind at all, and what their nature is.

I'm really not sure I got this exactly right, but I am confident that in the context of this "letter to a young philosopher," the author meant to refer to the metaethical version of constructivism. It's mentioned right next to subjectivism, which is another non-realist metaethical position. Unlike some other Kantians, Korsgaard is not an objectivist moral realist. 

So, I think the author of this letter is criticizing consequentialist moral realism because there's a sense in which its recommendations are "too impartial." The most famous critique of this sort is the "Critique of Utilitarianism" by Bernard Williams; I quoted the most relevant passage here. One way to point to the intuitive force of this critique is as follows: If your moral theory gives the same recommendation whether or not you replace all existing humans with intelligent aliens, something seems (arguably) a bit weird. The "human nature element," as well as the relevant differences between people, is lost! At least, to anyone who cares about something other than "the one objectively correct thing to care about," the objective morality will seem wrong and alienating. Non-objectivist morality has the feature that moral actions depend on "who's here": morality arises from people, rather than people being receptacles for it.

I actually agree with this type of critique – I just wouldn't say that it's incompatible with EA. It's only incompatible with how many EAs (especially Oxford-educated ones) currently think about the foundations of ethics.

Importantly, it doesn't automatically follow from this critique of objectivist morality that a strong focus on (some type of) effectiveness is misguided, or that "inefficient" charities suddenly look a lot better. Not at all. Maybe certain charities/projects can look better from that vantage point, depending on the specifics and so on. But that would require further arguments.
