Lukas_Gloor

Comments

On the limits of idealized values

Here's another issue: 

Lack of morally urgent causes

In the blogpost On Caring, Nate Soares writes: “It's not enough to think you should change the world — you also need the sort of desperation that comes from realizing that you would dedicate your entire life to solving the world's 100th biggest problem if you could, but you can't, because there are 99 bigger problems you have to address first.” In the moral-reflection environment, the world is on pause. If you've suffered from poverty, illness, or abuse in your life, those things are no longer an issue. There are also no people to lift out of poverty and no factory farms to shut down. You're no longer in a race against time to prevent bad things from happening, seeking friends and allies while trying to defend your cause against corrosion from influence-seeking people.

Without morally urgent causes, it’s harder to form a strong identity around wanting to do good. It’s still morally important what you decide – after all, your deliberations in the reflection procedure determine how to allocate your caring capacity. Still, you’re deliberating about how to do that from a perspective where everything is well. For better or worse, this perspective could change the nature of moral reflection as compared to how people adopt moral convictions in real-life conditions.

Linch's Shortform

I agree that the vibe you're describing tends to be a bit cultish precisely because people take it too far. That said, I think it's true that low-prestige jobs within highly impactful teams are sometimes a lot more impactful than high-prestige jobs somewhere further away from where things matter. (I'm making a general point; I'm not saying that MIRI is necessarily a great example of "where things matter," nor am I saying the opposite.) In particular, a personal assistant position strikes me as an example of a highly impactful role (because it requires a hard-to-replace skillset).

(Edit: I don't expect you to necessarily disagree with any of that, since you were just giving a plausible explanation for why the comment above may have turned off some people.)

On the limits of idealized values

I think this post is brilliant! 

I plan to link to it heavily in an upcoming piece for my moral anti-realism sequence. 

On X., Passive and active ethics

“Rather, what I’m trying to point at is a way that importing and taking for granted a certain kind of realist-flavored ethical psychology can result in an instructive sort of misfire. Something is missing, in these cases, that I expect the idealizing subjectivist needs. In particular: these agents, to the end, lack an affordance for a certain kind of direct, active agency — a certain kind of responsibility, and self-creation. They don’t know how to choose, fully, for themselves.”

Yeah, I think there's a danger for people who expect that "having more information," or other features of some idealized reflection procedure, would change the phenomenology of moral reasoning, such that once they're in the reflection procedure, certain answers would stand out to them. But, as you say, this point may never come! Instead, it could continue to feel like one has to make difficult judgment calls left and right, with no guarantee that one is doing moral reasoning "the right way."

(In fact, I'm convinced such a phase change won't come. I have a draft on this.)

“In a sense, what I’m saying here is that idealizing subjectivism is, and needs to be, less like ‘realism-lite,’ and more like existentialism, than is sometimes acknowledged.”

I've also used the phrase "more like existentialism" in this context. :)

On IX., Hoping for convergence, tolerating indeterminacy

This is an excellent strategy for people who find themselves without strong object-level intuitions about their goals/values. (Or people who only have strong object-level intuitions about some aspects of their goals/values, but not the details – e.g., being confident that one would want to be altruistic, but unsure about population ethics or different theories of well-being. In these cases, the reflection procedure could perhaps come with a guarantee not to change the overarching objective – being altruistic, finding a suitable theory of well-being, etc.)

Some people would probably argue that "Hoping for convergence, tolerating indeterminacy" is the rational strategy in light of our metaethical uncertainty. (I know you're not necessarily saying this in your post.) For example, they might argue as follows:

"If there's convergence among reflection procedures, I miss out if I place too much faith in my object-level intuitions and already formed moral convictions. By contrast, if there's no convergence, then it doesn't matter – all outcomes would be on the same footing." 

I want to push back against this stance, "rationally mandated wagering on convergence." I think it only makes sense for people whose object-level values are still under-defined. By contrast, if you find yourself with solid object-level convictions about your values, then you not only stand to gain something from wagering on convergence; you also stand to lose something. You might be giving up something you feel is worth fighting for in order to follow the kind-of-arbitrary outcome of some reflection procedure.

My point is that the currencies are commensurable: what's attractive about the possibility of many reflection procedures converging is the same thing that's attractive to people who already have solid object-level convictions about their values (assuming they're not making one of the easily identifiable mistakes, i.e., assuming that, for them, there'd be no convergence among reflection procedures open-ended enough to get them to adopt different values). Namely, when they reflect to the best of their abilities, they feel drawn to certain moral principles, goals, or specific ways of living their lives.

In other words, the importance of moral reflection for someone is exactly proportional to their credence in it changing their thinking. If someone feels highly uncertain, they almost exclusively have things to gain. By contrast, the more certain you already are in your object-level convictions, the larger the risk that deferring to some poorly understood reflection procedure would lead you to an outcome that constitutes a loss, in a sense relevant to your current self. Of course, one can always defer to conservative reflection procedures, i.e., procedures where one is fairly confident that they won't lead to drastic changes in one's thinking. Those could be used to flesh out one's thinking in places where it's still uncertain (and therefore, possibly, under-defined), while protecting convictions that one would rather not put at risk. 

You can now apply to EA Funds anytime! (LTFF & EAIF only)

Is the map/territory distinction central to your point? I get the impression that you're mostly expressing the opinion that the LTFF has too high a bar, or an idiosyncratic (or too narrow) research taste. (I'd imagine that grantmakers are trying to do what's best on impact grounds.)

Progress studies vs. longtermist EA: some differences

It sounds like we both agree that when it comes to reflecting about what's important to us, there should maybe be a place for stuff like "(idiosyncratic) reactive attitudes," "psychotherapy or raising a child or 'things the humanities do'" etc. 

Your view seems to be that you have two modes of moral reasoning: the impartial mode of analytic philosophy, and the other thing (subjectivist/particularist/existentialist).

My point with my long comment earlier is basically the following: the separation between these two modes is not clear!

I'd argue that what you think of as the "impartial mode" has some clear-cut applications, but it's under-defined in some places, so different people will gravitate toward different ways of approaching the under-defined parts, based on appeals that you'd normally place in the subjectivist/particularist/existentialist mode.

Specifically, population ethics is under-defined. (It's also under-defined how to extract "idealized human preferences" from people like my parents, who aren't particularly interested in moral philosophy or rationality.) 

I'm trying to point out that if you fully internalized that population ethics is going to be under-defined no matter what, you'd have more than one option for how to think about it. You no longer have to treat impartiality criteria and "never violating any transitivity axioms" as the only option. You can think of population ethics more like this: Existing humans have a giant garden (the 'cosmic commons') that is at risk of being burnt; they can do stuff with it if they manage to preserve it, and people have different preferences about what definitely should or shouldn't be done with that garden. You can look for the "impartially best way to make use of the garden" – or you could look at how other people want to use the garden and compromise with them, or look for "meta-principles" that guide who gets to use which parts of the garden (and stuff that people definitely shouldn't do, e.g., no one should shit in their part of the garden), without already having a fixed vision for what the garden has to look like at the end, once it's all made use of.

Basically, I'm saying that knowing from the very beginning exactly what the "best garden" has to look like, regardless of the gardening-related preferences of other humans, is not a forced move (especially because there's no universally correct solution anyway!). You're very much allowed to think of gardening in a different, more procedural and "particularist" way.

How well did EA-funded biorisk organisations do on Covid?

“that seems to imply that developing countries had lower survival rates, despite their more favourable demographics, which would be sad.”

This isn't impossible, because there seems to be a correlation where people with lower socioeconomic status have worse Covid outcomes, but I still doubt that the IFR was worse overall in developing countries. The demographics (esp. the proportion of people aged 70–80 and older) make a huge difference.

But I never looked into this in detail, and my impression was also that for a long time at least, there wasn't any reliable data. 

From excess deaths in some locations, such as Guayaquil (Ecuador), one could rule out the possibility that the IFR in developing countries was incredibly low (it would have been at least 0.3% given plausible assumptions about the outbreak there, and possibly a lot higher). 
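To sketch the arithmetic behind that floor (the specific numbers here are illustrative assumptions on my part, not established figures): since the number of people infected can't exceed the population, excess deaths divided by population give a lower bound on the IFR.

$$\text{IFR} = \frac{D_{\text{excess}}}{N_{\text{infected}}} \;\geq\; \frac{D_{\text{excess}}}{N_{\text{population}}} \approx \frac{10{,}000}{3{,}000{,}000} \approx 0.33\%$$

Taking roughly 10,000 excess deaths against a population of roughly 3 million – numbers in the ballpark of what was reported for the Guayaquil area in spring 2020, though I'd treat both as assumptions – yields a floor of about 0.3%; and since only a fraction of the population was plausibly infected, the true IFR was correspondingly higher.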

How well did EA-funded biorisk organisations do on Covid?

IFR, i.e., deaths per infection, as opposed to CFR, deaths per confirmed case (but back in February/March 2020, a lot of people called everything "CFR"). I think he was talking about high-income countries (that's what my 0.9% estimate for 2020 referred to – note that it's lower for 2020 and 2021 combined because of better treatment and vaccines). I'd have to look it up again, but I doubt that Adalja was talking about a global IFR that includes countries with much younger demographics than the US. It could be that he left it ambiguous.

Here's the Sam Harris podcast in question; I haven't re-listened to it yet. 

How well did EA-funded biorisk organisations do on Covid?

To be fair, the Johns Hopkins Center isn't just Adalja. I'm not aware of everything they do, but, for instance, they kept an updated database in the early stages of the outbreak that was extremely helpful for forecasting!

How well did EA-funded biorisk organisations do on Covid?

“He said he travelled internationally "yesterday" (which would have been February 9th if the video was uploaded the day of the lecture) and didn't wear a mask.”

This seems totally okay to me, FWIW. In most places (e.g., London or the US), it would have seemed a bit overly cautious to wear masks before the end of February, no? 

“I think his prediction and advice should probably be judged negatively and reflect poorly on him / Center for Health Security, but I'm not sure how harshly he/ CHS should be judged.”

I generally agree with that, but it's worth noting that it was extremely common for Western epidemiologists to repeat the mantra "you cannot do what Asian countries are doing; there's no way to contain the virus." 
