Oops, one correction: "public justification" doesn't mean "justification to the people a policy will affect"; it means "justification to all reasonable people", where "reasonable people" means roughly everyone except Nazis and others with similarly extreme views.
I know this doesn't solve the actual problem you're getting at, but here's a translation of that sentence from philosophese into English. "Pro tanto" essentially means "all else being equal": a pro tanto consideration is a genuine consideration, but not necessarily an overriding one. "Public justification" just means justifying policy choices with reasons that would (or at least could) be persuasive to the public, i.e. to the people the policies will affect. So the sentence as a whole means something like: "While moral uncertainty doesn't mean that governments (and other institutions) should always justify their decisions to the people, it does mean they should do so when they can."
This is something I'm dealing with right now, so reading this was helpful. Thanks.
If you happen to not be aware of this video already, you really should be.
I'd be interested in this. Even though "generalist researcher" is a well-known role, I think it's easy to get a distorted picture of the "content" of the job from the outside. Aside from this recent post, I don't know of any other write-ups about it off the top of my head (though some may exist), and of course multiple write-ups are useful, since different people's situations and experiences will differ.
I had this reaction as well. I can't speak for OP, but one issue with this is that audio is harder to look back at than writing: it's harder to skim when you're looking for that one thing you think was said but want to be sure about. One solution would be transcription, which could probably be automated, since it wouldn't have to be perfect, just good enough to let you skim to the part of the audio you're looking for.
You might check out this SEP article: https://plato.stanford.edu/entries/morality-biology/. I haven't read it myself, but judging by the table of contents it seems like it might be helpful for you (SEP is generally pretty high-quality). People have made a lot of different arguments that start from the observation that human morality has likely been shaped by evolutionary pressures, and it's quite complicated to figure out what conclusions to draw from this observation. It's not at all obvious that it implies we should try to "escape the shackles of e...
I am a philosopher, and thus fundamentally unqualified to answer this question, so take these thoughts with a grain of salt. However:
Let me say this: I am extremely confused, either about what your goals are with this post, or about how you think your chosen strategy for communication is likely to achieve those goals.
Not Jeff, but I agree with what he said, and here are my reasons:
Ooh, I would also very much like to see this post.
Hm. Do you think it would be useful for me to write a short summary of the arguments against taking normative uncertainty into account and post it to the EA Forum? (I wrote a term paper last semester arguing against Weatherson, which of course involved reading a chunk of that literature.)
I'd be very interested in hearing more about the views you list under the "more philosophical end" (esp. moral uncertainty) -- either here or on the 80k podcast.
Definitely, I'll send it along when I design it. Since intro ethics at my institution is usually taught as applied ethics, the basic concept would be to start by introducing the students to the moral catastrophes paper/concept, then go through at least some of the moral issues Williams brings up in the disjunctive portion of the argument to examine how likely they are to be moral catastrophes. I haven't picked particular readings yet, though, as I don't know the literatures. Other possible topics: a unit on historical moral catastrophes (e.g. slavery in
I'm entering philosophy grad school now, but in a few years I'm going to have to start thinking about designing courses, and I'm thinking of designing an intro course around this paper. Would it be alright if I used your summary as course material?
David Moss mentioned a "long tradition of viewing ethical theorising (and in particular attempts to reason about morality) sceptically." Aside from Nietzsche, another very well-known proponent of this tradition is Bernard Williams. Take a look at his page in the Stanford Encyclopedia of Philosophy, and if it looks promising check out his book Ethics and the Limits of Philosophy. You might also check out his essays "Ethical Consistency" (which I haven't read; in his essay collection Problems of the Self) and "Conflicts of Value...
Counterpoint (for purposes of getting it into the discussion; I'm undecided about antinatalism myself): that argument only applies to people who are already alive, and thus not to most of the people who would be affected by the decision whether or not to extend the human species (i.e. those who don't yet exist). David Benatar argues (podcast, book) that while, as you point out, many human lives may well be worth continuing, those very same lives (he thinks all lives, but that's more than I need to make this argument) may nevertheless not have been worth starting.
What I was describing wasn't exactly Pascal's mugging. Pascal's mugging is an attempted argument *against* this sort of reasoning, on the grounds that it leads to pathological conclusions (like that you ought to pay the mugger, when all he's told you is some ridiculous story about how, if you don't, there's a tiny chance that something catastrophic will happen). Of course, some people bite the bullet and say that you should just pay the mugger; others claim that this sort of uncertainty reasoning doesn't actually lead you to ...
Here are two other considerations that haven't yet been mentioned:
1. EA is supposed to be largely neutral between ethical theories. In practice, most EAs tend to be consequentialists, specifically utilitarians, and a utilitarian might plausibly think that killing one to save ten is the right thing to do (though others in this thread have given reasons why that might not be the case even under utilitarianism). But in theory one could unite EA principles with most ethical systems. So if the ethical system you think is most likely to be correct includes...
Scott Alexander has actually gotten academic citations, e.g. in Paul Bloom's book Against Empathy (sadly I don't remember which article of his Bloom cites), and I get the impression a fair few academics read him.
What would be the attitude towards someone who wanted to work with you after undergrad for a year or two, but then go on to graduate school (likely for philosophy in my case), with an eye towards then continuing to work with you or other EA orgs after grad school?