Hi Andreas! I'm worried that the maximality rule will overgeneralize, implying that little is rationally required of us. Consider the decision whether to have children. There are obvious arguments both for and against from a self-interested point of view, and it isn't clear exactly how to weigh them against each other. So, plausibly, having children will max EU according to at least one probability function in our representor, whereas not having children will max EU according to at least one other probability function in our representor. Result via maximal...
Hi Michael, thanks for the post! I was really happy to see something like this on the EA Forum. In my view, EAs* significantly overestimate the plausibility of total welfarist consequentialism**, in part due to a lack of familiarity with the recent literature in moral philosophy. So I think posts like this are important and helpful.
* I mean this as a generic term (natural language plurals (usually) aren't universally quantified).
** This isn't to suggest that I think there's some other moral theory that is very plausible. They're all implausible, as far as I can tell; which is partly why I lean towards anti-realism in meta-ethics.
I'd love to see Johann Frick (Philosophy, UC Berkeley) on the podcast. Johann is a nonconsequentialist who defends the Procreation Asymmetry and thinks longtermism is deeply misguided. Imo, his recent paper on the Asymmetry is one of the best; he'd be able to steel-person many philosophical views that challenge common EA commitments; and he's an engaging speaker.
Hi Saul, since this is a discussion-based seminar rather than a lecture course, we won't be recording. However, I plan to teach this course again in the future and may change the format - so future iterations may be recorded.
Hi Joe, thanks for sharing this. I enjoyed it - as I have enjoyed and learned from many of your philosophy posts recently!
A couple things:
1) I'm curious about your thoughts on the role of knowledge in epistemology and decision theory. You write, e.g., 'Consider the divine commands of the especially-big-deal-meta-ethics spaghetti monster...'. On pain of general skepticism, don't we get to know that a spaghetti monster is not 'the foundation of all being'? (I don't have a strong commitment here, but after talking with a colleague who works in epistemol...
Neff's book has been huge for my mental health. However, sometimes I find myself applying the self-compassion framework in a way that's too formulaic, making it feel like a chore. (E.g., 'Step 1: what would my best friend say to me right now? Step 2: remind myself that I'm not the only one experiencing/struggling with [whatever]. Step 3: pause to let myself feel what I'm feeling.') I'd be interested if she has any tips for making it feel more warm/spontaneous/etc. and less rote.
Thanks, Oliver! And am I reading the website correctly that the fellowship is full time, such that participants won't be able to devote any time to their current research agendas (aside from weekends/evenings etc.)?
Will this program recur, or is this a one-off opportunity? (I'm quite interested, but unfortunately unsure whether I can take seven months off my PhD during this particular academic year.)
Really interesting! Do you have anything in mind for goods identified by competing ethical theories that you think would compete with, e.g., the beatific vision for the Christian or nirvana for the Buddhist? (A clear example here would be a valuable update for me.)
+1 on your comment that 'Giving the right answers for the wrong reasons is still deeply unsatisfying.' I think this is an underappreciated part of ethical theorizing, and I would even take a stronger methodological stance: getting the right explanatory answers (why we ought to do what we ought to do) is just as important as getting the right extensional answers (what we ought to do). If an ethical theory gives you the wrong explanation, it's not the right ethical theory!
Hi Michael, thanks for your comments! A few replies:
Re: amplification, I'm not sure about this proposal (I'm familiar with that section of the book). From the perspective of a supreme soteriology (e.g. (certain conceptions of) Christianity), attaining salvation is the best possible outcome, full stop. It is, to use MacAskill, Bykvist, and Ord's terminology, maximally choiceworthy. It therefore seems to me wrong that 'those other views could be further amplified lexically, too, all ad infinitum.' To insist that we could lexically amplify a supreme soteriolo...
Is there any room in the application process for applicants to submit samples of original research or academic letters of recommendation?
Thank you!
Hi Cam, I'm glad you found the notes useful! Most of these (with The Precipice being an exception) were notes taken from audiobooks. As I was listening, I'd write down brief notes (sometimes as short as a key word or phrase) in the Notes app on my iPhone. Then, once a day or every couple of days, I'd reference the Notes app to jog my memory and write down the longer item of information in a Gdoc. Then, when I'd finished the book, I'd organize/synthesize the Gdoc into a coherent set of notes with sections etc.
These days I follow a similar system, but use... (read more)