Philosophy
Investigation of the abstract features of the world, including morals, ethics, and systems of value

Quick takes

Just sharing my 2024 Year in Review post from Good Thoughts. It summarizes a couple dozen posts in applied ethics and ethical theory (including issues relating to naive instrumentalism and what I call "non-ideal decision theory") that would likely be of interest to many forum readers. (Plus a few more specialist philosophy posts that may only appeal to a more niche audience.)
This is a cold take that's probably been said before, but I thought it bears repeating occasionally, if only for the reminder:

The longtermist viewpoint has gotten a lot of criticism for prioritizing "vast hypothetical future populations" over the needs of "real people" alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it's flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual aid and political activism.

My go-to reaction to this critique has become something like: "Well, you don't need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes." I think this response has taken hold in general among people who talk about x-risk, and that probably makes sense for pragmatic reasons. It's a very good rebuttal to the "cold and heartless utilitarianism / Pascal's mugging" critique.

But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book, about living every human life in sequential order, reminded me of this.

I wish there were more people responding to the "longtermism is cold and heartless" critique by making the case that, no, longtermism at face value is worth preserving because it's the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but whom we'll never meet, is an extraordinary act of empathy and compassion, one that's far harder to access than the empathy and warmth we might feel for our neighbors.
a moral intuition i have: to avoid culturally- or conformity-motivated cognition, it's useful to ask: if we were starting over, new to the world but with all the technology we have now, would we recreate this practice?

example: we start out, and there's us, and there are these innocent fluffy creatures that can't talk to us, but they can be our friends. we're just learning about them for the first time. would we, at some point, spontaneously choose to kill them and eat their bodies, despite having plant-based foods, supplements, vegan-assuming nutrition guides, etc.? to me, the answer seems obviously not; the idea would not even cross our minds.

(i encourage picking other topics and seeing how this applies.)
Contra Vasco Grilo on "GiveWell may have made 1 billion dollars of harmful grants, and Ambitious Impact incubated 8 harmful organisations, via increasing factory-farming"?

The post above explores how, under a hedonistic utilitarian moral framework, the meat-eater problem may make GiveWell grants or AIM charities net-negative. The post seems to argue that, on expected-value grounds, one should let children die of malaria because they could grow up to eat chicken, for example. I find this argument morally repugnant and want to highlight it.

Using some of the words from my reply: let me quote William MacAskill's comments on "What We Owe the Future" and his reflections on FTX (https://forum.effectivealtruism.org/posts/WdeiPrwgqW2wHAxgT/a-personal-statement-on-ftx):

Finally, the post itself seems to pit animal welfare against global poverty causes, which I found divisive and probably counterproductive.

I downvoted this post because it is not representative of the values I believe EA should strive for. Showing disagreement alone might have been sufficient, but if someone visits the forum for the first time and sees this post with many upvotes, their impression will be negative and they may not engage with the community. If a reporter reads the forum and sees this, they will cover both EA and animal welfare negatively. And if someone who was considering taking the 10% pledge or changing their career to support either animal welfare or global health reads this, they will be less likely to do so.

I am sorry, but I will strongly oppose the "ends justify the means" argument put forward by this post.
Having a baby and becoming a parent has had an incredible impact on me. Now more than ever, I feel connected to and concerned about the wellbeing of others. I feel as though my heart has literally grown. I wanted to share this because I expect there are many others questioning whether to have children — perhaps due to concerns about it limiting their positive impact, among other reasons. But I'm just here to say it's been beautiful, and amazing, and I look forward to the day I get to talk with my son about giving back in a meaningful way.
[crossposted from my blog; some reflections on developing different problem-solving tools]

When all you have is a hammer, everything sure does start to look like a nail. This is not a good thing.

I've spent a lot of my life variously:
1) Falling in love with physics and physics fundamentalism (the idea that physics is the "building block" of our reality)
2) Training to "think like a physicist"
3) Getting sidetracked by how "thinking like a physicist" interacts with how real people actually do physics in practice
4) Learning a bunch of different skills to tackle interdisciplinary research questions
5) Using those skills to learn more about how different people approach different problems

While doing this, I've come to think that identity formation - especially identity formation as an academic - is about learning how to identify different phenomena in the world as nails (problems with specific characteristics) and how to apply hammers (disciplinary techniques) to those nails. As long as you're just using your hammer on a thing that you're pretty sure is a nail, this works well. Physics-shaped hammers are great for physics-shaped nails; sociology-shaped hammers are great for sociology-shaped nails; history-shaped hammers are great for history-shaped nails.

The problem with this system is that experts only have hammers in their toolboxes, and not everything in the world is a nail. The desire to make everything into one kind of nail, where one kind of hammer can be applied to every problem, leads to physics envy, to junk science, to junk policy, to real harm. That same desire also makes it harder for us to tackle interdisciplinary problems - ones where lots of different kinds of expertise are required. If we can't see and understand every dimension of a problem, we haven't a hope in hell of solving it. The biggest problems in the world today - climate breakdown, pandemic prevention, public health - are wicked problems, ones that…
Imperfect Parfit, written by Daniel Kodsi and John Maier, is a fairly long review (by 2024 internet standards) of Parfit: A Philosopher and His Mission to Save Morality. It draws attention to some of his oddities and eccentricities, such as brushing his teeth for hours or eating the same dinner every day (not unheard of among famous philosophers). Considering Parfit's influence on the ideas that many of us involved in EA hold, it seemed worth sharing here.
The American Philosophical Association (APA) has announced two $10,000 AI2050 Prizes for philosophical work related to AI, with a June 23, 2024 deadline:
https://dailynous.com/2024/04/25/apa-creates-new-prizes-for-philosophical-research-on-ai/
https://www.apaonline.org/page/ai2050
https://ai2050.schmidtsciences.org/hard-problems/