Philosophy
Investigation of the abstract features of the world, including morals, ethics, and systems of value

Quick takes

33 · 20d · 4
An informal research agenda on robust animal welfare interventions and adjacent cause prioritization questions

Context: As I started filling out this expression of interest form to be a mentor for Sentient Futures' project incubator program, I came up with the following list of topics I might be interested in mentoring, and I thought it was worth sharing here. :) (Feedback welcome!)

Animal-welfare-related research/work:

1. What are the safest (i.e., most backfire-proof)[1] consensual EAA interventions? (overlaps with #3.c and may require #6.)
   a. How should we compare their cost-effectiveness to that of interventions that require something like spotlighting or bracketing (or more thereof) to be considered positive?[2] (may require A.)
2. Robust ways to reduce wild animal suffering
   a. New/underrated arguments regarding whether reducing some wild animal populations is good for wild animals (a brief overview of the academic debate so far here).
   b. Consensual ways of affecting the size of some wild animal populations (contingent planning that might become relevant depending on results from the above kind of research).
      i. How do these and the safest consensual EAA interventions (see #1) interact?
   c. Preventing the off-Earth replication of wild ecosystems.
3. Uncertainty about moral weights (some relevant context in this comment thread).
   a. Red-teaming of different moral weights that have been explicitly proposed and defended (by Rethink Priorities, Vasco Grilo, ...).
   b. How, and how much, do cluelessness arguments apply to moral weights and inter-species tradeoffs?
   c. What actions are robust to severe uncertainty about inter-species tradeoffs? (overlaps with #1.)
4. Considerations regarding the impact of saving human lives (cf. top GiveWell charities) on farmed and wild animals. (may require #3 and #5.)
5. The impact of agriculture on soil nematodes and other numerous soil animals, in terms of total population.
6. Evaluating the backfir
6 · 20d · 4
I just want to point out that I have a degree in philosophy and have never heard the word "epistemics" used in the context of academic philosophy. The word used has always been either epistemology or epistemic as an adjective in front of a noun (never on its own, always used as an adjective, not a noun, and certainly never pluralized). From what I can tell, "epistemics" seems to be weird EA Forum/LessWrong jargon.

Not sure how or why this came about, since this is not obscure philosophy knowledge, nor is it hard to look up. If you Google "epistemics" philosophy, you get 1) sources like Wikipedia that talk about epistemology, not "epistemics", 2) a post from the EA Forum and a page from the Forethought Foundation, which is an effective altruist organization, 3) some unrelated, miscellaneous stuff (i.e., neither EA-related nor academic philosophy-related), and 4) a few genuine but fairly obscure uses of the word "epistemics" in an academic philosophy context. This confirms that the term is rarely used in academic philosophy.

I also don't know what people in EA mean when they say "epistemics". I think they probably mean something like epistemic practices, but I actually don't know for sure. I would discourage the use of the term "epistemics", particularly as its meaning is unclear, and would advocate for a replacement such as epistemology or epistemic practices (or whatever you like, but not "epistemics").
76 · 2y · 5
This is a cold take that's probably been said before, but I thought it bears repeating occasionally, if only for the reminder:

The longtermist viewpoint has gotten a lot of criticism for prioritizing "vast hypothetical future populations" over the needs of "real people" alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it's flawed because it lacks the love or duty or "ethics of care" or concern for justice that lead people to alternatives like mutual aid and political activism.

My go-to reaction to this critique has become something like "well, you don't need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes." I think this response has taken hold in general among people who talk about x-risk. This probably makes sense for pragmatic reasons. It's a very good rebuttal to the "cold and heartless utilitarianism/Pascal's mugging" critique.

But I think it unfortunately neglects the critical point that longtermism, when taken really seriously (at least the sort of longtermism that MacAskill writes about in WWOTF, or that Joe Carlsmith writes about in his essays), is full of care and love and duty. Reading the thought experiment that opens the book, about living every human life in sequential order, reminded me of this. I wish there were more people responding to the "longtermism is cold and heartless" critique by making the case that no, longtermism at face value is worth preserving because it's the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but whom we'll never meet, is an extraordinary act of empathy and compassion, one that's way harder to access than the empathy and warmth we might feel for our neighbors
40 · 2y · 1
Having a baby and becoming a parent has had an incredible impact on me. Now more than ever, I feel more connected to and concerned about the wellbeing of others. I feel as though my heart has grown. I wanted to share this as I expect there are many others who are questioning whether to have children, perhaps due to concerns about it limiting their positive impact, among other reasons. But I'm just here to say it's been beautiful, and amazing, and I look forward to the day I get to talk with my son about giving back in a meaningful way.
23 · 10mo · 1
Hi! I’m looking for help with a project. If you’re interested or know someone who might be, it would be really great if you let me know/share. I'll check the forum for DMs.

1. Help with acausal research and get mentoring to learn about decision theory
* Motivation: Caspar Oesterheld (inventor/discoverer of ECL/MSR), Emery Cooper, and I are doing a project where we try to get LLMs to help us with our acausal research. Our research is ultimately aimed at making future AIs acausally safe.
* Project: In a first step, we are trying to train an LLM classifier that evaluates critiques of arguments. To do so, we need a large number of both good and bad arguments about decision theory (and other areas of philosophy).
* How you’ll learn: If you would like to learn about decision theory, anthropics, open-source game theory, …, we supply you with a curriculum. There’s a lot of leeway for what exactly you want to learn about. You go through the readings. If you already know things and just want to test your ideas, you can optionally skip this step.
* Your contribution: While doing your readings, you write up critiques of arguments you read.
* Bottom line: We get to use your arguments/critiques for our projects and you get our feedback on them. (We have to read and label them for the project anyway.)
* Logistics: Unfortunately, you’d be a volunteer. I might be able to pay you a small amount out of pocket, but it’s not going to be very much. Caspar and Em are both university-employed and I am similar in means to an independent researcher. We are also all non-Americans based in the US, which makes it harder for us to acquire money for projects and such, for boring and annoying reasons.
* Why we are good mentors: Caspar has dozens of publications on related topics. Em has a handful. And I have been around.

2. Be a saint and help with acausal research by doing tedious manual labor and getting little in return
We also need help with various grindy tasks that a
5 · 3mo
What happened to the Global Priorities Institute? Given that it published an updated research agenda ~8 months ago, this seems somewhat abrupt.
10 · 8mo · 2
Here's an argument I made in 2018 during my philosophy studies:

A lot of animal welfare work is technically "longtermist" in the sense that it's not about helping already-existing beings. Farmed chickens, shrimp, and pigs only live for a couple of months, farmed fish for a few years. People's work typically takes longer than that to impact animal welfare.

For most people, this is no reason not to work on animal welfare. It may be unclear whether creating new creatures with net-positive welfare is good, but only the most hardcore presentists would argue against preventing and reducing the suffering of future beings. But once you accept the moral goodness of that, there's little to morally distinguish the suffering of chickens in the near future from the astronomical amounts of suffering that artificial superintelligence could inflict on humans, other animals, and potential digital beings. It could even lead to the spread of factory farming across the universe! (Though I consider that unlikely.)

The distinction comes in at the empirical uncertainty/speculativeness of reducing s-risk. But I'm not sure that uncertainty is treated the same as uncertainty about shrimp or insect welfare. I suspect many people instead work on effective animal advocacy because that's where their emotional affinity lies and it's become part of their identity, because they don't like acting on theoretical philosophical grounds, and because they feel discomfort imagining the reaction of their social environment if they were to work on AI/s-risk. I understand this, and I love people for doing so much to make the world better. But I don't think it's philosophically robust.
11 · 1y
Just sharing my 2024 Year in Review post from Good Thoughts. It summarizes a couple dozen posts in applied ethics and ethical theory (including issues relating to naive instrumentalism and what I call "non-ideal decision theory") that would likely be of interest to many forum readers. (Plus a few more specialist philosophy posts that may only appeal to a more niche audience.)