
TL;DR: This post explains why I think we should be more explicit about our values when making impact estimates/value predictions, and provides some example suggestions for how to do this.

There has been a string of recent posts discussing and predicting characteristics (EV, variance, etc.) of future value (How Binary is Longterm Value?, The Future Might Not Be So Great, Parfit + Singer + Aliens = ?, shameless plug, etc.).

Moreover, estimating the "impact" of interventions is a central theme in this community. It is perhaps the core mission of Effective Altruism. 

Most of the time I see posts discussing impact/value, I don't see a definition of value.[1] What we define value to mean (sometimes called ethics, morality, etc.) is the function that converts material outcomes (the is) into a number for which bigger = better (the ought).
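To make that concrete, here is a minimal sketch of value as a function from material outcomes to a real number (a toy illustration in Python; the world fields, numbers, and the particular function body are all hypothetical, not anyone's canonical formalism):

```python
# Sketch: "value" as a function from material outcomes (the is)
# to a single real number where bigger = better (the ought).
# The outcome fields and numbers are hypothetical stand-ins.

def value(world: dict) -> float:
    # One arbitrary choice of value function; the point of this post
    # is that different people fill in this body differently.
    return world["happiness"] - world["suffering"]

print(value({"happiness": 100.0, "suffering": 30.0}))  # 70.0
```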

If someone makes a post engaging in value estimation and doesn't define value, there seem to be two likely possibilities:

  • Most people engaging with the post will use their own internal notion of value.
  • Most people will engage with the post using what they perceive to be the modal value system in the community, so probably total utilitarianism.

I believe these are both sub-optimal outcomes. In the first case: most people engaging with these posts are not trying to actively grapple with meta-ethics, so they might not notice, or care to talk through, the fact that they have different internal notions of value. More importantly, the ability to identify and isolate cruxes is central to rationality. We should aim to modularize our discussions, as this clarifies disagreements in the moment and makes the conclusions of a conversation much more plug-and-play in the future. For some questions of impact, it could turn out that the answer does not depend on the value system we use. But I think this is incredibly unlikely,[2] and in any case we should explicitly come to that conclusion rather than assume it.

In the second case, at least most of us would be on the same page, though not everyone. And it isn't as if total utilitarianism is clearly defined: you still need to give utility a usable definition, create a weighting rule or map for sentient beings, and decide whether there is such a thing as a negative life (and if so, where the line is), etc.[3] So a lesser version of the first problem would remain. Plus, we would have created an environment with a de facto ethic, which doesn't seem like a good vibe to me.

Suggestions

Primary suggestion: Write your definition of value in your bio; if you don't clarify in a given comment/post, people should default to that definition of value. I'm not sure there is an easily generalizable blueprint for all ethical systems, but here is an example of what a utilitarian version might look like (not my actual values). This could probably be fleshed out more and/or better, but I don't think that matters for the purpose of this post.

BIO

Ethical Framework: Total Utilitarianism

Definition of Utility: QALYs, but rescaled so that quality of life can dip negative

Weighting function: Number of neurons

Additional Clarifications: I believe this is implicit in my weighting function but I consider future and digital minds to be morally valuable. My definition of a neuron is (....). I would prefer to use my Coherent Extrapolated Volition over my current value system. 

 

Other suggestions I like less:

Suggestion: Define value in your question/comment/post.[4]

Suggestion: Make a certain form of total utilitarianism the de jure meaning of value on the forum when people don't clearly define value or don't set a default value in their bio.[5]

Suggestion: Don't do impact estimates in one go; do output/outcome estimates, then extrapolate to value separately. I.e., ask questions like "How many QALYs will there be in the future?", "How many human rights will be violated?", etc.
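As a rough sketch of how that separation could look (made-up outcome names and numbers; one possible formalization, not the only one): forecast outcomes first, then apply value weights as an explicit, swappable second step.

```python
# Sketch: keep (a) empirical forecasts of outcomes separate from
# (b) value weights, so the ethical step is explicit and swappable.
# All outcome names and numbers are made up for illustration.

forecast = {                # empirical step: expected outcomes
    "qalys_added": 1_000.0,
    "rights_violations_averted": 5.0,
}

my_value_weights = {        # ethical step: value per unit of outcome
    "qalys_added": 1.0,
    "rights_violations_averted": 50.0,
}

impact = sum(forecast[k] * my_value_weights[k] for k in forecast)
print(impact)  # 1250.0 under these particular (arbitrary) weights
```

The point of the structure is that two readers can share the same forecast while plugging in different value weights.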

  1. ^

    Sometimes I will see something like "my ethics are suffering-focused, so this leads me to x instead of y".

  2. ^

    If we think of morality as an arbitrary map that takes the world as an input and spits out a (real) number, then it is an arbitrary map from $F^n$ or $F^\infty$ to $\mathbb{R}$, where $F$ is some set (technically, the dimensions of the universe are not necessarily comprised of the same sets, so this notation is wrong; plus I don't actually have any idea what I'm talking about). If this is the case, we can basically make the "morality map" do whatever we want. So when asking questions about how the value of the world will end up looking, we can almost certainly create two maps (moralities) that will spit out very different answers for the same world (see the toy sketch after the footnotes).

  3. ^

    I understand what a strict bar clarifying these things in every post would be, and I don't think we need to be strict about it, but we should keep these things in mind and push toward a world where we communicate this information.

  4. ^

    This seems laborious.

  5. ^

    We can of course make it explicit that we don't endorse this, and it is just a discussion norm. I would still understand if people feel this opens us up to reputational harms and thus is a bad idea. 
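To illustrate footnote 2's claim, here is a toy sketch (hypothetical numbers; the 10x weight is an arbitrary choice) of two "morality maps" that assign very different values to the same world:

```python
# Toy illustration of footnote 2: two "morality maps" applied to the
# same world spit out very different answers. Numbers are hypothetical.

world = {"happiness": 100.0, "suffering": 30.0}

def classical_utilitarian(w: dict) -> float:
    # Weights happiness and suffering symmetrically.
    return w["happiness"] - w["suffering"]

def strongly_suffering_focused(w: dict) -> float:
    # Weights suffering 10x as heavily (an arbitrary choice).
    return w["happiness"] - 10.0 * w["suffering"]

print(classical_utilitarian(world))       # 70.0:   looks like a good world
print(strongly_suffering_focused(world))  # -200.0: looks like a bad world
```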
