When people ask what aspiring effective altruists work on, I often start by saying that we do research into how you can help others the most. For example, GiveWell has found that distributing some 600 bed nets, at a cost of $3,000, can prevent one infant from dying of malaria. For the same price, they have also found, you could deliver 6,000 deworming treatments, each of which works for around a year.

A common question at this point is 'how can you compare the value of helping these different people in these different ways?' Even if the numbers are accurate, how could anyone determine which of these two possible donations helps others the most?

I can't offer a philosophically rigorous answer here, but I can tell you how I personally approach this puzzle. I ask myself the question:

  • Which would I prefer, if, after making the decision, I were equally likely to become any one of the people affected, and experience their lives as they would? [1]

Let's work through this example. First, we'll scale things down to a manageable number of people: for $5, I could offer 10 children deworming treatments, or alternatively offer 1 child a bed-net, which has a 1 in 600 chance of saving their life. To make this decision, I should compare three options:

  1. I don't donate, and so none of the 11 children receive any help
  2. Ten of the children receive deworming treatment, but the other one goes without a bed-net
  3. One child receives a bed-net, but the other ten go without deworming
If I didn't know which of these 11 children I was about to become, which choice would be more appealing?

Obviously 2 and 3 are better than 1 (no help), but deciding between 2 and 3 is not so simple. I am confident that a malaria net is more helpful than a deworming tablet, but is it ten times more useful?
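To make the quantitative side of this concrete, here is a minimal sketch of how the $5 choice can be framed as an expected-value calculation from behind the veil. The 'wellbeing' figures are placeholder assumptions for illustration only, not estimates from GiveWell or anyone else; the 1-in-600 figure comes from the $3,000-per-life numbers above.

```python
# A minimal sketch of the $5 comparison from behind the veil of ignorance.
# The wellbeing values below are illustrative placeholders, not real estimates.

P_NET_SAVES_LIFE = 1 / 600   # chance the single bed net averts a death ($3,000 buys 600 nets per life saved)
N_CHILDREN = 11              # 10 candidates for deworming plus 1 candidate for a bed net

VALUE_OF_LIFE_SAVED = 100.0  # hypothetical wellbeing units from averting the death
VALUE_OF_DEWORMING = 1.0     # hypothetical wellbeing units from one year of deworming

# Option 2: ten children are dewormed, one child goes without a net.
ev_option_2 = (10 / N_CHILDREN) * VALUE_OF_DEWORMING

# Option 3: one child gets a net (a 1-in-600 chance it saves their life), ten go undewormed.
ev_option_3 = (1 / N_CHILDREN) * P_NET_SAVES_LIFE * VALUE_OF_LIFE_SAVED

print(f"Option 2 (deworming): expected value if I could be any of the children = {ev_option_2:.4f}")
print(f"Option 3 (bed net):   expected value if I could be any of the children = {ev_option_3:.4f}")
```

On these made-up numbers the deworming option wins; the bed net only comes out ahead if averting the death is worth more than about 6,000 deworming treatments (the same ratio as the $3,000 figures above), which is exactly the judgement the thought experiment is asking you to make.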

This question has the virtue of:
  • Being 'fair', because in theory everyone's interests are given 'equal consideration'
  • Putting the focus on how much the recipients value the help, rather than how you feel about it as a donor
  • Motivating you to actually try to figure out the answer, by putting you in the shoes of the people you are trying to help.
You'll notice that this approach looks a lot like the veil of ignorance, a popular method among moral philosophers for determining whether a process or outcome is 'just'. It should also be very appealing to any consequentialist who cares about 'wellbeing' and thinks everyone's interests ought to be weighed equally. [2] It also looks very much like the ancient instruction to "love your neighbor as yourself".

In my experience, this thought experiment pushes you towards asking good concrete questions like:
  • How much would deworming improve my quality of life immediately, and then in the long term?
  • How harmful is it for an infant to die? How painful is it to suffer from a case of malaria?
  • What risk of death might I be willing to tolerate to get the long-term health and income gains offered by deworming?
  • And so on.
I find the main weakness of applying this approach is that thousands of people might be affected in some way by a decision. For instance, we should not only consider the harm to young children who die of preventable diseases, but also the grief and hardship experienced by their families as a result. But that's just the start: health treatments delivered today will change the rate of economic development in a country and therefore the quality of life of all future generations. A big part of the case for deworming is that it improves nutrition, and thereby raises education levels and incomes for people when they are adults - benefits that are then passed on to their children and their children's children.

This doesn't make the question the wrong one to ask; rather, tracking and weighing the impact on the thousands of people who might be affected by an action is beyond what most of us can do in a casual way. However, I find you can still make useful progress by thinking through and adding up the impacts on paper, or in a spreadsheet. [3] When you apply this approach, it is usually possible to narrow down your choices to just a few options, though in my experience you may then not have enough information to confidently decide among that remaining handful.
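To illustrate the 'on paper, or in a spreadsheet' step, here is a minimal sketch of what such a tally might look like for one option. The groups and every figure are hypothetical placeholders; the point is only the structure of listing everyone affected and adding up the impacts.

```python
# A toy 'spreadsheet' for one option: list everyone plausibly affected and add up
# the impacts. Every figure here is a made-up placeholder, not an estimate.

impacts = {
    # group affected: (number of people, hypothetical wellbeing units gained each)
    "children directly treated": (10, 1.0),
    "parents and siblings": (30, 0.2),
    "the same children as higher-earning adults": (10, 0.5),
    "their future children": (20, 0.1),
}

total = 0.0
for group, (count, value_each) in impacts.items():
    subtotal = count * value_each
    total += subtotal
    print(f"{group}: {count} people x {value_each} = {subtotal:.1f}")
print(f"Total: {total:.1f}")
```

Building the same kind of table for each option and comparing the totals is crude, but it forces you to state how many people you think are affected and how much each of them plausibly gains, which is usually where the real disagreements show up.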

--

[1] A very similar, probably equivalent, question is: Which would I prefer if, after making the decision, I then had to sequentially experience the remaining lives of everyone affected by both options?

[2] One weakness is that this question is ambiguous about how to deal with interventions that change who exists (for instance, policies that raise or lower birth rates). If you assume that you must become someone - non-existence is not an option - you would end up adopting the 'average view', which has virtually no supporters in moral philosophy. If you simply ignored anyone whose existence was conditional on your decision, you would be adopting the 'person-affecting view', which itself has serious problems. If you do include those people in the population of people you could become, and add 'non-existence' as the alternative for the choices which cause those people not to exist, you would be adopting the 'total view'.

[3] Alternatively, if you were convinced that these long-term prosperity effects were the most important impact, and were similarly valuable across countries, you could try to estimate the increase in the rate of economic growth per $100 invested in different projects, and just seek to maximise that.

Comments

This is a neat approach, Rob, and some form of it seems likely to be one of the best ways of thinking about this. I think the emphasis on putting yourself in the shoes of those you're trying to help rather than acting for yourself is particularly valuable. I think there is one extra difficulty that you haven't mentioned, though, which is to do with people having other preferences than yours.

Even if I'm able to work out that, given a random chance of being one of the participants, I would prefer 2 to 3, it doesn't necessarily follow that 2 is preferable to 3 in an objective sense. It is interesting to imagine what the participants themselves would choose behind your veil (if they were fully informed about the tradeoffs etc.).

In many cases, one finds that people tend to think that their own condition is less bad than people who don't have the condition think it is. (That is, if you ask sighted people how bad it would be to be blind, they say it would be much worse than blind people do when asked.) This suggests that, behind a veil of ignorance where self-interest is not at play, those at risk of malaria but not worms might regard treating worms as most important, while those at risk of worms but not malaria would prioritise treating malaria. It seems hard to know whom to prioritise then.

There's also the eternal problem with imagining what one would choose - people often choose poorly. I assume you're making some sort of assumption about choosing under the best possible conditions. It may be, though, that your values depend on your decision-making conditions.

Of course, you still have to choose, and like you say it's clear that 2 and 3 are both preferable to 1. I think this tool will get you answers most of the time, and can focus your mind on important questions, but there's an intrinsic uncertainty (or maybe indeterminateness) about the ordering.

I would go for:

1) use their preferences and experiences (pretend you don't know what you personally want)

2) imagine you knew everything you could about the impacts.

Which I think is considered the standard approach when thinking behind a veil.

As you say, you might find it hard to do 1) properly, but I think that effect is small in the scheme of things. It's also better than not trying at all!

"This suggests that, behind a veil of ignorance where self-interest is not at play, those at risk of malaria but not worms might regard treating worms as most important and those at risk of worms but not malaria would treat malaria."

Wouldn't they then cancel out if you took the average of the two when deciding?

[anonymous]

I know you qualify this process as your own heuristic rather than a philosophical justification, but I fail to see the value of empathetic projection in this case, which, in practice, is an invitation to all sorts of biases. To state just two points: (i) imagining the experiential world of someone else isn't the same as, or anywhere near to, experientially being someone else; (ii) it is not obvious that the imagined person's emotional and value set have any normative force as to what distributions we should favour in the world, i.e. X preferring Y to Z is not a normative argument for privileging Y over Z.

In Rawls' original position, judgement is exercised by a representative invested with a book's worth of qualifications as to why its conclusions are normatively important, i.e. Rawls tries to exactly model the person as free and equal in circumstances of fairness (it has frequently been argued, quite correctly, that Rawls' OP is superfluous to Rawls' actual argument, for the terms of agreement are well-defined outside of it). In the case of your procedure, judgement is exercised by whoever happens to be using it.

IMO, the possibility of normative interpersonal comparisons requires at least: (i) that we can justify a delimited bundle of goods as normatively prior to other goods; (ii) that those goods, within and between themselves, are formally commensurable; (iii) that we can produce a cardinal measure of those goods in the real world; (iv) that we use that measure effectively to calculate correlations between the presence of those goods and the interventions in which we are interested; (v) that we complement this intervention efficacy with non-intervention variables, i.e. if intervention X yields 5 goods and intervention Y 10 goods, but we can deliver 2.5 X at the price of 1 Y in circumstance Z, then in circumstance Z we should prioritise intervention X.

I'm sure that, firstly, you know this better and more comprehensively than I do, and secondly, that this process itself is a highly ineffective (i.e. resource-consuming) means of proceeding with interpersonal comparisons unless massively scaled. That said, I don't see why it shouldn't be a schematic ideal against which to exercise our non-ideal judgements. Your heuristic might roughly help with (iii), and in this respect might be very helpful at the stage of first evaluations, but there are more exacting means, and four other stages, besides.
