
TLDR: GiveWell’s moral weights can include additional categories based on intended beneficiaries’ and prospective funders’ preferences.

Introduction

GiveWell’s cost-effectiveness analyses could incorporate a broader set of moral weight categories. Currently, the only criteria considered are preventing death at different ages and increases in income/consumption (pp. 5–6). Intended beneficiaries may have a complex set of priorities, which can resonate with prospective funders.

Covering a broader range of non-profit goals can increase international cooperation and safeguard peace. A caveat is that current power dynamics among nations may be strengthened, although individuals globally would be presented with a greater variety of lifestyle choices.

Sufficiently disaggregated national statistics can be used to estimate the impact of programs on different moral weight categories. The World Bank collates many of these statistics.

With additional moral weights, the impact of individual programs will likely be nominally smaller. This could discourage donations from funders looking to save a large number of lives or to raise others' incomes substantially. However, a well-developed set of weights can better showcase the strategic importance of specific funding foci as well as the need for philanthropic coordination.

My specific recommendations rely on a biased interpretation and selection of evidence. Research that mitigates human biases can be conducted.

GiveWell should conduct robust research into intended beneficiaries’ and prospective funders’ preferences, if doing so allows it to maintain its competitive advantage over larger charity assessors. Alternatively, GiveWell can seek to cooperate with other assessment organizations that define development more broadly.

Intended beneficiaries’ preferences

Based on in-person interactions with globally poor individuals in Asia, Africa, and Latin America, as well as international development undergraduate and graduate coursework, IPA and J-PAL resources, and other studies and expert insights, I conclude that many intended beneficiaries have at least some of the following priorities:

  • Prospects for themselves and family
  • Ability to afford emergency healthcare
  • Mutual respect within community
  • Adequate rest and physical health
  • Enjoyable living environment
  • Healthy family relationships
  • Safety from conflict and financial risks

This perspective suggests that moral weights should include some of the following categories:

  • Employment and underemployment
  • Education rates and quality
  • Preventive healthcare and insurance
  • Emergency healthcare availability and affordability
  • Community epistemics and relationships
  • Healthy and strenuous exercise
  • Rest, sleep, noise levels
  • Environmental sustainability
  • Spousal cooperation and relationships
  • Treatment of children
  • Civil and international conflict
  • Local crime rates

These categories can be understood as examples of what intended beneficiaries may value and what could be measured to resolve and prevent underlying issues. An extensive study should be conducted to understand these preferences better. Alternatively, existing studies can be synthesized by an impartial team or by software. Enumerator bias should be controlled for, including through non-leading survey design and the use of impartial, trusted enumerators.

Funders’ and prospective funders’ values

The values commonly presented in the Giving Pledge letters and by the top 100 Forbes billionaires include, in my reading:

  • Family and business success
  • Healthcare innovation and longevity
  • Technological innovation and competitive advantage
  • Meaningful AI advancement and safety
  • Education and upskilling
  • Market growth and brand recognition
  • Environmental sustainability
  • Economic inclusion (mainly within the US)
  • Art, culture, heritage

This would suggest that GiveWell moral weights should include:

  • Family cooperation and relationships
  • Assuming that prospective funders would be willing to support other families’ happiness as well
  • Global value chain (GVC) participation
  • Healthcare systems advancement
    • Assuming the willingness to share innovations abroad
  • Life expectancy at different ages
  • Supplier efficiency increase
  • Education rates and economic relevance, job training
  • Total and brand product consumption
  • Climate change mitigation, adaptation, and preparedness
  • Impact of industrialization on nature and animals
    • For example, industrialization can prevent the suffering of wild animals
  • Income equality
    • Assuming preference for income equality also abroad
  • Artistic expression, cultural celebration, heritage preservation
    • Assuming preferences for such also in other nations

My perspective may be biased by the set of resources that I reviewed, the order in which I reviewed them, and the frameworks that I used to filter and synthesize prospective funders’ preferences. GiveWell should dedicate significant effort to understanding its customers better in order to present a competitive product.

International cooperation, security, power distribution 

Programs with a broader set of objectives can have wider impacts on international cooperation, security, and power distribution.

Job training and education

More inclusive job training and education can prevent the rise of elites abroad, which can disempower these nations compared to those where elites have already emerged. However, greater economic inclusion and GVC integration can safeguard global peace: countries that trade together have more to lose in a conflict.

Family and community relationships

Positive family and community relationships can influence constituents’ and leaders’ preference for peaceful conflict resolution. Cooperative norms among nations with weaker institutions become increasingly important as destructive technologies become more affordable.

Health

Health can have little direct impact on international security, although, indirectly, increased economic integration due to a healthier workforce can prevent the use of force in conflict resolution.

Environment

The profitability of environmental commitments and green technology adoption should advantage nations with the greatest potential to implement relevant changes and develop technologies. Climate change mitigation, adaptation, and preparedness can prevent disputes related to environmental migration. Considering nature and non-human animals can increase societal empathy, which can motivate peaceful conflict resolution.

Art, culture, heritage

The advancement of art, culture, and heritage can have a wide range of impacts, depending on the sentiments it inspires. For example, a gallery that presents family farm pictures can have a widely different impact from a place that glorifies medieval torture instruments.

World Bank data

The World Bank periodically collects thousands of indicators relevant to various moral weight categories and collates thousands of additional datasets. GiveWell should peruse these datasets to find the most relevant, most complete, and least biased statistics.

While the DataBank databases display national-level data, national statistics may exist at the individual, household, and other levels that present sufficient disaggregation and sample sizes for statistically robust studies.

Examples of metrics relevant to moral weight categories preferred by both beneficiaries and prospective funders include (World Development Indicators):

  • “Labor force with basic education (% of total working-age population with basic education)”
  • “Primary education, pupils (% female)”
  • “Firms offering formal training (% of firms)”
  • “Children in employment, unpaid family workers (% of children in employment, ages 7-14)”
  • “Women participating in the three decisions (own health care, major household purchases, and visiting family) (% of women age 15-49)”
  • “Births attended by skilled health staff (% of total)”
  • “Number of people spending more than 25% of household consumption or income on out-of-pocket health care expenditure”
  • “CPIA policy and institutions for environmental sustainability rating (1=low to 6=high)”
  • “Total fisheries production (metric tons)”

Nominal values, strategic importance, philanthropic cooperation

A greater variety of moral weight categories will make the contribution of individual programs nominally smaller. For example, a chlorine dispenser program can greatly improve an aspect of WASH metrics, which fall under the health moral weight category, as well as community relationships and culture (chatting during water collection). However, the program may have no impact on education, the environment, global value chain integration, other aspects of health, or art. Thus, the impact of this globally top program can look relatively small. This small value could discourage a prospective donor from selecting the program.
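This dilution effect can be illustrated with a minimal sketch: under an equal-weight average, the same program scores lower simply because more categories are in the denominator. The category names, per-category impact values, and equal weighting are all hypothetical assumptions for illustration.

```python
# Illustrative sketch of nominal dilution: a program's equal-weight average
# score shrinks as more moral-weight categories are added, even though its
# absolute impact is unchanged. All values are fabricated for illustration.

def aggregate_score(impacts, categories):
    """Equal-weight average of per-category impacts over all categories;
    categories the program does not touch contribute zero."""
    return sum(impacts.get(c, 0.0) for c in categories) / len(categories)

# A chlorine dispenser program with strong impact on two categories.
impacts = {"health": 0.9, "community": 0.4}

narrow = ["health", "community"]
broad = narrow + ["education", "environment", "gvc_integration", "art"]

print(round(aggregate_score(impacts, narrow), 3))  # → 0.65
print(round(aggregate_score(impacts, broad), 3))   # → 0.217
```

The program's absolute contribution is identical in both cases; only the denominator changes. This is why the post argues that nominally smaller scores should be read as a signal of strategic specialization and a case for philanthropic coordination, not as reduced impact.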

Businesses, however, can see the strategic importance of supporting specific aspects of global development. For example, a company that supplies clean water in rural areas can be interested in early brand recognition in an emerging market by donating chlorine dispensers, in addition to its philanthropic contribution.

With multiple moral weight categories, it will become apparent that an individual, or a single foundation, cannot resolve all global issues by itself. This can motivate philanthropic coordination.

While GiveWell can facilitate this philanthropic coordination by implementing additional categories in its own analyses, it can also cooperate with other assessors that evaluate charities based on a broader range of impact criteria.

Note on personal biases

I am biased by the selection of resources that I engaged with, their order, and the frameworks that I reviewed them with. It is possible that intended beneficiaries and prospective funders have different priorities. Research that mitigates human biases can be conducted.

Conclusion

I suggested that GiveWell should include additional moral weight categories in its cost-effectiveness analyses or cooperate with charity assessors that use broader impact criteria. I hypothesized intended beneficiaries’ and prospective funders’ preferences. Further, I briefly discussed the effects of a broader set of philanthropic objectives on international cooperation, security, and power distribution. Relevant World Bank data series and example metrics were overviewed. The impact of nominal changes in GiveWell’s analyses on the global philanthropic landscape was discussed. I concluded with a note on personal biases.

Comments (7)



Strong upvote! I came here to say something similar.  One of your most compelling points is addressing the needs and wants of the intended beneficiaries, in contrast with pursuing the most economically efficient cause area. I think there is significant moral weight in ensuring people have what they want and need, which cannot be commodified. 

Thanks!

I'm going to say yes, but it doesn't matter too much, because while money and survival aren't everything, they're probably more like 90% of things.

This is because money is very useful for most things (and the areas where money isn't useful are cherry-picked, rare examples). Money isn't a total panacea, but it's the closest we've had to one, with the alternatives being worse.

More generally, because of limited resources, GiveWell must prioritize. The world where people's preferences and economies are met is better, but also a naive, fabricated option at scale as of October 14th, 2022.

Links below on the importance of economics and money:

https://www.lesswrong.com/posts/hRa5c5GaMNkNGtnXq/insights-from-modern-principles-of-economics

https://www.lesswrong.com/posts/QyZvL6hrmS6AEXJLF/things-i-wish-they-d-taught-me-when-i-was-younger-why-money

I think this applies in settings where people know how to spend money to maximize utility and enjoy (money independent) good relationships and people-centered systems.

Let me argue that the normative environment matters more than money. People in absolute poverty can be doing great if they are safe, know that they will receive treatment if they need it, have many quite cool friends around them and loving families, and always have something to learn that makes them better in some way.

Monetarily, this can be achieved with health insurance and maybe textbooks/various informational radio shows. Otherwise, it is the norms. Individuals cannot spend on normative development, because others need to progress with them.

For example, I stayed in a $60/month place and it was cool because of the engineering students' housemates' norms (helped me install my bednet, great convos on race and gender, mutual respect for personal space but enjoyment to greet), guard and good padlocks (we had thieves outside of the doors for a few hours one night but they did nothing because they did not have equipment to cut locked iron doors), malaria testing available about 50m away from the door and medicine for $4, and great work environment and caring colleagues.

I also stayed (just for a month) at a $200/month place, where the landlady complained in front of her two young daughters that contraception was not popular, so she had regrets; she also gave me incorrect information to make me sign the (vague) lease. Apparently, her cleaning lady stole her valuables while she was away. I also saw four(?)-year-olds play with imitation money. The boys took the money from the girl/denied her the money when she was excited to play.

My argument is that a $3/day (rent+food) secure place with cool, nice people who normatively enjoy cooperative progress is better than a $10/day place that is less secure, where, it can be argued, families are not as loving, relationships are not as respectful, and mutual support and inspiration to learn are limited.

This is to illustrate how it can be argued that giving  individuals money can have limited impact, if the normative environment is not up to speed. (Maybe they can try to make people sign vague lease agreements with higher value, thieves can get better equipment, men can make it a reality that women do not get money, and children can continue to be rejected while receiving more expensive toys.)

We do not know how different normative environments are. The two places which I described were a walking distance apart. If you just give people money, you don't know what is going to scale up.

The claim I'm making is that at scale, trade-offs like this will exist, and thus the world where everyone has less money but more of everything else is essentially a fabrication.

Also, market forces are more robust in that even if everyone is guided by self-interest, good things will still happen. And at scales of 1000+, this is very relevant.

OK, for now I disagree but the time when I agree can come within a few years.

Thank you for your entry!
