
The Economist, last week: "[EA] doesn’t seriously question whether attempting to quantify an experience might miss something essential about its nature. How many WALYs for a broken heart?" (Nov 2022, Source)

HLI, this week: "...Hence, an overall effect of grief per death prevented is (0.72 x 5 x 0.5) x 4.03 = 7.26 WELLBYs" 

 

Great article – well done!!!

Well, if we think feelings matter, we should try to quantify them in a sensible way. That's what we try to do.

But I share the sentiment that you really do miss something if you try to quantify feelings by measuring something other than feelings, such as income.

Here’s one problematic case. A woman is pregnant. She can take a drug that will treat an illness but will cause substantial birth defects (a real-life example of this would be Thalidomide). According to TRIA, the mother would do little or nothing wrong in taking the drug because the foetus currently has quite weak interests in its own future.

I don't think this is accurate. In explaining the wrong of pre-natal injury (which causes harm to the future adult), McMahan writes that "we must evaluate the act in terms of its effect on all those time-relative interests it affects, present or future." (The Ethics of Killing, p. 283.)  That is, while the pre-natal being has little or no time-relative interest in avoiding the pre-natal injury, the future adult's time-relative interests would be gravely affected (we may suppose), which explains why the pre-natal injury is a morally weighty affair.  (In the case of pre-natal death, by contrast, there are no future time-relative interests to be negatively affected; the death prevents those interests from arising in the first place.)

You outline a moral dichotomy between the following:

  1. Actions which negatively affect a person's future interests,
    • e.g. a mother taking a drug which causes birth defects in her child,
    • which are morally wrong
  2. Actions which prevent the occurrence of a person having future interests,
    •  e.g. a mother preventing the birth of her child,
    • which are morally neutral

It seems to me that longtermism explicitly rejects this dichotomy, because longtermists believe the prevention of the occurrence of the interests of innumerable future people would be a catastrophic moral loss. A believer in this dichotomy would argue that a human extinction event is morally neutral with respect to the interests of innumerable future people who would have lived, because the extinction event simply "prevents those interests from arising in the first place". Do you agree that this dichotomy is inconsistent with longtermism?

Not exactly -- though it is a good question!

The dichotomy merely suggests that failing to create a person does not harm or wrong that individual in the way that negatively affecting their interests (e.g. by killing them as a young adult) does.  Contraception isn't murder, and neither is abstinence.

But avoiding wrongs isn't all that matters.  We can additionally claim that there's always some (albeit weaker) reason to positively benefit possible future people by bringing them into a positive existence.  So there's some moral reason to have kids, for example, even though it doesn't wrong anyone to remain childless by choice.

And when you multiply those individually weak reasons by zillions, you can end up with overwhelmingly strong reasons to prevent human extinction, just as longtermists claim. (This reason is so strong it would plausibly be wrong to neglect or violate it, even though it does not wrong any particular individual. Just as the non-identity problem shows that one outcome can be worse than another without necessarily being worse for any particular individual.)

I'm currently taking a class with Jeff McMahan in which he discusses prenatal injury, and I'm pretty sure he would agree with how you put it here, Richard. This doesn't affect your point, but he now likes to discuss a complication to this: what he calls "the divergent lives problem." The idea is that an early injury can lead to a very different life path, and that once you're far enough down this path—and have the particular interests that you do, and the particular individuals in your life who are important to you—Jeff thinks it can be irrational to regret the injury. So, if someone's being injured as a fetus leads them to later meet the particular life partner they love and to have the particular children they have, and if their life is good, Jeff thinks they probably shouldn't regret the injury—even if avoiding the injury would have led to their having a life with more wellbeing. That's because avoiding the injury would have led to them having particular people and interests in their life which they don't in fact care about from their standpoint now. However, Jeff does add that if an early injury makes the later life not worth living, or maybe even barely worth living, then the future person who developed from the injured fetus does have reason to regret that injury. He would say that children of mothers who took Thalidomide have reason to regret that.

Hello Richard.  I'm familiar with the back-and-forths between McMahan and others over the nature and plausibility of TRIA, e.g. those in Gamlund and Solberg (2019), which I assume is still the state of the art (if there's something better, I would love to know). However, I didn't want to get into the details here as it would require the introduction of lots of conceptual machinery for very little payoff. (I even attended a whole term of seminars by Jeff McMahan on this topic when I was at Oxford.)

But seeing as you've raised it ... 

As Greaves (2019) presses, there is an issue of which person-stages count:

Are the relevant time-relative interests, for instance, only those of present person-stages (“presentism”)? All actual person-stages (“actualism”)? All person-stages that will exist regardless of how one resolves one’s decision (“necessitarianism”)? All person-stages that would exist given some resolution of one’s decision (“possibilism”)? Or something else again?

Whichever choice the TRIA-advocate makes, they will inherit structurally the same issues as one finds for the equivalent theories in population ethics (for those, see Greaves (2017)).

The version of TRIA you are referring to is, I think, the actualist person-stage version: if so, then the view is not action-guiding (the issue of normative invariance). If you save the child, it will have those future stages, so it'll be good that it lived; if you don't save the child, it won't, so it won't be bad that it didn't. Okay, should you save the child? Well, the view doesn't tell you either way!

The actualist version can't be the one at hand, as it doesn't say that it's good (for the child) if you save it (vs the case where you don't). 

I am, I think, implicitly assuming a present-stage-interest version of TRIA, as that's the one that generates the value-of-death-at-different-ages curve that is relevantly different from the deprivationist one.

Just to clarify, is the only benefit you're considering from AMF a longer life for a few people, or are you also taking into account all the people who don't get sick from malaria?

Yes, we try to account for the benefit that'd accrue to everyone who doesn't get sick. We mention this at the beginning of Section 3 and discuss the calculations in more detail in the appendix.  Overall we think these "life improving" (as opposed to "life extending") effects are relatively small. And if malaria prevention is cost-effective (which we argue depends on your philosophy), then I think it'll be the life-extending benefits that drive that. 

Currently, we're primarily relying on evidence about how malaria prevention increases incomes -- and then converting that into estimates of how it'd affect people's subjective wellbeing. We're currently missing good data on how malaria prevention affects people's feelings. 

It looks like you're only accounting for increased incomes and avoided grief when estimating the effects of AMF other than on the people who would have died. Have you looked at all into accounting for the factors GiveWell lists under supplemental adjustments in their CEA? Just to take an example, GiveWell increases AMF's benefits by 9% to account for the reduction in malaria morbidity (time spent ill). If you take out the benefits from averted deaths, the morbidity effect is still 9% of the original benefit, not 9% of the new lower benefit. I would think that (and the other supplemental adjustment effects) might moderately shift your conclusions if you aren't currently accounting for them.

We already include GiveWell's adjustments, including their supplemental intervention adjustments in the lives saved and income-generated figures we use (and then adjust in the case of the income effects).

I guess I'm not quite sure what to make of your answer here.  To take a concrete example, when computing AMF's cost-effectiveness in an Epicurean framework, how many WELLBYs/$1k are you attributing to reduced morbidity?

This is implicitly 0.42 WELLBYs per $1k if we go with the HLI adjusted figures, or it's 1.67 WELLBYs per $1k if you take GiveWell's income figures at face value. 

Again, GiveWell doesn't explicitly model the morbidity effects other than by inflating the value of malaria prevention's life-saving and income-increasing effects by 9%. We didn't tinker with the supplemental charity-level adjustments, supplemental intervention-level adjustments, or their leverage/funging adjustments because that is, I expect, a whole can of worms. Because we kept the adjustments that GiveWell uses to tweak the value of income and life, the morbidity effects that GiveWell implicitly incorporates are implicitly included in our figures as well.

Basically, if you think that the morbidity effects should merit a different adjustment than 9%, we don't account for that. If you're satisfied with 9%, then it's already accounted for, just in a weird opaque way as part of GiveWell's suite of subjective adjustments. 

Thanks for the response! If I'm understanding you right, then I'm not convinced I like your approach to this specific aspect of the model. But I do think any approach to handling the morbidity benefits is going to be very coarse without a lot of further research. 

To try and illustrate my concern, let me just give a quick example for the DRC, working in GiveWell's units of value rather than WELLBYs (because that's what I'm more familiar with). If we take GiveWell's estimate that the morbidity reduction is equal to 9% of AMF's pre-adjustment benefits, that means that per $100,000, morbidity reduction generates 0.09*8570 = 771 units of value.

Based on how I think you're doing it, when you go to calculate the non-life-extension benefits of AMF, you compute the benefits of morbidity reduction as 9% of (development benefits + avoided bereavement). If we just work with GiveWell's numbers (which don't include the bereavement effects), that would be 0.09*2582 = 232 units of value/$100,000. Then when you go to calculate the life-extension benefits, you add in a 9% adjustment for morbidity reduction, which is 0.09*5987 = 539 units of value/$100,000. But that bookkeeping doesn't make any sense, as all the morbidity benefits should be accounted for in the non-life-extension category. Doing it the way I think you are, where you're adding a factor of 0.09 to all benefits, ends up making AMF look worse in the Epicurean case, as well as in all cases with a higher neutral point for happiness.

This is actually even more important because you're applying such a big downward adjustment to GiveWell's numbers. If we divide GiveWell's estimate of the development benefits by 4, then the development benefits are about 645 units of value/$100,000 and the total morbidity reduction benefits under GiveWell's assumptions are 9% of the new total, or 597 units of value/$100,000. If you do the bookkeeping the way I think you're doing it, only 58 units of value/$100,000 of morbidity reduction would get attributed to the non-life-extension benefit category. Correctly attributing all 597 units of value/$100,000 from morbidity reduction nearly doubles the estimated non-life-extension benefits of AMF.
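To make the two bookkeeping conventions concrete, here's a minimal sketch using the approximate numbers above (all figures illustrative, and I may be misdescribing the report's actual convention):

```python
# Sketch of the two bookkeeping conventions, in GiveWell-style units of
# value per $100,000 (approximate figures from the comment above).

life_extension = 5987     # benefits from averted deaths
development = 2582        # income/development benefits
morbidity_share = 0.09    # GiveWell's supplemental morbidity adjustment

total_morbidity = morbidity_share * (life_extension + development)  # ~771

# Convention A (what I think the report does): scale each category by 9%,
# so the morbidity benefits get split across both categories.
non_life_ext_a = development + morbidity_share * development        # ~2814

# Convention B (my suggestion): morbidity reduction is life-improving,
# so all of it belongs in the non-life-extension category.
non_life_ext_b = development + total_morbidity                      # ~3353

print(f"Convention A: {non_life_ext_a:.0f}, Convention B: {non_life_ext_b:.0f}")
# Under an Epicurean view only non-life-extension benefits count, so
# convention A understates AMF's value there; the gap widens further if
# the development figure is divided by 4 as in the HLI adjustment.
```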

I recognize that to some extent we're working with made-up numbers here. But I think the general point that the supplemental adjustments need to be handled with care when doing this kind of component analysis is an important one. However, I do apologize in advance if I'm misunderstanding how you're approaching this right now. 

Thanks Joel!

This is great work, and I think it's super valuable to everyone thinking about the best ways to go about improving global health and wellbeing. I wanted to share a couple comments on it as well as a couple specific questions.

First, I think it's really good you're looking at these questions in so much detail despite the fact that they can feel unpleasant or difficult. I personally think that there's something pretty screwed up about the world when a random upper-middle class person in the United States can choose whether someone in Sub-Saharan Africa lives or dies depending on the choice of how much and where to donate. But given that's the world we live in, I think it's really important to do our best to try and figure out answers to these kinds of moral questions you're looking at. (Note: I also think it's worth trying to bring about systemic changes to remove the need for aid, but systemic change isn't something that we can snap our fingers and bring about. It's a complex process that carries its own costs and benefits, and the potential for systemic change doesn't remove the need to think carefully about options for improving wellbeing immediately). 

In light of my more general feelings, I'm pretty strongly in favor of efforts that try and ask aid recipients what they want and value, rather than trying to make these tradeoffs based on our own moral reasoning. For that reason, I've been very pleased that GiveWell has incorporated surveys in its computation of moral weights, and I'll be very excited to see the results from HLI's survey on the neutral point. I would love to see more survey work that tries to tease out views on deprivationism vs. TRIA vs. Epicureanism among individuals actually impacted by aid. However, these might be complicated enough questions that focus groups and other more in-depth approaches would work better than surveys.

With those comments out of the way, I also wanted to ask a couple specific questions:

  1. When you computed your 4 WELLBY/$1,000 cost-effectiveness estimate for AMF's development effects, is that after correcting for the previous arithmetic error from the Dozen Doubts post? (Edit: I tried to check this myself, but the Google sheet link didn't seem to work for me).
  2. Does anyone use a simple total utilitarian framework when doing this kind of cost-benefit accounting? I would think that such a framework might look sort of in between deprivationism and Epicureanism, depending on how interventions impacted fertility. I don't want to come across as endorsing the total utilitarian framework as a uniquely desirable one to use, but it seems worth at least thinking about in this context.

Hi MHR,

I also wish that these choices need not be made. 

I find myself still trying to form a view on how much we can and should outsource our moral reasoning by surveying people in general and beneficiaries in particular. I think this is a tricky question, as there are many technical questions where we don't, and probably shouldn't, survey the affected people to decide on the best course of action (e.g., setting interest rates). That being said, I would welcome more work understanding people's views on these tough philosophical questions. I think this could be a promising line of research.[1]

Answering your specific questions:

  1. Yes, this is after correcting the previous error. 
  2. I would hesitate to ever put "simple" next to "total utilitarian"! But more seriously, we considered adding this perspective, but we found it difficult to implement and even more difficult to communicate. The difficulty with incorporating fertility is when to stop counting. Take the immediate "replacement" effect. Roodman (2014) estimates that for every 4 children who die, between 1 and 2 extra children are born. So if we just stop there, adding fertility to a totalist perspective makes AMF look relatively worse. But why stop there? Why not count the next generation? If we save 4 people but this means around 2 fewer other children are born, there are still 2 net extra people, who could each have 3 children of their own. So instead of just saving 4 people (which would have been the result in the non-totalist analysis), you've actually added 8 people across the two generations! Double the value! But would it make sense to add the generation after that? What about the 4th, 5th or 6th generations? At what point are we just making up numbers? A further issue is that this whole discussion doesn't incorporate a whole lot of more speculative effects that could potentially swamp the totalist calculus. These are infamously tough problems like: is it better or worse for the rest of the population for more people to exist? That is, is the world under- or overpopulated? Does adding new people increase or decrease existential risk? I don't have good, confident answers to these questions, and I'd be sceptical of anyone who claims to! My hunch is that if you're a totalist, your view on saving lives will probably be driven not by crunching the numbers on fertility but by your speculation about how adding people affects the wellbeing of the population and existential security.
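To make the stopping-point problem concrete, here's a toy version of the arithmetic above (the replacement figure follows Roodman's range; the children-per-person figure is purely an assumption for illustration):

```python
# Toy version of the "when do we stop counting?" arithmetic.

lives_saved = 4
replacement_births_lost = 2   # Roodman (2014): 4 deaths averted -> 1-2 fewer births
children_per_person = 3       # assumed fertility of the net-extra people

net_extra_gen1 = lives_saved - replacement_births_lost   # 2
net_extra_gen2 = net_extra_gen1 * children_per_person    # 6

print("Non-totalist count:", lives_saved)                           # 4
print("Totalist, stop at gen 1:", net_extra_gen1)                   # 2: AMF looks worse
print("Totalist, include gen 2:", net_extra_gen1 + net_extra_gen2)  # 8: AMF looks better
# Each further generation changes the answer again, with no principled
# stopping point.
```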

 

 

  1. ^

    But the IDinsight report on this topic (2019) made me think this type of work may be more difficult to do in very low-income settings than I might have hoped. 

Thanks so much for your answer. I generally think what you're saying here makes sense, but I wanted to dig into one specific point. You say: 

My hunch is that if you're a totalist, your view on saving lives will probably be driven not by crunching the numbers on fertility but by your speculation about how adding people affects the wellbeing of the population and existential security.

What worries me here is that you don't need to be a totalist to have these concerns. Even under a TRIA framework, wouldn't you still care about the population-level wellbeing impacts of any intervention (at least on the portion of the population that exists both in the world where the intervention happens and in the world where it doesn't)? It feels like a little bit of a selective demand for rigor to say that this makes the total utilitarian calculus intractable, but not that it makes any of the other calculi intractable.

Still, I do recognize that total utilitarianism sometimes leads to galaxy-brained worries about higher-order effects. 

What worries me here is that you don't need to be a totalist to have these concerns.

Right, I should have clarified that the gnarly thing with totalism is considering the effect on all future 14k+ generations and the likelihood they exist, not just the higher-order effects on the presently existing population. 

However, I'm not the philosopher, so Michael may disagree with my sketch of the situation. 

Thanks HLI. I really like the post.

Pointing out an issue with the links to the sheets that are referenced. Remove everything after "/edit" to make them work (as per below); the latter one is not publicly accessible regardless:

https://docs.google.com/spreadsheets/d/1NMAU-a1X4vqjodjI6kf8KnUyCJaK9uyNvXWj5VetDZw

https://docs.google.com/spreadsheets/d/1RrBuiPVgL-t8hlr6EqkqABiaqdHMGkpvfeiqiiX49LU

---

Regarding the content: as explained in Joel's comment above, the immediate expected replacement effects are not included, and if they were, you would need to ask why stop after the first generation. Is there, however, a legitimate argument to count the first generation and stop there? Because:

  • the first-generation replacement effect is relevant immediately, or at least within the next few years. The second generation is relevant in ~20 years, when the state of the world is much less predictable. Hopefully the subjective wellbeing of people in these regions will be noticeably better in 20 years, and any replacement effect/rate might also be noticeably different.
  • it is similar in immediacy and measurability (I think) to the developmental, morbidity and grief impacts that are included.

 

Thanks.

Thanks very much for flagging the issues with the spreadsheet links. I believe I've fixed them all now but do let me know if you encounter any further issues.

Yep that makes sense to me

Thanks for the analysis!

  1. Worldview Diversification (Karnofsky, 2016): Divide your resources across different theoretical ‘buckets’ in proportion to your confidence in each theory, then choose the best option for each ‘bucket’. For example, if you have 30% credence in deprivationism and 100% credence that the neutral point is below one, you should award 30% of your resources to AMF and the rest to StrongMinds.

It is worth noting the portfolio approach corresponding to worldview diversification applies to the allocation of resources of the community as a whole, as far as I understand. So, even if one has "30% credence in deprivationism and 100% credence that the neutral point is below one", assuming AMF (or other life-saving interventions which score highly under deprivationist views) currently receives more than 30% of resources, one could reasonably direct (at the current margin) all donations to StrongMinds (or other life-improving interventions which score highly under epicurean views).
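As a toy illustration of the community-level point (all numbers hypothetical):

```python
# Worldview diversification applied at the community level: my marginal
# donation tops up whichever bucket is below its credence-weighted target.

credence_deprivationism = 0.30   # target share for life-saving charities
community_total = 1_000_000      # $ already allocated by everyone else
community_to_amf = 400_000       # already 40% to AMF, above the 30% target
my_donation = 10_000

target_to_amf = credence_deprivationism * (community_total + my_donation)
my_gift_to_amf = max(0.0, target_to_amf - community_to_amf)

print(f"Target for AMF: ${target_to_amf:,.0f}")   # $303,000
print(f"My gift to AMF: ${my_gift_to_amf:,.0f}")  # $0 -> all to StrongMinds
```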

I also wonder whether GiveWell's moral weights being majorly determined by its donors (60%) has the intention of capturing other effects besides those directly related to the death of the person. For example, Wilde (2019) analyses the effect of bednets on fertility, concluding:

[Abstract:] The effect on fertility is positive only temporarily – lasting only 1-3 years after the beginning of the ITN distribution programs – and then becomes negative. Taken together, these results suggest the ITN distribution campaigns may have caused fertility to increase unexpectedly and temporarily, or that these increases may just be a tempo effect – changes in fertility timing which do not lead to increased completed fertility

[Conclusion:] In contrast, our findings do not support the contention that erosion of international funding for malaria control, specifically of ITNs, would lead to higher fertility rates in the short-run. While our results are suggestive that this may be the case for long-run fertility, we show the exact opposite for the short-run

If the results are suggestive that decreasing ITNs leads to higher long-run fertility, AMF would tend to decrease long-term fertility. This would tend to decrease the future population size, but it is unclear whether this is good or bad.

I guess population size considerations did not play much of a role in the moral weight answers of GiveWell's donors. However, such considerations could play a major role in determining the long-term cost-effectiveness of life-saving interventions, so they should arguably be investigated.

It is worth noting the portfolio approach corresponding to worldview diversification applies to the allocation of resources of the community as a whole, as far as I understand.

It does? Says who? And why does it? Given that there have been, as far as I can tell, almost nil attempts to think through the worldview diversification approach - despite it being appealed to in decision-making for many years - it strikes me as an open question how it should be understood. I see moral uncertainty as asking a first-personal question: what should I do, given my beliefs about morality?

I also wonder whether GiveWell's moral weights being majorly determined by its donors (60 %) has the intention of capturing other effects besides those directly related to the death of the person

Ah, I too used to spend many hours wondering what GiveWell really thought about things. But now I am a man, I have put away such childish things. 

Hi Michael,

Thanks for the reply!

It does? Says who? And why does it?

The decision of which interventions to support depends on their marginal cost-effectiveness, which in turn depends on the amount of resources invested in the interventions globally, not just by me.

Given that attempts there have been, as far as I can tell, almost nil attempts to think through the worldview diversification approach - despite it being appealed to in decision-making for many years - it strikes me as an open question about how it should be understood.

I agree the worldview diversification approach is quite ad hoc, and I much prefer the softmax approach suggested here by Jan Kulveit and Gavin Leech.

Ah, I too used to spend many hours wondering what GiveWell really thought about things. But now I am a man, I have put away such childish things.

I think it is useful to understand the reasoning behind certain assumptions (e.g. giving large weight to donors' moral weights), because they may inform our own analyses. However, one should still question whether the reasoning makes sense.

Skimming through this, this is great! My only bone to pick is that I don't come away with an easy understanding of the intuition behind the key results. For example, it would be good to know (from reading the summary?) what the intuition behind the following results is:

  • Why AMF is the best under deprivationism followed by TRIA (AC=5), then TRIA (AC=25), then Epicureanism.
    • If AMF is better under deprivationism than TRIA, is this because we tend to be saving younger rather than older people from death by giving to AMF?
    • Similarly, if AMF is better under TRIA (AC=5) than TRIA (AC=25), presumably this is because we are saving some young people (younger than 25 years)?
    • AMF is the worst under epicureanism because, quite simply, death isn't bad under epicureanism (other than the pain of death), so you get relatively little from averting deaths due to malaria.
  • Why giving to AMF becomes worse if the neutral point is higher.
    • The higher the neutral point, the less well-off people of a given life satisfaction level are and therefore the less bad it is if they die from malaria.
  • Why StrongMinds is generally better than AMF (almost regardless of what your philosophical view is).
    • This could be for a few reasons I guess. Maybe:
      • People are generally pretty sad on a life satisfaction view, or the pain of death is small. So saving lives is generally just not that good.
      • StrongMinds is just really cost-effective at improving life satisfaction, compared to how cost-effective AMF is at averting deaths.

Not sure what others think, but personally I like to understand intuitions like the above!

Hi Jack, thanks for the feedback! I think your suggested intuitions are about right for the first two points.

Regarding your third point, I'm inclined to phrase it slightly differently. It's not that people are generally pretty sad on a life-satisfaction view; it's more that the people AMF saves will live hard lives. That's why we're trying to help them. If we expected them to be considerably more satisfied with their lives, then I think the cost-effectiveness comparison would look very different.

Why AMF is the best under deprivationism followed by TRIA (AC=5), then TRIA (AC=25), then Epicureanism.

Um, because these are literally the results these views are structured to give! To me, your question is akin to asking "why does consequentialism care more about consequences than deontology?" Sorry, maybe I've misunderstood. 

Why StrongMinds is generally better than AMF (almost regardless of what your philosophical view is).

To be clear, there is no intuition here! These are the outputs of an empirical analysis. There's absolutely no reason it has to be true that the purported best life-extending intervention is better, under a range of different philosophical assumptions, than the purported best life-improving one. In a nearby possible world, AMF* could have been very many times more cost-effective on the assumptions most generous to saving lives. 

Is it not true that if AMF generally saved older people, then giving to AMF would be just as good under TRIA as under deprivationism?

If so, I think it’s worth making this explicit. It’s an interesting and important point that for interventions that prolong the lives of older people, it isn’t nearly as consequential which moral theory you choose. It’s far more consequential for interventions that save young people.

I generally think intuitions like this are very useful as it allows your analysis to be applied to more scenarios than just the one at hand.

Very interesting report, thanks folks! I am very sympathetic to the WELLBY approach. I wonder if anyone shares the intuition that real improvements in subjective wellbeing don't follow linearly when people report on a 10-point scale. When I think about how I rate something like how tasty a meal was, there is a bigger real difference between a 6/10 and a 7/10 than between a 5/10 and a 6/10, even though the difference in my ratings was 1/10 in both cases. I wonder if something similar applies to ratings of SWB.

This could have important implications for which groups to target with certain interventions if, for example, through further research we found something like: the first one-point increase from the neutral point is 'worth more' than a one-point increase from a higher starting point.
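Here's a toy sketch of the worry, assuming a hypothetical convex reading of the scale (the exponent is arbitrary, just for illustration):

```python
# If reports map non-linearly onto latent wellbeing, the same 1-point
# gain can be worth different amounts at different starting points.

def linear(report):
    """Each scale point worth the same (the standard WELLBY assumption)."""
    return report

def convex(report, gamma=1.5):
    """Hypothetical convex reading: gaps between high points matter more."""
    return 10 * (report / 10) ** gamma

# Intervention A lifts 100 people from 4/10 to 5/10; B lifts 100 from 6/10 to 7/10.
for label, f in [("linear", linear), ("convex", convex)]:
    gain_a = 100 * (f(5) - f(4))
    gain_b = 100 * (f(7) - f(6))
    print(f"{label}: A gains {gain_a:.0f}, B gains {gain_b:.0f}")

# Linear: A and B tie at 100. Convex: B (~121) beats A (~101), which could
# change which groups an intervention should target.
```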

I'm fairly new to this area and have just engaged with it through HLI's posts so I'm not sure if this has been talked about before. Keep up the awesome work. 

Hi! Thanks for engaging. Yes, the issue you raise does get discussed quite a bit, and is much worried about (by effective altruists but not in general). I've got a working paper here where I review the theory and evidence and tentatively conclude people probably do interpret the difference between each unit as having the same value (i.e. the scales are interpreted linearly). 

My colleague, Casper Kaiser, who is also an HLI trustee, has a more recent paper which shows an approximately linear relationship between reports and later behaviour.

Internally at HLI we're working on doing our own survey on this stuff too!  

Awesome, will check those out.

Based on this post I felt motivated to try to give a more visual and interactive explanation of the underlying philosophies and their bearing on WELLBYs. This is now available here (with an EA forum version here).

I don't know if StrongMinds explicitly has a goal of reducing suicides, or what its predicted effect on suicide risk might be, but searching for "suicide" on the StrongMinds site (https://strongminds.org/?s=suicide) brings up a lot of results. Whether or not suicide prevention is part of their mission, treating depression would seem to potentially reduce the risk of suicide for some people. If so, some of the value of StrongMinds might come from the extension of lives. This would mean the value of StrongMinds could vary depending on which view of the harm of death we take.

Hello Rhyss. We actually hadn't considered incorporating a suicide-reducing effect of talk therapy into our model. I think suicide rates in e.g. Uganda, one place where SM works, are pretty low - I gather they are pretty low in low-income countries in general.

Quick calculation. I came across these Danish numbers, which found that "After 10 years, the suicide rate for those who had therapy was 229 per 100,000 compared to 314 per 100,000 in the group that did not get the treatment." Very, very naively, then, that's one life saved via averted suicide per 1,000 treated, or about $150k to save a life via therapy (vs $3-5k for AMF), so it probably wouldn't make much difference. But that is just looking at suicide. We could look at the all-cause mortality effects of treating depression (mental and physical health are often comorbid, etc.).
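Spelling out that back-of-the-envelope (the cost per person treated is an assumed placeholder here, not StrongMinds' actual figure):

```python
# Back-of-envelope from the Danish figures above.

rate_without = 314 / 100_000   # 10-year suicide rate without therapy
rate_with = 229 / 100_000      # 10-year suicide rate with therapy
cost_per_treatment = 150       # assumed $ per person treated (placeholder)

deaths_averted = rate_without - rate_with   # 0.00085 per person treated
treated_per_life = 1 / deaths_averted       # ~1,176, i.e. roughly 1 in 1,000
cost_per_life = treated_per_life * cost_per_treatment

print(f"~{treated_per_life:,.0f} treated per suicide averted")
print(f"~${cost_per_life:,.0f} per life saved")  # ~$176k; ~$150k if rounded
                                                 # to 1 life per 1,000 treated
```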

Is there any website or spreadsheet where we can see how different ethical views affect the ranking of charities we should donate to? Specifically, where we can plug in our own parameters into the model, like we can do with the GiveWell CEA, except for more complicated positions (like the ones expressed in this post)? 

If not, would such a project potentially be in the cards for HLI?

Unfortunately, no. I don't know of a website or spreadsheet that does this. I made a toy version of this just for the AMF vs GiveDirectly and StrongMinds comparison, but it only includes the topics discussed in this report.

This is on HLI's wishlist of things to do, but we have a very small team, so I'm not sure when we'll get around to it. I'm personally very into this idea. 

Serious question: Tanae, what else would you like to see? We've already displayed the results of the different ethical views, even if we don't provide a means of editing them.

I've really enjoyed reading this topic and the comments here as well. I'm really excited to join the community, and although I find myself vastly under-educated in the terminologies and brilliantly contrasting epistemological memes, I do hope to add some Earth to the arena.

One of the most dangerous fallacies in the practical application of social ethics and structured altruism is an issue that tends to occur in the quantification of third-party experiential qualia, which weeatquince, MichaelPlant, and Richard Y Chappell beautifully highlighted. The specific point I'd like to substantiate is the sunk-cost fallacy that becomes almost a default whenever a presupposing individual or party quantifies a third party's hypothetical experiential values in service of its own charted and plotted undertakings.

That presupposition being: that they, the individual or party doing the quantifying of the third party's values, actually care about the individual whom their oversight and conjecture is going to impact (assuming there is an actual real-world application for which the hypothesis is being formulated, whether the individual is specific or hypothetical).

Notice the extreme difference in how much consideration we give to every aspect and dynamic when a loved one is in mind. With a loved one whom we care about, we do not leave room for any variable to arise and cause them harm, and we are committed to their safety and wellbeing; this we can very well consider a universal human virtue, as we do take responsibility for those we love and care for.

If we look at the characters in the old tale of Cain and Abel, the shame Cain denies facing is elementally psychological in the depicted scenario, expressed in the question "Am I my brother's keeper?" - which is the very sunk-cost fallacy that is the whole of my point.

We are "our brother's keepers"  when we sincerely care. We take ownership of the risks and the dangers personally and we put ourselves responsible and accountable for all possible dangers and damages that may occur to or upon another - when we love another. 

That is the only stance from which quantification can validly proceed scientifically, as it is the only stance which allows for the continued accounting of all data.

A third-party stance that conjectures hypotheses does not account for all data, by the very nature of its outside-party perspective; ergo, it forms of itself a sunk-cost fallacy, limited in the dimensions from which it can quantify values.

And we can see this is a default of the process in every application of it we find in history, so much so that it's been the very reason we created terms like "socially immoral" and "community guidelines" to begin with, because there is an inherent understanding that when the application of ethics comes from a policy whose authors are alien to its subjects, there is an X-factor of unknown immoral infliction (or, just as likely, a known immoral infliction) due to the policymakers being alien to the subjects over whom they are policing.

To me this does not leave us at an impasse. The impasse we've so far carried over this clearly defined and outlined structural flaw is the shadow of the denial we've ironically deflected ownership of, and there is a vast amount of uncharted ground we could invent and create from it, if we ever did commit to the admittance of these factors and the sincere application of the illuminated avenues and realities arising from the implications we're left with.

Here's one example use-case I can think of just off the top of my head: upon the first sign of a politician lacking care for the harm and damage their actions have caused, they should no longer be entrusted with governance of the area and criteria for which they were not seen holding themselves accountable.

That should be common sense; it should be law. In fact, I might argue that it's even a threat to national security not to have that as a deterrent to protect the innocent.

Maybe I'm wrong, and maybe I'm even shortsighted somewhere, but that was just off the top of my head, and I do hope someone more knowledgeable and better educated than I am can either take this to practical application or teach me how, because I care enough to want to see it through.

No one should be entrusted with care over things they do not care about, and the verification of care is as simple as measuring the actions taken in responsibility over the care of it, not just words. Anyone can say any words. The proof is in the actual real world pudding.

The difference between a person who actually cares for the individuals in a matter and a person who doesn't care for them is as enormous as the difference between a native Alaskan Eskimo being asked to map out a valley's frozen lake and a blind flat-Earther from New Mexico who's never seen snow before in his life and thinks you're talking about "hypothetical" things that only exist in his imagination but are completely non-existent to him.

And that's what it is like every time we think our 2-dimensional conceptualizing can somehow grasp a 5-dimensional reality impact - one which accounts for ongoing time, with our inarguable accountability unquestionably married to the answering-for and managing of all risks and interrelating aspects, including all interpersonal impacts caused by, from, to, and with the influences over the ultra-personal values of the lives of others we plot to tread on.

It's always hypothetical to the CEO until it's money in the bank - and then it becomes "proof of concept" for them, while it's called structural violence to us.

If you found this post helpful, please consider completing HLI's 2022 Impact Survey.

Most questions are multiple-choice and all questions are optional. It should take you around 15 minutes depending on how much you want to say.
