GeorgeBridgwater

181 karma · Joined Oct 2018

Comments (14)

It seems like the 80/20 version of this would be appropriate for a lot of candidates. I assume that many EA candidates have a higher bar for claims made in typical fundraising material, so they would benefit from delving deeper into the numbers. This depends on how much trust you already have in the organisations: if you think groups are already assessed with enough rigour by funders, e.g. they have a GiveWell recommendation, then the time cost of going through the numbers makes less sense. I think this would work best for meta-groups like the organisation I work for, Animal Ask, or others like Animal Advocacy Careers, Charity Entrepreneurship, 80,000 Hours, Rethink Priorities, the Global Priorities Institute, etc.

Hey Sofia, great idea. Groups have usually indicated they would spend <10% of the time we spend on research without our involvement, so this seems like a more viable idea than one might expect. There are some reasons this may not entirely cross-apply to the rest of our work, such as concerns about groups anchoring too much on their own shallower research, which usually results in more optimistic assessments (the Optimizer's Curse), or a possible selection effect, with the groups willing to do this being more likely to make better decisions anyway. We are also tracking the asks other similar organisations are using in the regions or areas we have worked in. This gives us some sense of this, but a more direct experiment of this kind could be valuable, particularly if we ran it with a few groups using different advocacy methods. We will look into the idea, as well as some of the other ways we could amend our pre/post surveys, before we partner with the next group!

Hello Joel,

I agree that, in hindsight, a summary of each indicator would probably have been useful to give the reader an overall assessment based on the information I reviewed in the report.

$$\text{wellbeing}_{\text{measured}} = \text{accuracy} \times \text{importance} = (\text{reliability} \times \text{cardinality}) \times (\text{validity} \times \text{wellbeing account})$$

That model is roughly how I was thinking of this assessment, with validity and interpersonal comparability determining how much I would update on a perfectly accurate measure, and reliability giving some sense of how wide the confidence interval would be from a real-world measurement. The trade-off between these, across groups of indicators and individual indicators, adds some nuance: a single physiological measure is reliable but can vary due to numerous other factors, whereas a combination of them allows us to measure welfare benefits that health measures can't easily capture. For example, with no other context it may be better to minimise disease rates than blood glucose levels, but disease rates alone would be unable to assess the importance of different types of environmental enrichment.
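
To make the multiplication concrete, here is a minimal sketch of that model in Python; the 0-1 scales and all the component scores are hypothetical, chosen only to illustrate the trade-off between a single reliable physiological measure and a combined battery.

```python
# A minimal sketch of the multiplicative model above. The component
# scores (on 0-1 scales) are hypothetical illustrations, not estimates.

def measured_wellbeing(reliability, cardinality, validity, wellbeing_account):
    accuracy = reliability * cardinality
    importance = validity * wellbeing_account
    return accuracy * importance

# A single physiological indicator: highly reliable, but its link to
# overall wellbeing is weaker (lower validity).
blood_glucose = measured_wellbeing(0.9, 0.7, 0.3, 0.5)

# A combined battery of indicators: individually noisier, but together
# tracking more of what matters (higher validity).
combined_battery = measured_wellbeing(0.6, 0.7, 0.8, 0.5)

print(blood_glucose, combined_battery)  # 0.0945 vs 0.168
```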

If more people comment to express interest in an overview of each section, I am happy to invest the time to go back through the report and add these sections.

I think the ideal system would have a single measure that perfectly tracks what matters, no?

I definitely agree, which is partially why I put an example of self-reports in humans (which are, in my opinion, as close to ideal as we can get) alongside the measures we have available for other animals. This is what I currently view as the best available ('ideal') system, given the weaker methods available.

My last question is: what are y'all's thoughts on making across-species comparisons? This is the question that really interests me, and most of the indicators presented seem much, much more suitable to within-species assessments of welfare.

In this context, many of these indicators struggle with cross-species comparisons. Take cortisol, for example: different species have different baseline cortisol levels, making it difficult to compare levels, or even percentage changes, across species. We can gain some sense of the relative importance of different improvements or events for an individual from the degree of change in an indicator; an example within an operant test could be a human showing a mild preference for social contact over food, compared to a fox showing the opposite relationship. Yet this still only gives us information within the range of each animal's utility function and doesn't tell us how those ranges compare. It's a challenging question, and because of this we have mostly been deferring to Rethink Priorities' work on moral weights and the Open Philanthropy Project's report on consciousness. At the moment, we approach this by using an assessment of an animal's quality of life to gauge how important an improvement is within that individual's utility function, and then adjusting this based on these considerations. However, I would be cautious about concluding that an ask is more promising if the deciding factors rest on cross-species comparisons, given the range of plausible views on the topic.
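
As a rough illustration of the within-species comparison I have in mind, here is a hedged sketch that standardises a cortisol reading against its own species' baseline distribution; the species values are made up, and the resulting scores remain incomparable across species for exactly the reasons above.

```python
# A hypothetical within-species normalisation: raw cortisol levels are
# expressed as standard deviations from the species' own baseline mean.
# All numbers are illustrative only.
import statistics

baseline = {"human": [8, 10, 12, 11, 9],      # ng/ml, made up
            "fox":   [55, 60, 58, 62, 65]}    # ng/ml, made up

def standardised_change(species, observed):
    """Observed level as a within-species z-score."""
    mean = statistics.mean(baseline[species])
    sd = statistics.stdev(baseline[species])
    return (observed - mean) / sd

# Comparable *within* each species, but a given z-score in a human and
# a fox still can't tell us how the two species' utility ranges compare.
print(standardised_change("human", 14), standardised_change("fox", 75))
```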

Thanks for the feedback.

In these sorts of discussions, I don't think comparing ourselves to the rest of the population is a great guide. It should probably serve as our base rate, but many other factors can affect how income impacts our happiness.

If we look at the overall population, the income level required to get the maximum benefit from consumption is pretty high. However, there is some evidence that people who adopt voluntary simplicity can achieve greater life satisfaction on less income. Boujbel's (2012) explanation for this is 'that the control of one's consumption desires is a significant mediator of the relationship between voluntary simplicity and life satisfaction among consumers who have limited financial resources'.

So the question is: can you reduce your consumption desires if you start life with high consumption? I've seen a few people achieve this, and I think I've reduced my own consumption desires over time as well. But this is pretty weak evidence for broader inferences, so I don't put too much weight on it.

I'd be much more interested in studying how income (specifically consumed rather than donated income) affects life satisfaction and value drift amongst EAs. I'd weight this much more heavily than general-population findings for my own decision-making. If I had to bet, I would expect findings similar to Boujbel's.

Excellent point. I was considering this when writing the report, as it would be possible to use remote volunteers. This would make it a great way for EA university groups to volunteer their time and would encourage additional engagement from members. Beyond acting as a pure commitment device, volunteers will need to be able to answer some basic questions about the form of psychotherapy they are delivering. However, the skill requirement for providing support is still very low, and training would be short. One of the groups in the incubation program is looking into this more, and I think it could be a really great model for giving more people an effective way to donate their time.

Something that could explain the public backlash is the large percentage of people who are so-called 'non-traders' or 'zero traders' when asked to do time trade-offs for weighting QALYs: about 57% of respondents won't trade off any length of life for quality increases. As you note, the public's revealed preferences show they will trade off quality for quantity, but when asked to actually think about it, a lot of people refuse to do so, which would explain why a large proportion of the public views an argument for improved quality of life vs reduced length of life poorly. The finding is the same for QALY vs $ trade-offs, with a large proportion of people unwilling to trade off any amount of money against the value of a life.
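
For readers unfamiliar with the elicitation, here is a minimal sketch of the standard time trade-off calculation with hypothetical numbers: a respondent indifferent between t years in a health state and x years in full health implies a QALY weight of x/t, and a 'zero trader' sets x = t.

```python
# A minimal sketch of the time trade-off (TTO) elicitation, under the
# standard textbook formulation; the numbers are hypothetical.

def tto_weight(years_full_health, years_in_state):
    """QALY weight implied by indifference between the two prospects."""
    return years_full_health / years_in_state

print(tto_weight(7, 10))   # 0.7 -> willing to trade quality for quantity
print(tto_weight(10, 10))  # 1.0 -> a 'zero trader': no trade-off at all
```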

I would disagree with two steps in your reasoning. The first is the relative importance of different animals, but Cameron_Meyer_Shorb's comment already covers this point; in any case, your conclusion would probably not change even if you valued animals more highly, making the combined effect of an American diet equal to one, or up to maybe ten, equivalent years of human life per year ($430 of enjoyment).

Instead, I think your argument breaks down when accounting for moral uncertainty: if you are not 100% certain of consequentialist ethics, then almost any other moral system would hold you much more accountable for pain you cause than for pain you fail to prevent, particularly if we increase the required estimate for the $ value of the enjoyment gained, even if it is met. This makes it a different case from other altruistic trade-offs you might make, in that you are not trading off a neutral action.

Another argument against this position is its effect on your moral attitudes, as Jeff Sebo argued in his talk at EA Global in 2019. You could dismiss this if you are certain it will not affect the relative value you place on other beings, and by not advertising your position so as not to affect others.

I've slowly been updating towards lower expected WP (welfare point) returns to improved DO (dissolved oxygen) based on conversations I have had with the Fish Welfare Initiative. It seems likely that more fish are at the lower end of the welfare benefit from DO optimisation because of the natural incentives farmers already face regarding DO: low DO levels increase mortality, and fluctuations in air pressure can cause DO to plummet, so farmers often use an extra buffer. Therefore, any fish suffering -40 WP from DO levels alone would probably die; I think a log-normal distribution best captures this. Thanks for pointing this out, as I did not make it explicit in the report.
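
To illustrate the distributional claim, here is a rough sketch with made-up parameters: if farmers already buffer against lethal DO drops, a log-normal over welfare points lost puts most fish at modest losses, with only a thin tail of severe cases.

```python
# A rough sketch of a log-normal model of WP lost to poor DO per fish.
# The mu/sigma parameters are illustrative guesses, not estimates.
import random
import statistics

random.seed(0)
wp_loss = [random.lognormvariate(mu=1.0, sigma=0.8) for _ in range(100_000)]

print(statistics.median(wp_loss))                  # most fish: modest losses
print(sum(x > 40 for x in wp_loss) / len(wp_loss)) # tiny tail of severe cases
```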

I think the third option is the best one to try to test. Apps like SmartMood could track the effect on your mood. I suppose the problem with this, though, is that something like eating a marginal apple will probably have very small effects (if any), so practically you won't actually be able to measure it with this method. Things like meditation and a 10-minute walk would, I'd guess, be measurable.

I think the reason that summing the counterfactual impact of multiple people leads to weird results is not a problem with counterfactual impact but with how you are summing it. Adding each individual's counterfactual impact together sums the difference between world A, where both act, and worlds B and C, where each acts alone. Your calculation then treats this as the same as the difference between world A and world D, where nobody acts.

The true issue in maximising counterfactual impact seems to arise when actors act cooperatively but think of their actions as individuals. When acting cooperatively, you should compare your counterfactuals to world D; when acting individually, to world B or C.
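
A toy illustration of the summing problem, with hypothetical world values: suppose two actors must both act for a project worth 100 to happen at all.

```python
# World values are made up for illustration.
value = {"A": 100,  # both act
         "B": 0,    # only actor 1 acts
         "C": 0,    # only actor 2 acts
         "D": 0}    # nobody acts

impact_1 = value["A"] - value["C"]  # actor 1's counterfactual: 100
impact_2 = value["A"] - value["B"]  # actor 2's counterfactual: 100

# Summing individual counterfactuals compares A against B *and* C,
# not against D, so the pair's joint impact is double-counted:
print(impact_1 + impact_2)      # 200
print(value["A"] - value["D"])  # 100 -> the cooperative comparison
```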

The Shapley value is not immune to error either; I can see three ways it could lead to poor decision-making:

  1. For the vaccine reminder example, it seems stranger to me to attribute impact to people who would otherwise have had no impact. We then get the same double-counting problem, or in this case infinite dividing, which is worse as it can dissuade you from high-impact options. If I am not mistaken, in this case the Shapley value is divided between the NGO, the government, the doctor, the nurse, the people driving logistics, the person who built the roads, the person who trained the doctor, the person who made the phones, the person who set up the phone network, and the person who invented electricity. In that case, everyone is attributed a tiny fraction of the impact when only the vaccine reminder intentionally caused it. Depending on the scope of other actors we consider, this could massively reduce the apparent impact of the action.
  2. Example 6 reveals another flaw: attributing impact this way can lead you to make poor decisions. If you use the Shapley value when examining whether to leak information as the 10th person, you see that the action costs -1 million utils. If I were offered 500,000 utils to share, then under Shapley I should not do so, as 500,000 - 1M is negative. However, this thinking just prevents me from increasing overall utils by 500,000.
  3. In example 7, the counterfactual impact of the applicant who gets the job is not 0 but the impact of the job the lowest-impact person gets. Imagine each applicant could earn to give 2 utility and only has time for one job application. When considering counterfactual impact, the first applicant chooses to apply to the EA org and is attributed 100 utility (as is the EA org). The other applicants now enter the space and decide to earn to give, as this has a higher counterfactual impact. They decrease the first applicant's counterfactual utility to 2 but increase overall utility. If we use Shapley instead, then all applicants would apply for the EA org, as this gives them a value of 2.38 instead of 2 (a rough computational sketch follows this list).
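
In case it helps with checking these examples, here is a hedged sketch of the Shapley computation itself. The characteristic function below is a stand-in I made up for the job-applicant case (the org filling its role is worth 100, and each applicant not hired earns to give 2), so the exact per-player values will differ from the post's 2.38.

```python
# Generic Shapley value over all coalitions, plus a made-up
# characteristic function for the hiring example.
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Shapley value of each player under characteristic function v."""
    n = len(players)
    values = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = len(coalition)
                weight = factorial(s) * factorial(n - s - 1) / factorial(n)
                total += weight * (v(set(coalition) | {p}) - v(set(coalition)))
        values[p] = total
    return values

def v(coalition):
    """Org plus at least one applicant -> 100 for the hire, plus 2 in
    earning-to-give for every surplus applicant; otherwise everyone
    earns to give. (Hypothetical numbers.)"""
    applicants = len(coalition - {"org"})
    if "org" in coalition and applicants >= 1:
        return 100 + 2 * (applicants - 1)
    return 2 * applicants

players = ["org", "a1", "a2", "a3", "a4"]
# Under this stand-in v: org 78.4, each applicant 6.9; the values sum
# to v of the full group (106), as Shapley guarantees.
print(shapley(players, v))
```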

I may have misunderstood Shapley here, so feel free to correct me. Overall, I enjoyed the post and think it is well worth reading. Criticism of the underlying assumptions of many EAs' decision-making methods is very valuable.
