
GeorgeBridgwater

211 karma · Joined Oct 2018

Comments (16)

At Animal Ask, we did later hear some of that feedback ourselves, and one of our early projects failed for similar reasons. Our programs are very group-led, in that we select our research priorities based on groups looking to pursue new campaigns. This means the majority of our projects tend to focus on policy rather than corporate work, since more groups are considering new country-specific campaigns and want research to inform that decision.

In the original report from CE, they do account for the consolidation of corporate work behind a few asks. They expected the research on corporate work to be 'ongoing', 'deeper', and 'more focused'. So strategically, it would look more like research running throughout the previous corporate campaign to inform the next, with a low probability of updating any specific ask. The expectation is that it could be many years between the formation of new corporate asks.

In fact, this consolidation was highlighted in the incubation program as a reason success could have so much impact: with the large amount of resources the movement devotes to these consolidated corporate asks, ensuring they are optimised is essential.

As Ren outlined, we have a couple of recent, more detailed evaluations, and we have found that the main limitations on our impact are factors only a minority of advisors in the animal space highlighted. These are constraints from other organisational stakeholders: either upper management (when the campaigns team had updated on our findings but there was momentum behind another campaign) or funders (particularly individual or smaller donors, who are typically less research-motivated than OPP, EAAWF, ACE, etc.).

You can see this was the main concern for CE researchers in the original report. "Organizations in the animal space are increasingly aware of the importance of research, but often there are many factors to consider, including logistical ease, momentum, and donor interest. It is possible that this research would not be the determining factor in many cases". 

From the main body of the text: "Plant-based products represented 0·011 % of product unit sales in the pre-intervention period. This increased to 0·016 % during the intervention period and 0·012 % in the post-intervention period. Meat products represented 26·52 % of sales in the pre-intervention period, 26·51 % during the intervention period and 26·32 % in the post-intervention period. The remainder of sales were represented by non-meat products (73·47 % in pre-intervention and intervention periods, 73·67 % in the post-intervention period)."

One thing to flag before reading this as evidence against plant-based sales leading to lower meat consumption is how much harder it is to detect a significant effect on meat consumption. The background variation in meat sales is much higher, so to detect a significant effect from a campaign like Veganuary we would need a much larger total effect size. Even if the roughly 0.005 percentage point increase in plant-based sales came 100% from meat, I'd expect it still would not be significant.
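To make the magnitudes concrete, here is a quick back-of-the-envelope check using only the shares quoted above (my own sketch, not a calculation from the paper):

```python
# Shares of unit sales quoted from the paper (percent)
plant_pre, plant_during = 0.011, 0.016
meat_pre, meat_during, meat_post = 26.52, 26.51, 26.32

plant_shift = plant_during - plant_pre   # ~0.005 percentage points
meat_drift = meat_during - meat_post     # ~0.19 pp of ordinary between-period movement

# Even if the whole plant-based increase displaced meat one-for-one,
# the implied change in meat share is tiny next to the background drift.
print(f"implied meat change: {plant_shift:.3f} pp")
print(f"observed meat drift between periods: {meat_drift:.2f} pp")
print(f"background drift is ~{meat_drift / plant_shift:.0f}x larger")
```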

It seems like the 80/20 version of this would be appropriate for a lot of candidates. I assume that a lot of EA candidates have a higher bar for claims made in typical fundraising material, so they would benefit from delving deeper into the numbers. This depends on how much trust you already have in the organisations: if you think groups are already assessed with enough rigour by funders, e.g. they have a GiveWell recommendation, then the time cost of going through the numbers makes less sense. I think this would work best for meta-groups like the organisation I work for, Animal Ask, or others like Animal Advocacy Careers, Charity Entrepreneurship, 80,000 Hours, Rethink Priorities, the Global Priorities Institute, etc.

Hey Sofia, great idea. Groups have usually indicated they would spend <10% of the time we spend on research without our involvement, so this seems like a more viable idea than one might expect. There are some reasons it may not entirely cross-apply to the rest of our work, such as concerns about groups anchoring too much on their shallower research, which usually results in more optimistic assessments (the optimizer's curse), or possibly a selection effect, with the groups willing to do this being more likely to make good decisions anyway. We are also tracking the asks other similar organisations are using in the regions or areas we have worked in. This gives us some sense of this, but a more direct experiment of this kind could be valuable, particularly if we ran it with a few groups using different advocacy methods. We will look into the idea, as well as some of the other ways we could amend our pre/post surveys, before we partner with the next group!

Hello Joel,

I agree that, in hindsight, a summary of each indicator would probably have been useful to provide the reader with an overall assessment based on the information reviewed in the report.

$$\text{wellbeing}_{\text{measured}} = \text{accuracy} \times \text{importance} = (\text{reliability} \times \text{cardinality}) \times (\text{validity} \times \text{wellbeing account})$$

That model is roughly how I was thinking of this assessment, with validity and interpersonal comparisons capturing how much I would update on a perfectly accurate measure, and reliability giving some sense of how wide the confidence interval would be from a real-world measurement. The trade-offs between these, both across groups of indicators and for individual indicators, add some nuance: a single physiological measure is reliable but can vary due to numerous other factors, while a combination of measures lets us capture welfare benefits that health alone can't easily capture. For example, given no other context it may be better to minimise disease rates instead of blood glucose levels, but disease rates would be unable to assess the importance of different types of environmental enrichment.
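A minimal sketch of how I think of these terms combining, where the example indicators and the 0-1 scores are purely hypothetical, just to illustrate the trade-off between a reliable-but-narrow measure and a noisier-but-broader one:

```python
# Hypothetical scoring under:
# wellbeing measured = (reliability * cardinality) * (validity * wellbeing account)
def measurement_value(reliability, cardinality, validity, wellbeing_account):
    accuracy = reliability * cardinality        # how well we can measure the indicator
    importance = validity * wellbeing_account   # how much the indicator tracks wellbeing
    return accuracy * importance

indicators = {
    "blood glucose":   measurement_value(0.9, 0.7, 0.4, 0.3),  # reliable but narrow
    "operant testing": measurement_value(0.6, 0.5, 0.8, 0.8),  # noisier but broader
}
print(indicators)
```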

If more people comment to express interest in an overview of each section, I am happy to invest the time to go back through the report and add these sections in.

I think the ideal system would have a single measure that perfectly tracks what matters, no?

I definitely agree, which is partially why I put an example of self-reports in humans (which are, in my opinion, as close to ideal as we can get) alongside the measures we have available in other animals. This is what I currently view as the best available ('ideal') system given the weaker methods available.

My last question is: what are y'all's thoughts on making across species comparisons? This is the question that really interests me, and most of these indicators presented seem to be much, much more suitable to within species assessments of welfare. 

In this context, many of these indicators struggle with cross-species comparisons. Take cortisol, for example: different species have different baseline cortisol levels, making it difficult to compare levels or even percentage changes across species. We can gain some sense of the relative importance of different improvements or events for an individual from the degree of change in an indicator. An example of this within an operant test could be a human showing a mild preference for social contact over food, compared to a fox showing the opposite relationship. Yet this still only gives us information within the range of their utility functions and doesn't tell us how those ranges compare. It's a challenging question, and because of this we have mostly been deferring to Rethink Priorities' work on moral weights and Open Philanthropy Project's report on consciousness. At the moment, we approach this by using an assessment of an animal's quality of life to gauge how important an improvement is within that individual's utility function, and then adjusting this based on those considerations. However, I would be cautious about concluding that an ask is more promising if the deciding factors rest on across-species comparisons, given the range of plausible views on the topic.
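As a rough illustration of that adjustment step (the numbers here are placeholders I've made up, not Rethink Priorities' or Open Philanthropy's actual estimates):

```python
# Hypothetical sketch: value an improvement within the individual's own
# quality-of-life range, then scale by a cross-species moral weight.
def cross_species_value(within_species_improvement, moral_weight):
    # within_species_improvement: how far the change moves the individual
    # within its own utility range (e.g. 0 to 1)
    # moral_weight: how that species' range is weighted relative to a human's
    return within_species_improvement * moral_weight

# e.g. an improvement covering 20% of a fish's range, under a 0.05 moral weight
print(cross_species_value(0.2, 0.05))
```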

Thanks for the feedback

In these sorts of discussions, I don't think comparing ourselves to the rest of the population is a great guide. It should probably be our base rate, but many other factors can affect how income impacts our happiness.

If we look at the overall population, the income level required to get the maximum benefit from consumption is pretty high. However, there is some evidence that people who adopt voluntary simplicity can achieve greater life satisfaction on less income. Boujbel's (2012) explanation for this is 'that the control of one’s consumption desires is a significant mediator of the relationship between voluntary simplicity and life satisfaction among consumers who have limited financial resources'.

So the question then is: can you reduce your consumption desires if you start life with high consumption? I've seen a few people achieve this, and I think I've reduced my own consumption desires over time as well. But this is pretty weak evidence for making broader inferences, so I don't put too much weight on it.

I'd be much more interested in studying how income (specifically consumed rather than donated income) affects life satisfaction and value drift amongst EAs. I'd weight this much more heavily than general-population findings for my own decision making. If I had to bet, I would expect similar findings to Boujbel's.

Excellent point. I was considering this when writing the report, as it would be possible to use remote volunteers. This would make it a great way for EA university groups to volunteer their time and would encourage additional engagement from members. Beyond being a pure commitment device, volunteers will need to be able to answer some basic questions about the form of psychotherapy they are delivering. However, the skill requirement for providing support is still very low, and the training would be short. One of the groups in the incubation program is looking into this more, and I think it could be a really great model for giving more people an effective way to donate their time.

Something that could explain the public backlash is the large percentage of people who are so-called 'non-traders' or 'zero traders' when asked to do time trade-offs for weighting QALYs. About 57% of respondents don't trade off any length of life for quality increases. As you note, the public's revealed preferences show they will trade off quality for quantity, but when asked to actually think about it, a lot of people refuse to do so, which would explain why a large proportion of the public view an argument for improved quality of life versus reduced length of life poorly. The finding is the same when looking at QALY vs $ trade-offs, with a large proportion of people unwilling to trade off any amount of money against the value of a life.
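For anyone unfamiliar with the method, a brief sketch of how a standard time trade-off is scored (my own summary, not from the study): respondents state how many years $x$ in full health they would accept as equivalent to $t$ years in the health state, giving a QALY weight of

$$h = \frac{x}{t}, \qquad \text{so a zero trader who insists } x = t \text{ implies } h = 1 \text{ for every state.}$$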

I would disagree with two steps in your reasoning. The first is the relative importance of different animals, but Cameron_Meyer_Shorb's comment already covers this point, and your conclusion would probably not change if you valued animals more highly, making the combined effect of an American diet equal to one or maybe up to ten equivalent years of human life per year (~$430 of enjoyment).

Instead, I think your argument breaks down when accounting for moral uncertainty: if you are not 100% certain in consequentialist ethics, then almost any other moral system would hold you much more accountable for pain you cause than for pain you fail to prevent. This is particularly so if we increase the required estimate of the $ value of the enjoyment gained, even if those thresholds are met. This makes it a different case from other altruistic trade-offs you might make, in that you are not trading off a neutral action.

Another argument against this position is its effect on your moral attitudes, as Jeff Sebo argued in his talk at EA Global in 2019. You could dismiss this if you are certain it will not affect the relative value you place on other beings, and if you do not advertise your position so as not to affect others.

I've slowly been updating towards lower expected WP returns to improved DO based on conversations I have had with Fish Welfare Initiative. It seems likely that more fish are at the lower end of the welfare benefit from DO optimisation because of the natural incentives that already exist for farmers regarding DO: low DO levels increase mortality, and fluctuations in air pressure can cause DO to plummet, so farmers often leave an extra buffer. Therefore, any fish suffering -40 WP from DO levels alone would probably die, and I think a log-normal best captures this. Thanks for pointing this out, as I did not make it explicit in the report.
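As a sketch of that intuition (parameters here are made up, not figures from the report), a log-normal burden keeps most fish near zero welfare loss from DO while leaving almost no mass at the -40 WP extreme:

```python
import numpy as np

# Hypothetical log-normal distribution of welfare loss from dissolved oxygen (in WP)
rng = np.random.default_rng(0)
wp_burden = -rng.lognormal(mean=1.0, sigma=0.75, size=100_000)

print(f"median burden: {np.median(wp_burden):.1f} WP")
print(f"share worse than -40 WP: {float((wp_burden < -40).mean()):.4%}")
```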
