GeorgeBridgwater's Comments

Dying for a day at the beach

Something that could explain the public backlash is the large percentage of people who are so-called 'non-traders' or 'zero traders' when asked to do time trade-offs for weighting QALYs. About 57% of respondents don't trade off any length of life for quality increases. As you note, the public's revealed preferences show they will trade off quality for quantity, but when asked to actually think about this, a lot of people refuse to do so. This explains why a large proportion of the public would view an argument for improved quality of life vs reduced length of life poorly. The finding is the same when looking at QALY vs $ trade-offs, with a large proportion of people unwilling to trade off any amount of money against the value of a life.

Why I'm Not Vegan

I would disagree with two steps in your reasoning. The first is the relative importance of different animals, but Cameron_Meyer_Shorb's comment already covers this point. Your conclusion would probably not change even if you valued animals more highly, making the combined effect of an American diet equal to one or up to maybe ten equivalent years of human life per year ($430 of enjoyment).

Instead, I think your argument breaks down when accounting for moral uncertainty: if you are not 100% certain of consequentialist ethics, then almost any other moral system would hold you much more accountable for pain you cause than for pain you fail to prevent. This holds particularly if we raise the required estimate for the $ value of the enjoyment gained, even when those thresholds are met. This makes it a different case from other altruistic trade-offs you might make, in that you are not trading off a neutral action.

Another argument against this position is its effect on your moral attitudes, as Jeff Sebo argued in his talk at EA Global in 2019. You could dismiss this if you are certain it will not affect the relative value you place on other beings, and if you do not advertise your position, so as not to affect others.

Should We Try to Change Animal Welfare Laws in India or Taiwan?- Charity Entrepreneurship's Approach Report

I've slowly been updating towards lower expected WP returns to improved DO, based on conversations I have had with Fish Welfare Initiative. It seems likely that more fish are at the lower end of welfare benefit for DO optimisation because of the natural incentives that exist for farmers regarding DO. Low DO levels increase mortality, and fluctuations in air pressure can cause DO to plummet, so farmers often use an extra buffer. Therefore any fish suffering -40 WP from DO levels alone would probably die; I think a log-normal distribution best captures this. Thanks for pointing this out, as I did not make it explicit in the report.

Best units for comparing personal interventions?

I think the third option is the best to try to test. Apps like SmartMood could track the effect on your mood. I suppose the problem with this, though, is that something like eating a marginal apple will probably have very small effects (if any), so practically you won't actually be able to measure it with this method. Things like meditation and a 10-minute walk would, I'd guess, be measurable though.

Shapley values: Better than counterfactuals

I think the reason summing the counterfactual impact of multiple people leads to weird results is not a problem with counterfactual impact, but with how you are summing it. Adding together each individual's counterfactual impact means summing the difference between world A, where they both act, and worlds B and C, where each of them acts alone. In your calculation, you then assume this is the same as the difference between world A and world D, where nobody acts.

The true issue in maximising counterfactual impact seems to arise when actors act cooperatively but think of their actions as individuals. When acting cooperatively you should compare your counterfactuals to world D; when acting individually, to world B or C.

The Shapley value is not immune to error either. I can see three ways it could lead to poor decision-making:

  1. For the vaccine reminder example, it seems stranger to me to attribute impact to people who would otherwise have had no impact. We then get the same double-counting problem, or in this case infinite dividing, which is worse as it can dissuade you from high-impact options. If I am not mistaken, in this case the Shapley value is divided between the NGO, the government, the doctor, the nurse, the people driving logistics, the person who built the roads, the person who trained the doctor, the person who made the phones, the person who set up the phone network and the person who invented electricity. In that case everyone is attributed a tiny fraction of the impact, when only the vaccine reminder intentionally caused it. Depending on the scope of other actors we consider, this could massively reduce the attributed impact of the action.
  2. Example 6 reveals another flaw: attributing impact this way can lead you to make poor decisions. If you use the Shapley value when examining whether to leak information as the 10th person, you see that the action costs -1 million utils. If I were offered 500,000 utils to share, then under Shapley I should not do so, as 500,000 - 1M is negative. However, this thinking will just prevent me from increasing overall utils by 500,000.
  3. In example 7, the counterfactual impact of the applicant who gets the job is not 0 but the impact of the job the lowest-impact person takes instead. Imagine each applicant could earn to give 2 utility and only has time for one job application. When considering counterfactual impact, the first applicant chooses to apply to the EA org and is attributed 100 utility (as is the EA org). The other applicants now enter the space and decide to earn to give, as this has a higher counterfactual impact. They decrease the first applicant's counterfactual utility to 2 but increase overall utility. If we use Shapley values instead, then all applicants would apply for the EA org, as this gives them a value of 2.38 instead of 2.
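To make the comparison between the two attribution schemes concrete, here is a minimal Python sketch of the Shapley value computed as each actor's marginal contribution averaged over all orderings. The two-player characteristic function is a hypothetical stand-in (either actor alone secures the full outcome), not the numbers from the post's examples. Note that in this case each actor's counterfactual impact relative to the other acting is 0, so summing counterfactuals gives 0, while Shapley splits the single unit of value evenly.

```python
from itertools import permutations

def shapley_values(players, v):
    """Shapley value: each player's marginal contribution to the
    characteristic function v, averaged over every ordering of players."""
    totals = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            totals[p] += v(frozenset(coalition)) - before
    return {p: total / len(orderings) for p, total in totals.items()}

# Hypothetical two-player case: either actor alone achieves the full
# outcome (worth 1), mirroring the double-counting worry above.
v = lambda coalition: 1.0 if coalition else 0.0
print(shapley_values(["A", "B"], v))  # each gets 0.5, summing to 1
```

This brute-force version enumerates all n! orderings, so it is only practical for small sets of actors, but it shows the efficiency property: the attributed shares always sum to the value of the full coalition.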

I may have misunderstood Shapley values here, so feel free to correct me. Overall I enjoyed the post and think it is well worth reading. Criticism of the underlying assumptions of many EAs' decision-making methods is very valuable.

Reasons to eat meat

The openness of the EA movement to omnivores is a good point I had not considered before, although this could probably also be accomplished by not being in people's faces about it. I understand the reasoning that concludes that the strength of the obligations to give to charity and to be vegan are the same. However, I think there is one important distinction: we are causing harm. If we use the classic example of the child drowning in the pool, not giving to charity is analogous to allowing the child to drown, while eating meat is analogous to drowning the child (or at least a chicken every couple of days). I think we should examine our actions through many ethical theories due to moral uncertainty. If we do so, we can see that many ethical theories impose an extra obligation not to do harm. This means there is a distinction between not saving someone from drowning and drowning them. Thus I think extra moral importance should be placed on first not doing the world any harm.

Getting People Excited About More EA Careers: A New Community Building Challenge

I think another potential cause, which I have at least observed in myself, is risk aversion. EA organisations are widely thought of as good career paths, which makes them easier to justify to others but also to yourself. If I pursue more niche roles, I am less certain that they will be high impact, because I am relying on only my own judgment. This does justify some preference for EA organisations, but I agree there is probably an over-emphasis on them in the community.

From humans in Canada to battery caged chickens in the United States, which animals have the hardest lives: results

Some pretty unintuitive results here. I would not have assumed that a dairy cow would have a worse welfare score estimate than a beef cow. The method seems pretty logical, so I think it is more accurate than my intuition alone. My concern would still be with inter-species comparisons of utility, given their possibly varying levels of sentience. How is CE approaching this problem? With the usual neuron-count approach, or is there a better way of doing it? I suppose that may just be something you have to concede a large margin of error for when comparing between species.

Earning to Save (Give 1%, Save 10%)

I thought this blog and the surrounding community would be a useful resource for EAs. I have already shared it with a few people.

I definitely agree with Raemon that having your own resources allows you greater flexibility, but I would go one step further and aim to amass enough money that I do not need paid work. This allows complete flexibility with your time over your remaining lifespan: you can work on any project that seems valuable, or turn down a salary for jobs at EA orgs. I am aware that donations to EA orgs are time-sensitive; I think the estimated preference for immediate donations over donations one year later was somewhere between 10-12%. Depending on the length of time needed to attain 'early retirement' (ER), this could mean donating has a higher expected value than saving for flexibility. Overall, I think taking the possibility of ER into consideration is important, as it frames any decisions you make in the present. You can spend money now to free up time, but if you save that money it will eventually help to free up all your time.