
Jitse Goutbeek

57 karma

Posts: 1


Comments: 11

I donated mostly to the animal welfare fund of Doneer Effectief and some to their global health and wellbeing fund.

I think you are too optimistic about how much the average person in the global north cares about people in the global south (and possibly too pessimistic about how much people care about animals, though I'm less confident about that).

"Saving children from malaria, diarrheal disease, lead poisoning, or treating cataracts and obstetric fistula is hard to argue against without sounding like a bad person." 

The argument that you should help locally instead (even if the people making that argument don't do so) is easily made without sounding like a bad person. I live in the Netherlands, where spending on development cooperation or charity tends to be very unpopular; our current government has also pledged to cut much of that spending. Pushback on 'giving money to Africa' is something I certainly encounter. It might be less than the pushback on being vegan, but I'm not sure, and I'm also not sure by how much. This piece seems to assume a big difference in how socially acceptable, moderate, and politicized global health is compared to animal welfare, and I would like to see better data on whether that assumption is true.

I find the slippery slope argument pretty weak. Historically, expanding the moral circle (to women, slaves, people of color, LGBTQI+ people) has been quite important and still requires a lot of work. It seems much easier to not go far enough than to go too far, and extending concern to animals or to people in the global south are both such expansions. A core strength of EA is its attention to groups that fall outside the moral circle of many comparatively affluent people and institutions (the global poor, animals, and future generations). The slippery-slope argument is only meaningful if going down the slope would actually be bad, which is unclear to me.

It makes me quite sad that in practice EA has become so much about specific answers (work on AI risk, donate to this charity, become vegan) to the question of how we effectively make the world a better place, such that not agreeing with a specific answer can create so much friction. In my mind EA really is just about the question itself, and the world is complicated enough that we should be skeptical of any particular answer.

If we accidentally start selecting for people who intuitively agree with certain answers (which it sounds like we are doing: I know people with a deep desire to make a lot of counterfactual impact who were turned off because they 'disagreed' with some common EA belief, and it sounds like, had you read Superintelligence earlier, that would have been the case for you as well), that has a big negative effect on our epistemics and ultimately hurts our goal. We won't be able to check each other's biases and will have a less diverse set of viewpoints.

I think many of these lessons have more merit than you assume. To speak specifically about the 'earning to give' one: yes, EA has pointed out that you should not do harm in your job in order to give money away. However, I also think it is a bit psychologically naive to assume that what happened with FTX is the last time the advice to earn to give will lead to people doing harm to make money.

Trade-offs between ethical principles and monetary gain are not rare, and once we have established making as much money as possible (to give it away) as a goal in itself and a source of status, it can be hard to make those trade-offs the way you are supposed to. It is not easy to accept a setback in wealth, power, and (moral) status, so lying to yourself or others that what you are doing is ethical becomes easy. It is also generally risky for individuals to become incredibly rich or powerful, especially if that rests on the misguided belief that some group membership (EA) makes you inherently ethical and therefore more trustworthy, since power tends to corrupt.

At a minimum, I would like EA to talk more about how to jointly maximize the ethics of how you earn and how you spend your money, encouraging people to gain their wealth in ways that add value to the world.

I do think there is something here. While I do not agree with everything, taking away the lesson that we should think more about power and how to prevent it from being concentrated seems good. If you look at what EA has written, it is clear that what SBF did went against almost everything that has been said about how to do good (do no harm, the ends don't justify the means, act with integrity and in accordance with common-sense altruism). However, he was originally inspired by EA, and he might have started out following the principles only to abandon them when things went sour. It is common psychological knowledge that power tends to corrupt, so in a sense his not simply conceding power when things went wrong and chalking it up as 'a hit and a miss' might not be that unexpected. Instead of writing better advice on what to do when you have a lot of power (whether political or financial), we might need to focus more on making sure power is not concentrated in the first place.

Thanks for the responses. I was not aware of the article on harmful careers and think it is very good (I recognize that many of these issues are hard, so even though I might be a bit more skeptical about some of these high-paying jobs and examples, I could easily be wrong). Thanks for bringing it to my attention; it shows that some of my criticism was a bit misguided.

Agree with the postmortem process. There is a reasonable chance that SBF used EA-type thinking to justify his behaviour, and we certainly celebrated him as some kind of hero. I think it is important not just to condemn fraud but also to really try to figure out whether there is anything EA did, or advice it gives, that incentivizes this kind of behaviour.

Thanks for this post. I wonder whether the 'earning to give' advice should be updated to argue more clearly for pursuing (the most) ethical ways to earn, not just the most ethical ways to give. It seems to me that many of the fastest ways to make money can be unethical (which should bother us as EAs more than usual) or outright fraudulent, so arguing for making as much money as possible can incentivize behaviour like this (in that sense I do think EA bears some responsibility for Sam's behaviour). I love giving what you earn, just not trying to maximize what you earn unless that is done by doing good.

I have not thought this through very well, but I am wondering if we should rethink the 'earning to give' advice. 'Giving what you earn' is great, but the former really pushes people to make as much money as possible, which can incentivize useless, harmful, extractive, or even fraudulent or illegal behaviour. This has a very 'ends justify the means' logic, which does not have the best historical track record, and I already see it turning me off from EA a little, so I worry it will make it a lot harder to get many people into EA. At the very least we should talk more about how to make sure you do not do a lot of harm (a realistic concern when trying to do the most good and being wrong about something), especially when earning to give.

Great piece!

I do believe we need more epistemic pluralism within EA to be robustly effective, and these perspectives could really add to that. Specifically, making sure that effectiveness is judged according to the worldview and needs of the people affected (instead of the people trying to 'help' them) is of the utmost importance for being truly effective.

Besides that, your worldview clearly contains a lot of theoretical and philosophical background that not everybody will agree with, even upon long and critical reflection. There should also be options on 80,000 Hours (in addition to the current career paths, and more paths for people from non-theoretical backgrounds) that are more in line with different kinds of epistemologies, including feminist, indigenous, and decolonial ones.
