Progressives might be turned off by the phrasing of EA as "helping others." Here's my understanding of why. Speaking anecdotally from my ongoing experience as a college student in the US, mutual aid is getting tons of support among progressives these days. Mutual aid involves members of a community asking for assistance (often monetary) from their community, and the community helping out. This is viewed as a reciprocal relationship: different people will need help with different things at different times, so you help out when you can and ask for assistance when you need it. It is also reciprocal because benefiting the community inherently benefits oneself. This model implies a level field of power among everybody in the community. Unlike charity, mutual aid relies on social relations and being in community to fight institutional and societal structures of oppression (https://ssw.uga.edu/news/article/what-is-mutual-aid-by-joel-izlar/).
"[Mutual Aid Funds] aim to create permanent systems of support and self-determination, whereas charity creates a relationship of dependency that fails to solve more permanent structural problems. Through mutual aid networks, everyone in a community can contribute their strengths, even the most vulnerable. Charity maintains the same relationships of power, while mutual aid is a system of reciprocal support." (https://williamsrecord.com/376583/opinions/mutual-aid-solidarity-not-charity/).
Within this framework, the idea of "helping people" often relies on people with power aiding the helpless, but doing so in a way that reinforces the power difference. To help somebody is to imply that they are lesser and in need of help, rather than an equal community member who is particularly hurt by the system right now. This idea also reminds people of the White Man's Burden and other examples of people claiming to help others while actually making things worse.
I could ask my more progressive friends if they think it is good to help people, and they would probably say yes – or at least I could demonstrate that they agree with me given a few minutes of conversation – but that doesn't mean they wouldn't be peeved at hearing "Effective Altruism is about using evidence and careful reasoning to help others the best we can."
I would briefly note that mutual aid is not incompatible with EA to the extent that EA is a question; however, requiring that we be in community with people in order to help them means that we are neglecting the world's poorest people who do not have access to (for example) the communities in expensive private universities.
I think many progressives and others on the left value mutual aid because they see it as more sustainable and genuine, with fewer negative strings attached. I think they are generally fine with aid and helping others as long as they can be shown good evidence that:
1) the aid is not going to be used to prevent other positive changes (basically things like exchanging humanitarian aid for continued resource extraction from a region that's worth more than the total aid contributed, or pressuring/requiring a housing justice org to stop organizing tenants to stand up for their rights in exchange for more funding for their shelter initiatives);
2) the aid is delivered competently, so that it doesn't get stolen by governments, wasted, or taken by other corrupt actors; and
3) the aid respects local wisdom and empowers people to have more of a say over the decisions that most affect them. Another example would be conservation efforts that kick indigenous people off their land vs. ones that center their practical experience and respect their rights.
There's a big difference between donating to a food bank and creating the infrastructure for people to organize their own food bank and/or grow their own food of their choosing. The first is more narrowly focused on food security, whereas the latter fits with a broader food justice or food sovereignty approach. I think both are important. Many people believe the latter kind of empowerment initiatives are more sustainable in the long run and less dependent on shifts in funding, even if they're harder to set up initially. The reason being that they redistribute power, not just resources. To sum it up, something like: "Give a man a fish and he will eat for a day; teach a community to fish, and give them a place to do so, and they will eat for generations."
Thanks for your response! I don't think I disagree with anything you're saying, but I definitely think it's hard. That is, the burden of proof for 1, 2, and 3 is really high in progressive circles, because the starting assumption is that charity does not do 1, 2, or 3. To this end, simplified messages are easily misinterpreted. I really like this: "The reason being that they redistribute power, not just resources."
Yeah when I was reading it I was thinking "these are high bars to reach" but I think they cover all the concerns I've heard. Oh glad you liked it! I probably could have said that from the start, now that I think about it.
A Simpler Version of Pascal's Mugging
Background: I found Bostrom’s original piece (https://www.nickbostrom.com/papers/pascal.pdf) unnecessarily confusing, and numerous Fellows in the EA VP Intro Fellowship have also been confused by it. I think we can make these ideas more accessible.
I wrote this in about 30 minutes though, so it's probably not very good. I would greatly appreciate feedback on how to improve it. I also can't decide if it would be useful to have a "possible solutions" section at the end, because as far as I can tell, these solutions are all subject to complicated philosophical debate that goes over my head. So including it might just add confusion. It might be easiest to provide comments on the Google Doc itself (https://docs.google.com/document/d/1NLfDK7YqPGdYocxBsTX1QMldLNB4B-BvbT7sevPmzMk/edit)
Pascal is going about his day when he is approached by a mugger demanding Pascal’s wallet. Pascal refuses to give over his wallet, at which point the mugger offers the following deal: “Give me your wallet now and tomorrow I will give you twice as much money as is in the wallet now”
Pascal: “I have $100 in my wallet, but I don’t think it’s very likely you’re going to keep your promise”
Mugger: “What do you think is the probability that I keep my promise and give you the money?”
Pascal: “Hm, maybe 1 in a million because you might be some elaborate YouTube prankster”
Mugger: “Ok, then you give me your $100 now, and tomorrow I will give you $200 million”
Let’s do the math. We can calculate expected value by multiplying the value of an outcome by the probability of that outcome. The expected value of taking the deal, based on Pascal’s stated belief that the mugger will keep their word, is $200,000,000 * 1/1,000,000 = $200. The expected value of not taking the deal, meanwhile, is $100 * 1 (certainty) = $100. So if Pascal is an expected-value maximizer, he should take the deal.
Maybe at this point Pascal realizes that the chances of the mugger having 200 million dollars is extremely low. But this doesn’t change the conundrum, because the mugger will simply offer more money to account for the lower probability of them following through. For example, suppose Pascal’s doubt that the mugger even has the money lowers his estimate of the chance of the mugger following through to one in a trillion. Then the mugger offers 200 trillion dollars, and the expected value of taking the deal is still $200.
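The arithmetic above can be sketched in a few lines of Python. The amounts and probabilities are the ones from the dialogue; the `expected_value` helper is just an illustrative name, not anything from Bostrom's paper:

```python
def expected_value(payout, probability):
    """Expected value = size of the payout times the probability of receiving it."""
    return payout * probability

# First offer: $200 million with a 1-in-a-million chance of payout.
take_deal = expected_value(200_000_000, 1 / 1_000_000)  # ≈ $200

# Not taking the deal: keep the $100 with certainty.
keep_wallet = expected_value(100, 1.0)  # $100

# Second offer: Pascal lowers his credence to 1 in a trillion, so the
# mugger scales the payout up to $200 trillion. The expected value is
# still ≈ $200, so it still beats keeping the $100.
take_bigger_deal = expected_value(200_000_000_000_000, 1 / 1_000_000_000_000)
```

However small Pascal makes the probability, the mugger can always name a payout large enough to keep the expected value above $100 — which is the whole trick.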
The mugger is capitalizing on the fact that everything we know, we know with a probability less than one. We cannot be 100% certain that the mugger won’t follow through on their promise, even though we intuitively know they won’t. Extremely unlikely outcomes are still possible.
Pascal: “200 trillion dollars is too much money, in fact I don’t think I would benefit from having any more than 10 million dollars”
Pascal is drawing a distinction between expected value (in units of money) and expected utility (in units of happiness, satisfaction, and other things we find intrinsically valuable), but the mugger is unfazed.
Mugger: “Okay, but you do value happy days of life in such a way where more happy days is always better than fewer happy days. It turns out that I’m a wizard and I can grant you 200 trillion happy days of life in exchange for your wallet”
Pascal: “It seems extremely unlikely that you’re a wizard, but the amount I value 200 trillion happy days of life is so high that the expected utility is still positive, and greater than what I get from just keeping my $100”
Pascal hands his wallet to the mugger but doesn’t feel very good about doing so.
So what’s the moral of this story?
-Expected value is not a perfect system for making decisions, because we all know Pascal is getting duped.
-We should be curious and careful about how we deal with low-probability events with very high or very low expected value (like extinction risks). Relatedly, common sense seems to suggest that spending effort on sufficiently unlikely scenarios is irrational.
P.S. For what it's worth, I got an entirely different moral from this. Namely that 200 trillion days of happiness makes no sense to the human brain. I would not submit to a million days of torture followed by 200 trillion days of happiness, I'd rather stick to status quo. No probabilities or x-risks involved.
Random journaling and my predictions: Pre-Retrospective on the Campus Specialist role. Applications for the Campus Specialist role at CEA close in like 5 days. Joan Gass's talk at EAG about this was really good, and it has led to many awesome, talented people believing they should do Uni group community building full time. 20-50 people are going to apply for this role, of which at least 20 would do an awesome job.
Because the role is new, CEA is going to hire like 8-12 people for this role; these people are going to do great things for community building and likely have large impacts on the EA community in the next 10 years. Many of the other people who apply will feel extremely discouraged and led on. I'm not sure what they will do, but the ~10 (or more) who were great fits for the Campus Specialist program but weren't hired will do something much less impactful in the next 2 years.
I have no idea what the longer-term effects will be, but they're probably not good. Some of these people will likely leave the EA community temporarily because they are confused, discouraged, and don't think their skill set fits well with what employers in the EA community care about right now.
This is avoidable if CEA expands the number of people they hire and the system for organizing this role. I think the strongest argument against doing so is that the role is fairly experimental and we don't know how it will work out. I think that the upside of having more people in this role totally overshadows the downsides. The downsides seem to mainly be money (as long as you hire competent, agentic people). The role description suggests an impact of counterfactually moving ~10 people per year into high impact careers. I think even if the number were only 5, this role would be well worth it, and my guess is that the next 10 best applicants would still have such an effect (even at less prestigious universities).
Disclaimer: I have no insider knowledge. I am applying for the Campus Specialist role (and therefore have a personal preference for more people getting the job). I think there is about a 2/3 chance of most of the above problem occurring, and I'm least confident about paragraph 3 (what the people who don't get the role do instead).
The other people who were good fits but weren't hired might do something less impactful over the next two years, but I think it's still unclear whether their career will be less impactful in the longer term. There are lots of jobs with quality training and management that could teach you a lot in the two years you would've been a campus specialist. I would encourage everyone who's applying to be a campus specialist to also apply to some of those jobs, and think carefully about which to pick if offered both.
Some things you could try:
-Testing your fit for a policy/politics career
-Learning the skills you'd need to help run a new EA megacharity
-Working or volunteering as a community organizer
Yes, I agree that this is unclear. Depending on AI timelines, the long-term might not matter too much. To add to your list:
- What do you or others view as talent/skill gaps in the EA community, and how can you build those skills/talents in a job that you're more likely to get? (I'm thinking person/project management, good mentoring, and marketing skills, as a couple of examples)
Thanks for posting this, Aaron! I'm also applying to the role, and your thoughts are extremely well-put and on the mark.
20-50 people are going to apply for this role, of which at least 20 would do an awesome job.
I think we have two disagreements here.
More on the 2nd thought: I'd reckon (high uncertainty) that CEA may struggle to find more than ~12 people like this. This does not imply that there are not far more than 12 qualified people for the job. Primary reasons I think this: a) the short application timeline; b) my uncertainty about the degree of headhunting that's gone on; and c) the fact that a lot of the best community builders I know (this is a limited dataset, however) already have jobs lined up. All of this depends on who is graduating this year and who is applying, of course.
Hey Ed, thanks for your response. I have no disagreement on 1 because I have no clue what the upper end of people applying is – simply that it's much higher than the number who will be accepted and the number of people (I think) will do a good job.
2. I think we do disagree here. I think these qualities are relatively common in the CBers and group organizers I know (small sample). I agree that the short application timeline will decrease the number of great applicants applying; I'm also unsure about b; c seems like the biggest factor to me.
Probably the crux here is what proportion of applicants have the skills you mention, and my guess is ⅓ to ⅔, but this is based on the people I know which may be higher than in reality.
Awesome - thanks for the response. Yes, I agree with the crux (this also may come from different conceptions of the skills themselves). I'll message you!
Hey I applied too! Hopefully at least one of us gets it. I think they probably got more than 50 applications, so it almost starts to become a lottery at that point if they only have a few spots and everyone seems like they could do it well. Or maybe that's just easier for me to think haha.
I think conceptualizing job hunts like this for very competitive positions is often accurate and healthy fwiw