TL;DR: Individual cause area re-prioritization is hard and may be getting harder. It would be helpful to have a toolkit of techniques for making the process easier and better. I highly recommend most of you give $20 to a charity in every major EA cause area, and also do some other things.

 

It's hard, and possibly getting harder, for an individual EA to re-prioritize between causes, but there are a few simple, practical measures we can use to make cause selection go more smoothly and effectively. Here are a few reasons re-prioritization is hard that I've recently seen (or think I've seen) a lot of evidence for, in myself and in other EAs.

1) Sticking with one cause feels good and familiar, and other, less familiar cause areas don't feel as good (maybe the mere-exposure effect has gotten to you, or maybe you were always more comfortable with one cause for separate reasons).

2) You identify with a cause area or charity, and might lose that emotional connection if you donated* elsewhere (plus loss aversion twice over: once for losing the particular connection, and again for losing some confidence that you can or should become attached to a given cause or charity).  This is one I struggle with a lot.  

3) You fear you would lose status if you changed your mind, or that it would be socially difficult or costly to do so because of personal or professional relationships.

4) You "aren't the kind of person" who donates to a certain cause area (I heard slight variations on this 5-ish times at EAG, e.g. "I'm not a radical EA that gives to fringe causes").  You don't (just) identify with cause X, you identify with not-cause Y and Z. This makes me sad.

5) You thought hard about this a while back, and have since cached the idea that you've done the mental work of cause selection satisfactorily.  You may not be aware that there were holes in your original reasoning, or that new evidence has come to light that affects your earlier conclusions.

I expect all of these to grow more powerful over time. As our donation histories lengthen, we have more opportunities to identify more strongly with a certain charity or cause. More habituation takes place. Our community becomes more entrenched, and so we can expect more and stronger interpersonal relationships that make radical changes potentially costly. And reasoning that was originally sound is more likely to have become outdated if it hasn't been revisited.

It's well worth small amounts of effort to fight identity ossification of this sort early and often, if the efforts are effective.  Identities are powerful, and we should actively manage them.  I fear these processes will decrease cause selection quality. On the other hand, longer exposure to EA means greater exposure to information about other cause areas. Hopefully the second force is stronger.  

Regardless, it would be helpful to have a toolkit of measures to push back against possible trends 1-5 without hurting the positive aspects of those trends, like stronger communities and more comfort with the process of giving.

What can we do about this? I'm not sure, but here are a few things I've been experimenting with.

Most simply:

1) Give a small donation ($20) to a charity in each major cause area, especially the ones you've never donated to before.  This will help prevent strong identification as a not-donor to cause area X or as only a donor to cause area Y.  It may also decrease the perceived foreignness of cause areas other than your current favorite.

But also:

2) Try Ideological Turing Tests to check how well you understand the arguments for other courses of action.

3) If you feel comfortable, make a habit of asking other EAs to explain how they picked their cause area, and invite them to try to convince you to change your mind, so that they don't have to worry about coming across as inappropriately aggressive towards people who don't want to be challenged.

4) Then document and share these arguments so lots of people don't unwittingly reproduce the same work.

5) De-stigmatize talking about emotional attachment to causes.  My strong impression is that many EAs have these attachments, but feel they have no place in the ideal EA conversation about cause selection, so they repress or subvert those feelings out of fear of looking irrational, or having their opinions be taken less seriously.

 

What else would people recommend to make cause re-prioritization easier? How can we manage our personal relationships and local community dynamics so that they are strong and meaningful, but interfere as little as possible with our donating decisions?

 

*throughout, I use "donating" as a shorthand for any action, including volunteering, professional work, journalistic coverage, political advocacy, and more, that's in support of a given cause area. 

Comments (16)



Perhaps the most important way to improve cause re-prioritization is for more people to write publicly about why they donate to the causes they do. I don't see many people doing this, and the ones who do don't usually offer much detail. I'd like it to become normal and even expected for EAs to give public justifications to wherever they donate.

One possible concern with this is that it may pressure people to donate to popular causes. Hopefully people who write up their reasoning like this can get sufficiently rational feedback that they won't feel this pressure. I expect that even with this pressure, people will end up making better decisions than if they didn't talk publicly about their donation choices.

Of course, I don't really know if this would help. What we need to do is run an RCT where people try different cause re-prioritization strategies and see which ones work best.

The risk of having people write about their donation choices online is that discussion devolves into a flame war. Any such discussions should be conducted with the highest levels of collegiality, to prevent slipping into the "my cause is better than your cause" degenerate case.

I've found the advice of this post useful.

In particular, the suggestion to "1) Give a small donation ($20) to a charity in each major cause area, especially the ones you've never donated to before."

I just acted on this advice by giving $20 to a group of Democratic Senate campaigns that David Shor considers "the races where I think the marginal *small dollar donation* will go the furthest." This was my first political donation (besides a $1 donation in February to Yang's campaign that allegedly helped bring him to the debates).

Previously, I followed the advice by donating small dollar amounts to a couple of other organizations working to help animals (the Good Food Institute and the Wild Animal Initiative). While these acts haven't (yet) caused me to change which cause area I make the bulk of my donations to, I've noticed that they seem to have had some effect on me psychologically, making me more open to seriously considering making substantial donations to these organizations and cause areas.

Nice ideas! I like #5, since donating for "fuzzies" reasons is often looked down upon. I discussed my grappling with fuzzies vs. utilons here.

Relevant to the issue of identity: I think it's telling that the empathetic advice here is described as "try ideological Turing tests" rather than "try to argue the other side convincingly," which is a much older principle and much more generally understandable.

Should making EA legible to the majority of the world's citizens, who are not and will never be computer scientists, be a goal? If so, we need to work on the language we use to discuss these issues.

Thank you, great tips! In response, I just sent a donation of $20 to MIRI. :‑)

It worked! I’m excited about MIRI now!

Another, selfish reason to donate to a broad variety of causes is to claim moral authority over people who don't donate at all. Say you donate mainly to a "fringe" cause area, and some non-EA is hard on you for this. Then you can respond by saying "well I also donate small amounts to a variety of other causes too, including some you would agree with, to avoid getting attached to a particular cause; what are you doing?" At which point the person you're talking to gets embarrassed if they don't donate at all.

3) If you feel comfortable, make a habit of asking other EAs to explain how they picked their cause area, and invite them to try to convince you to change your mind, so that they don't have to worry about coming across as inappropriately aggressive towards people who don't want to be challenged.

I think I'm a bit averse to pushing this responsibility onto others ("convince me!") rather than putting in the effort myself to research/understand. But there is a lot of material out there related to each cause.

Rather than many-to-many requests (each person asking multiple others for their reasons), or as an adjunct to doing that, is there a consolidated guide to the key reasons for or against each of the main areas? I haven't come across one, but may have missed it. I've seen intros/summaries from different EA orgs, but they're generally not steelmanned / responding to the objections of the other cause-foci.

If this doesn't exist, it might be a good project in terms of time-saving, mind-change facilitation, and feeling like one community whose members understand one another (a sort of "Ideological Turing Test Prep").

I don't like proposing projects without volunteering to help them, which is probably a fault of mine, but that said I'd be willing to help with this if people think it's a good idea. Asking individuals for their reasons, as Claire proposed, and then collecting/aggregating/summarizing responses might be a good place to start with such a thing, since few of us (none of us?) are expert in the various areas.

Issa Rice has the Cause Prioritization Wiki, but most of the pages are pretty empty.

I imagine it could be highly valuable for many EAs to write up their reasoning about why they donate to the cause they do, and then publish these at some central location. I'm considering setting up a website to do this but I don't know if there would be sufficient interest.

I also think it would be great to have a centralized location, probably with both some general arguments about different cause areas and charities, and write-ups of individual decisions about donation and other forms of support. Unfortunately I just don't have time, but if you guys wanted to work on this some people in the Cause Prioritization Facebook group might help (perhaps Issa).

The biggest issue vis-à-vis changing one's mind about cause areas seems to me likely to be the routine epistemic difficulty of properly (deeply, charitably, and effectively) considering opposing points of view (motivated reasoning, confirmation bias, etc.). (This is closest to, but not identical with, point 5.)

So conducting Ideological Turing Tests on yourself seems to be the best solution to me. Suggestion 3 seems a good idea too: you'd really need to work with someone holding the other point of view for the ITT to be fruitful, I'm guessing. Suggestion 1 seems much less useful. For me, giving a nominal sum to causes B and C, where I support cause A, would be about as fruitful as sending a nominal donation to a couple of conservative organisations (which I expect would not shift my views one whit).

The only thing I worry about with the ITT and deep investigation of alternative causes' arguments is that, while these policies are very epistemically and morally virtuous, they seem extremely demanding. I don't go in for moral over-demandingness arguments, but I worry that becoming decently conversant with the arguments for a cause you don't yet agree with could require enormous amounts of time, and that with anything less than that time and sincere effort invested, a person might do better to just defer to some authority. Alternatively, if you are really worried by apparent peer disagreement or lack of information, you could just split your donations between plausible-looking causes and exhort others to do the same. (This might be especially worthwhile if you think that people are unlikely to switch wholesale to the most rational cause.)

I really like the approach behind this post - too often EAs are hesitant to think about ways we can make use of our own psychology for pursuing altruism. It appears to some EAs that tricks like donating to a cause area (to avoid identifying too strongly in opposition to it) should not be part of a rationalist's toolkit. But accepting that we are all biased, and doing what we can to overcome those biases in favor of what we would rationally, reflectively endorse as the unbiased viewpoint, can only help us increase our effectiveness in pursuing our altruistic goals.

I like your idea of donating $20 to a number of different causes, and I think I'll follow through on that. For example, I'm not opposed to the idea of working on existential risk, but have spent relatively little time looking into x-risk charities and donating to them ($0 so far) compared to my primary areas of focus, poverty and animal rights.

In response to this article, I followed the advice in 1) and thought about where I'd donate in the animal suffering cause area, ending up donating $20 to New Harvest.


Perhaps the most important factor for helping me change my mind (about many things): exposure to conversation with someone much brighter than me who understands everything about my position, and still has different conclusions than me. It decreases overconfidence.

I recommend exposure to at least one such person every week.
