
Two weeks ago I wrote a post discussing my intention to switch from donating to the Long-Term Future Fund to neartermist causes. The comments and messages I got in response were really interesting, fell fairly evenly on both sides of the fence, and gave me some really useful perspectives on the issues raised. Much like before, an afternoon spent in a café by Stavanger harbour gives me the excuse to collate the points people have suggested and lay out the updates I've made as a result.

Several people drew out what ought to be two separate topics in my initial post. Firstly, how should a longtermist view neartermist donations? Secondly, and linked to the first, how much should one care about the optics of one's EA actions, and how can those optics be optimised?

Donations

It seems, from a handful of forum comments, far more common than I had anticipated for longtermists to donate to near-term causes. The prevailing theme in the reasons given seems to be "hedging one's bets". This could be motivated by a desire to avoid putting all of one's vegan egg alternatives in one basket, much as an investor would diversify a portfolio. Furthermore, as Jack Lewars points out, the uncertainties around a donation to longtermist charities actually generating good are much greater than for counterfactually tested global health initiatives.

My feeling on this is that portfolio diversification is a potentially prudent response to uncertainty about the best donation opportunity, but avoiding low-likelihood, massive-upside options where they would otherwise win on one's EV estimates is a step away from Rationality in its strictest sense. That being said, I am entirely sympathetic towards, and even categorise myself alongside, a person facing highly uncertain impact through a longtermist career who wants the kind of "slam-dunk" feeling of saving a life for a few grand that only global health interventions can offer. EA can be hard, and I imagine there are a lot of people out there who would find it an easier pill to swallow if they could know for certain that they actually helped someone.

Nate Soares has a quote that stuck with me: "But if the chance that one person can save the world is one in a million, then there had better be a million people trying." I remember this quote both because I passionately agree with it, and for the realisation that it requires selling 999,999 people a dream that demands their life's work without them ever accomplishing the goal. Those individuals care about their lives, and so that is a high price to ask of them. What I've just said is admittedly a dilution of what EA tries to do, of what sets this community apart. In the scheme of things, if someone's main altruistic compromise is donating 10% of their earnings to the sure-fire win of AMF or GiveDirectly instead of the LTFF, to soothe the angst of a longtermist career, rather than pursuing pure maximisation, that's probably forgivable and certainly not a failure. I'd still be glad to have a lot more of those people in the world.

Optics

Comments on my first post ranged in opinion from "the signalling value is often greater than the direct value of the donations" to "I wouldn't worry too much about optics unless you're doing a lot of community building and outreach to your friends." Even allowing for a small data set, that is a striking polarity of viewpoints. Before writing all this, I agreed more with the first point, but reading Chris Leong's comment, from which I take the second quote, got me thinking. In the 4+ years since I joined the EA community, I have surely introduced the idea to many dozens of people, many of whom are educated, skilled, and intelligent enough to really put a dent in the problems we spend our time discussing. How many of them became EAs who would not have otherwise? So far as I can recall: one. How did I persuade them? An evening spent in extremely deep conversation and binge-watching Rational Animations, following years of close friendship and of watching my entire EA/Rationalist journey unfold. I don't think my community-building score would be any higher had I spent this time donating to AMF instead of the LTFF. It's probably just hard to get people into EA unless they have an inherent affinity for it, in which case we're best off presenting our beliefs, thoughts, and interests with authenticity, as multiple commenters and friends suggest.

This brings me to a reframing of my optics concerns. I'm not worried that donating to longtermist charities is damaging my ability to conduct outreach. I am, as I think many others are, worried that a tabloid newspaper could make a concerted strike against the reputation of EA. I sincerely fear the headline "This cult of academic elites thinks it's wrong that you donate to Cancer Research", and the concerns I described in the previous post would be fuel to that fire. I lack both omniscience over our community and the understanding of journalism needed to foresee how this would come about, but I suspect it's not going to be a result of me talking to my friends about my donations. It would be interesting to hear others' thoughts on this, with the caveat that publicly discussing it may inadvertently attract the attention I'm trying to avoid. In any case, I should probably stop worrying so much about the signalling value of where that tenth of my salary goes and knuckle down on the direct impact conundrum.
