
As EA-aligned foundations and projects direct more money, EA ideas continue to gain traction, What We Owe the Future comes out, and so on, there’s naturally going to be more attention on EA soon. That attention will likely range from enthusiasm to thoughtful criticism to . . . less thoughtful criticism. If you’ve been involved in EA for a while, this transition might be a bit disorienting.

I’m writing this post on behalf of some staff (at CEA, Forethought Foundation, and Open Philanthropy) who are working on communications for EA as a movement. We’re trying to prepare for increased attention, plan the best ways to communicate complex ideas succinctly, and increase the chance that EA will be portrayed accurately and thoughtfully.

Doing more proactive communications work

For the last several years, most EA organizations have pursued little or no media coverage. CEA’s advice on talking to journalists was (and is) mostly cautionary. I think there have been good reasons for that — engaging with media is only worth doing if you’re going to do it well, and a lot of EA projects don’t have it as their top priority.

While this may have made sense for each individual organization, as a result, we’ve missed out on opportunities to convey the good ideas and work coming from EA. There’s also confusion out there about what EA is even about. Ideally more people would have a clearer sense of what EA is, so they can agree or disagree with an accurate representation of EA and not with a misconception.

Several EA organizations are working together with a communications advising firm to answer questions like

  • Who are key audiences we especially want to reach? 
  • How do these audiences currently see EA?
  • What are the best ways to reach these audiences?
  • What EA ideas are especially important to convey?

Connecting EA projects with journalists

I’ve been writing to EA organizations and projects to see if they have recent success stories that journalists might be interested in covering. If I’ve missed your project and you’d like some help connecting with journalists who might want to cover your work, please do get in touch! media@centreforeffectivealtruism.org

As before, if a journalist reaches out to you, we suggest you look through our guide on responding to journalists.

FAQs

If I see a new media piece about EA, who should I flag it to?

Feel free to flag things to media@centreforeffectivealtruism.org and we’ll talk with our advisors about whether some kind of response makes sense. We’ll likely have heard about pieces in large-scale publications, but might miss coverage of EA in publications in languages other than English, or in publications with a targeted readership that might be of interest (e.g. university student newspapers, professional sub-communities).

Why don’t CEA or other EA orgs push back more publicly on misconceptions?

The advice we’ve gotten so far is not to repeat misconceptions. You’re unlikely to see an EA organization say “No, X isn’t true; actually Y is true.” Instead, they’re more likely to say “Here’s why Y is important.”

Some criticisms will be unfair or uninformed. Typically we expect to respond by writing pieces explaining our own views rather than responding directly to critical pieces.

Should I reach out to celebrities, HNWIs, etc about getting involved with EA?

Very unlikely. There are existing projects doing this, and it’s better for outreach to happen in a coordinated way than for many people to contact them independently. Simran Dhaliwal of Longview Philanthropy writes: “Please do reach out if you have a connection to an UHNW individual/family; we'd be more than happy to invest the many hours it takes to build a relationship and discuss EA concepts in-depth / assist with coordination.” simran@longview.org

How should I respond to takes on EA that I disagree with?

Maybe not at all — it may not be worth fanning the flames. 

If you do respond, it helps to link to a source for the counter-point you want to make. That way, curious people who see your interaction can follow the source to learn more.

Comments



I'm a journalist, and would second this as sound advice, especially the 'guide to responding to journalists'. It explains the pressures and incentives/deterrents we have to work with, without demonising the profession... which I was glad to see! 

A couple of things I would emphasise (in the spirit of mutual understanding!): 

It can help to look beyond the individual journalist to consider the audience we write for, and what our editors' demands might be higher up in the hierarchy. I know many good, thoughtful journalists who work for publications (eg politically partisan newspapers) where they have to present stories the way they do, because that's what their audience/editors demand... There's often so much about the article they, as the reporter, don't control after they file. (Early career journalists in particular have to make these trade-offs, which is worth bearing in mind.) 

Often I would suggest it could be helpful to think of yourself as a guide not a gatekeeper. An obvious point... but this space here [waves arms] is all available to journalists, along with much else in the EA world, via podcasts, public google docs etc. There are vast swathes of material that are already public and all quotable. The community is an unusually online one compared with other fields I report on. It's great! But the problem for a journalist is therefore not really that information is scarce and hard to come by, which you as a source could gatekeep - on the contrary, it's that so much is all already there, and there's far too much of it to digest before a deadline. It means a bad journalist could cherrypick; a hurried journalist could get only a fleeting impression. Generally, what we journalists need is a guide through it all - context, history, depth - so we can form a picture that is fair and accurate. 

With that in mind, I would always advocate for speaking with us face-to-face or via video, rather than emailing...it's just a more human way of connecting, more efficient and responsive, and frankly makes it a little harder for a journalist to ignore your guidance if you have given them your time and shown you are a real person rather than a quote-machine! If a journalist asks for answers to their questions in email, for me that's a sign that they don't have that much of an interest in engaging and learning. I admit I've done it myself sometimes when pressed for time, but it's not good practice. Also it's not serving the needs of audiences/publications because, unless a source is an unusually conversational writer, it leads to flat quotes that are less natural and engaging in tone. 

Last, just to return to the OP, I agree that far more attention is coming. In the past I have observed a sentiment that seems to assume that the EA world can stay under the radar by not engaging. There's perhaps been something to that - insofar it avoids actively advertising – but I'd also say it's as much that relatively few journalists so far have had reason and motivation to look. I'd suggest that will change for a few reasons - there are many great positive and important stories to tell that interest wider audiences, as I've discovered myself, but increasingly also because journalists have a civic duty to write about concentrations of power and money. 

One thing that may backfire with the slow rollout of talking to journalists is that people who mean to write about EA in bad faith will be the ones at the top of the search results. If you search something like “ea longtermism”, you might find bad faith articles many of us are familiar with. I’m concerned we are setting ourselves up to give people unaware of EA a very bad faith introduction.

Note: when I say “bad faith“ here, it may just be a matter of semantics; some people may be reading the term differently than I intend, and I might not have the vocabulary to articulate exactly what I mean by “bad faith.” I actually agree with pretty much everything David has said in response to this comment.

In my view, Phil Torres' stuff, whilst not entirely fair, and quite nasty rhetorically, is far from the worst this could get. He actually is familiar with what some people within EA think in detail, reports that information fairly accurately, even if he misleads by omission somewhat*, and makes criticisms of controversial philosophical assumptions of some leading EAs that have some genuine bite, and might be endorsed by many moral philosophers. His stuff actually falls into the dangerous sweet spot where legitimate ideas, like 'is adding happy people actually good anyway', get associated with less fair criticism - "Nick Beckstead did white supremacy when he briefly talked about different flow-through effects of saving lives in different places" - potentially biasing us against the legit stuff in a dangerous way.

But there could - again, in my view - easily be a wave of criticism coming from people who share Torres' political viewpoint and tendency towards heated rhetoric, but who, unlike him, haven't really taken the time to understand EA/longtermist/AI safety ideas in the first place. I've already seen one decently well-known anti-"tech" figure on twitter re-tweet a tweet that in its entirety consisted of "long-termism is eugenics!". People should prepare emotionally (I have already mildly lost my temper on twitter in a way I shouldn't have, but at least I'm not anyone important!) for keeping their cool in the face of criticism that is:
-Poorly argued 
-Very rhetorically forceful
-Based on straightforward misunderstandings
-Involves infuriatingly confident statements of highly contestable philosophical and empirical assumptions. 
-Deploys guilt-by-association tactics of an obviously unreasonable sort**: e.g. so-and-so once attended a conference with Peter Thiel, therefore they share [authoritarian view] with Thiel.
-Attacks motives not just ideas
-Gendered in a way that will play directly to the personal insecurities of some male EAs.

Alas, stuff can be all those things and also identify some genuine errors we're making. It's important we remain open to that, and also don't get too polarized politically by this kind of stuff ourselves. 

* (i.e. he leaves out reasons to be longtermist that don't depend on total utilitarianism or adding happy people being good, doesn't discuss why you might reject person-affecting population ethics etc.)

** I say "of an unreasonable sort" because in principle people's associations can be legitimately criticized if they have bad effects, just like anything else. 

Great points, here’s my impression: 

Meta-point: I am not suggesting we do anything about this or that we start insulting people and losing our tempers (my comment is not intended to be prescriptive). That would be bad, and it is not the culture I want within EA. I do think it is, in general, the right call to avoid fanning the flames. However, my first comment is meant to point at something that is already happening: many people uninformed about EA are not being introduced to it in a fair and balanced way, and first impressions matter. And lastly, I did not mean to imply that Torres’ stuff was the worst we can expect. I am still reading Torres’ stuff with an open mind to take away the good criticism (while keeping the entire context in consideration).

Regarding the articles: Their way of writing is to tell the general story in a way that makes it obvious they know a lot about EA and had been involved in the past, but then they bend the truth as much as possible so that the reader leaves with a misrepresentation of EA and of what EAs really believe and do. Since this is a pattern in their writings, it’s hard not to suspect they do this because it gives them plausible deniability: what they're saying is often not “wrong”, but it is bent to the point that the reader ends up inferring things that are false.

To me, in the case of their latest article, you could leave with the impression that Bostrom and MacAskill (as well as the entirety of EA) both think that the whole world should stop spending any money on philanthropy that helps anyone in the present (and if you do, only to help those who are privileged). The uninformed reader can leave with the impression that EA doesn’t actually care about human lives. The way they write gives them credibility with the uninformed because it’s not just an all-out attack where it is obvious to the reader what their intentions are.

Whatever you want to call it, this does not seem good faith to me. I welcome criticism of EA and longtermism, but this is not criticism.

*This is a response to both of your comments.

Thanks for this thoughtful challenge, and in particular for flagging what future provocations could look like, so we can prepare ourselves and let our more reflective selves come to the fore rather than our reactive child selves.

 

In fact, I think I'll reflect on this list for a long time to ensure I continue not to respond on Twitter!

Definitely the case in Germany. The top 3 Google results for "longtermism" are all very negative posts, 2 of them by some of Germany's biggest news magazines (ZEIT and Spiegel). As far as I know, there is no positive content on longtermism in German.

I agree! This is part of what we’re trying to work on, by making good-quality pieces in favor of EA and longtermism easier to find.

Also, I doubt Torres is writing in bad faith exactly. "Bad faith" to me has connotations of 'is saying stuff they know to be untrue', whereas with Torres I'm sure he believes what he's saying; he's just angry about it, and anger biases.

Agreed. 

My model is, he has a number of frustrations with EA. That on its own isn't a big deal. There are plenty of valid, invalid, and arguable gripes with various aspects of EA. 

But he also has a major bucket error where the concept of "far-right" is applied to a much bigger Category of bad stuff. Since some aspects of EA & longtermism seem to be X to him, and X goes in the Category, and stuff in the Category is far-right, EA must have far-right aspects. To inform people of the problem, he writes articles claiming they're far-right. 

If EAs say his claims are factually false, he thinks the respondents are fooling themselves. After all, they're ignoring his wider point that EA has stuff from the Category, in favor of the nitpicky technicalities of his examples. He may even think they're trying to motte & bailey people into thinking EA & longtermism can't possibly have X. To me, it sounds like his narrative is now that he's waging a PR battle against Bad Guys.

I'm not sure what the Category is, though. 

At first I thought it was an entirely emotional thing: stuff that makes him sufficiently angry, or a certain flavor of angry, or anything where he can't verbalize why it makes him angry, is assumed to be far-right. But I don't think that fits his actions. I don't expect many people can decide "this makes me mad, so it's full of white supremacy and other ills", run a years-long vendetta on that basis, and still have a nuanced conversation about which parts aren't bad.

Now I think X has a "shape"- with time & motivation, in a safe environment, Torres could give a consistent definition of what X is and isn't. And with more of those, he could explain what it is & why he hates it without any references to far-right stuff. Maybe he could even do an ELI5 of why X goes in the same Category as far right stuff in the first place. But not much chance of this actually happening, since it requires him being vulnerable with a mistrusted representative of the Bad Guys.

Yes, I’m always unsure of what “bad faith” really means. I often see it cited as a main reason to engage or not engage with an argument. But I don’t know why it should matter to me what a writer or journalist intends deep down. I would hope that “good faith” doesn’t just mean being aligned on overall goals already.

To be more specific, I keep seeing references to hidden context behind Phil Torres’s pieces. To someone who doesn’t have the time to read through many cryptic old threads, this just makes me skeptical that the “bad faith” charge is useful in deciding whether to discount an argument.

Have you ever had conversations where someone has misrepresented everything you've said or where they kept implying that you were a bad person every time you disagreed with them?

Equally, there's an argument for thanking and replying to critical pieces against the EA community which honestly engage with the subject matter. This post (now old) making criticisms of long-termism is a good example: https://medium.com/curious/against-strong-longtermism-a-response-to-greaves-and-macaskill-cb4bb9681982

I'm sure / really hope Will's new book does engage with the points made here. And if so, it provides the rebuttal to those who come across hit-pieces and take them at face value, or those who promulgate hit-pieces because of their own ideological drives.

Yup, I saw somebody on Medium speaking favorably about a Phil Torres piece as a footnote of his article on Ukraine (I responded here). And earlier I responded to Alice Crary's piece. Right now the anti-EAs are often self-styled intellectual elites, but a chorus of bad faith could go mainstream at some point. (And then I hope you guys will see why I'm proposing an evidence clearinghouse, to help build a new and more efficient culture of good epistemics and better information... whether or not you think my idea would work as intended.)

I just posted a comment giving a couple of real-life anecdotes showing this effect.

Thanks Julia, I think this is really well put. 

Relatedly, trauma circles.

I like trauma circles as a good model for dealing with crises. When someone is in distress you dump (complain, vent, etc) away from them and comfort (listen, graciously help, etc) towards them.  In short, you use your and everyone else's closeness to the crisis to inform your response.

This is also the model I use if someone is angry at me on the internet. I do not want to dump towards them (the centre of the circle) but instead vent towards my friends. If I say anything to them I am first gracious and kind. 

This next point is tricky, but I think worth making.  For me public community spaces are "sideways" of me as regards this model - useful to dump into if necessary but not ideal. When someone is rudely frustrated with EAs on twitter, I generally avoid quote tweeting into the forum or Twitter Community (a new feature for communities on twitter)  because then everyone feels attacked, not just me.  This isn't an iron law -  sometimes the rude criticism is still really instructive, but this is my general heuristic.

My heuristics then:
- if the upset person is a friend comfort them
- if the upset person is not a friend comfort them or say nothing unless I feel  very very competent (I very rarely feel competent enough to challenge directly)
- if the upset person has written something mean, probably don't amplify it
- if I need  to vent, vent away from the upset person
- venting in private is usually better than venting in public
- if I need to vent in public, it's better to talk about my feelings rather than give a play by play of the crisis

I may edit this post based on comments. 

aog

This is a great point. As one example of growing mainstream coverage, here’s a POLITICO Magazine piece on Carrick Flynn’s Congressional campaign. It gives a detailed explanation of effective altruism and longtermism, and seems like a great introduction to the movement for somebody new. The author sounds like he might have collaborated with CEA, but if not, maybe someone should reach out?

https://www.politico.com/news/magazine/2022/05/12/carrick-flynn-save-world-congress-00031959

Thanks, Julia! The  "Advice for responding to journalists" doc you link is really excellent. Everyone should read this before speaking to the media. https://docs.google.com/document/d/1GlVEKYdJU2LqE6tXPPay_2tBmJTQrsQxAO27ZaeKAQk/edit#heading=h.86t1p0fnb9uz

Some advice I would add: if a journalist asks to interview you, try to understand where they are in their research. 

Do they have a narrative that they are already committed to and they're just trying to get a juicy quote from you? If so, it might not make sense to talk to them since they might twist whatever you say to fit the story they have already written.

Alternatively, are they in information gathering mode and are honestly trying to understand a complex issue? If they have not written their story yet and you think you can give them information that will make their writing more accurate, then it makes more sense to do an interview. 
 

I’m assuming and hoping Julia Wise or the respective team here has strong and adequate staffing.

I’ve got this worrying mental picture of Wise carrying both the community health and international public relations team as a one woman show, like with a headset, three keyboards and seven monitors typing furiously.

Honestly, I also low key want there to be strong people working for Wise, so we can refer to the resulting apparatus with awesome names:

  • Department of The Wise
  • The Wise Empire
  • The Era of Wise EA
  • Wisely, EA succeeds

It's definitely a bigger job than I can do on my own! As I said, staff at several organizations plus a communications advising firm are working on this.

We're also keeping an eye out for possible hires who are familiar with both media/communications work and EA. If that sounds like you, feel free to let me know (julia.wise@centreforeffectivealtruism.org) and I can let you know if we have a job posting.

Is there much ongoing outreach to journalists about EA projects which are really good according to most ethical views?

Eg - about Alvea, updates on CE-incubated charities, updates on Givewell’s donations, about how EA-affiliated people advised the UK government on COVID, animal welfare wins, etc

That list is almost identical to what I started with, yes! :)

In several cases the projects are already working on their own media outreach, but we’ll be trying to help where we can (perhaps introducing them to journalists they weren’t already in touch with) and to help smaller projects that might not have a media plan yet.

Cool, good to hear!

For the last several years, most EA organizations have pursued little or no media coverage. CEA’s advice on talking to journalists was (and is) mostly cautionary. I think there have been good reasons for that — engaging with media is only worth doing if you’re going to do it well, and a lot of EA projects don’t have it as their top priority.

 

I think this policy has been noticeably harmful, tbh. If the supporters of something won't talk to the media, the net result seems to be that the media talk to that thing's detractors instead, and so you trade low-fidelity positive reporting for lower-fidelity condemnation.

Two real-life anecdotes to support this: 

  1. At the EA hotel, we turned away a journalist at the door who'd initially contacted me sounding very positive about the idea. He wrote a piece about it anyway, and instead of interviews with the guests, concluded with a perfunctory summary of the neighbours' very lukewarm views.
  2. At a public board games event, we were introducing ourselves while setting up for a 2-hour game, and I described my interest in EA as a way of making conversation. The only person at the table who recognised the name turned to me and said 'oh... that's the child molestation thing, right?' It turns out everything he knew about the movement came from a note published by Kathy Forth making various unsubstantiated accusations about the EA and rationalist movements without distinguishing between them. I felt morally committed to the game at that point, so... that was an uncomfortable couple of hours.

Several EA organizations are working together with a communications advising firm to answer questions like

  • Who are key audiences we especially want to reach?
  • How do these audiences currently see EA?
  • What are the best ways to reach these audiences?
  • What EA ideas are especially important to convey?

I hope EA orgs end up sharing their new best guesses regarding these questions with the broader community, or at least reach out to smaller and newer organizations dedicated to outreach so that they can scale their outreach in a good direction and self-correct more easily.

Hi,

Could you clarify your section about connecting projects with journalists? I am not sure I entirely understand what you are looking for. Are there particular journalists you have connections with already? Is there a particular geography or topic you are thinking of, etc.?

Also, does this mean that CEA wants to coordinate and do outreach on behalf of all affiliated organizations and groups?

 

Thanks so much!

Charlie

Hi Charlie,

Yes, we have some connections with journalists already who have worked on EA-related pieces before or expressed interest in EA.

The types of projects we expect they might want to cover are those working on problems that non-EAs are also concerned about (like poverty, animal welfare, pandemic risk, and other catastrophic risks). We expect EA community-building projects to be less of a focus. We do expect stories about the amount of funding in EA, but we want to shed more light on the concrete work that the funding and community-building are actually for.

I don’t expect that CEA will do outreach on behalf of all EA-related organizations and groups, no. For example if EA Norway gets approached by a journalist wanting to write about the group or about EA in Norway, we’d be happy to help you decide whether to take the interview and prepare what you’d want to convey (as described in our guide to responding to journalists). But we don’t have capacity to do proactive outreach on behalf of community-building projects or organizations in EA.

If you (or anybody else) want to talk more about what this might mean for a project or organization you’re involved with, I’m happy to respond more specifically! julia.wise@centreforeffectivealtruism.org

Thank you, Julia, for this well-written post! I had been considering writing something along these lines (because of the increase in EAs working in policy and under public scrutiny), and I am very, very glad that this is not only being taken seriously, but also actively being worked on.

Thank you so much for this extremely important and helpful guide on EA messaging, Julia! I really appreciate it, and hope all EAs read it asap.

Social opinion dynamics seem to have the property where some action (or some inaction) can cause EA to move into a different equilibrium, with a potentially permanent increase or decrease in EA’s outreach and influence capacity. We should therefore tread carefully.

Unfortunately, social opinion dynamics are also extremely mysterious. Nobody knows precisely what action or inaction poses the risk of permanently closing some doors to additional outreach and influence. Part of the system is likely inherently unpredictable, but people are almost certainly not near the optimal level of knowledge about predicting such dynamics.

But perhaps EA movement-builders are already using and improving a cutting-edge model of social opinion dynamics!
