Quick takes

Which organizations can one donate to in order to help people in Sudan effectively? Cf. https://www.nytimes.com/2025/04/19/world/africa/sudan-usaid-famine.html?unlocked_article_code=1.BE8.fw2L.Dmtssc-UI93V&smid=url-share

I used to feel so strongly about effective altruism. But my heart isn't in it anymore.

I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on.

But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and mo... (read more)


On cause prioritization, is there a more recent breakdown of how more and less engaged EAs prioritize? Like an update of this? I looked for this from the 2024 survey but could not find it easily: https://forum.effectivealtruism.org/posts/sK5TDD8sCBsga5XYg/ea-survey-cause-prioritization 

Benevolent_Rain
Yes, this seems similar to how I feel: I think the major donor(s) have re-prioritized, but I'm not so sure how many people have switched from other causes to AI. I think EA is now left more to the grassroots, and the forum has probably increased in importance. As long as the major donors don't make the forum all about AI - otherwise we'll have to create a new forum! But as donors shift towards AI, the forum will inevitably see more AI content. Maybe some feature to "balance" the forum posts so one gets representative content across all cause areas? Much like they made it possible to separate out community posts?
Jeroen Willems🔸
Good point, I guess my lasting impression wasn't entirely fair to how things played out. In any case, the most important part of my message is that I hope he doesn't feel discouraged from actively participating in EA.

I'd be excited to see 1-2 opportunistic EA-rationalist types looking into where marginal deregulation is a bottleneck to progress on x-risk/GHW, circulating 1-pagers among experts in these areas, and then pushing the ideas to DOGE/Mercatus/Executive Branch. I'm thinking things like clinical trials requirements for vaccines, UV light, anti-trust issues facing companies collaborating on safety and security, maybe housing (though I'm not sure which are bottlenecked by federal action). For most of these there's downside risk if the message is low fidelity, the... (read more)

The key objection I always have to starting new charities (as Charity Entrepreneurship used to focus on) is: isn't money usually the bottleneck, rather than ideas? We already have a ton of amazing ideas for how to use more funds, and if we found new ones, it may be very hard to reduce the uncertainty sufficiently to make productive decisions. What do you think, Ambitious Impact?

Jason
A new organization can often compete for dollars that weren't previously available to an EA org -- such as government or non-EA foundation grants that are only open to certain subject areas. 

That is actually a good point, thanks Jason.

In ~2014, one major topic among effective altruists was "how to live for cheap."

There wasn't much funding, so it was understood that a major task for doing good work was finding a way to live with little money.

Money gradually increased, peaking with FTX in 2022.

Now I think it might be time to bring back some of the discussions about living cheaply.

The one thing that matters more for this than anything else is setting up an EA hub in a low cost of living area with decent visa options. The thing that matters second most is setting up group houses in high cost of living cities with good networking opportunities.

Hot take, but political violence is bad and will continue to be bad in the foreseeable near-term future. That's all I came here to say folks, have a great rest of your day.

I'm not sure how to word this properly, and I'm uncertain about the best approach to this issue, but I feel it's important to get this take out there.

Yesterday, Mechanize was announced, a startup focused on developing virtual work environments, benchmarks, and training data to fully automate the economy. The founders include Matthew Barnett, Tamay Besiroglu, and Ege Erdil, who are leaving (or have left) Epoch AI to start this company.

I'm very concerned we might be witnessing another situation like Anthropic, where people with EA connections start a company... (read more)

evhub
The situation doesn't seem very similar to Anthropic. Regardless of whether you think Anthropic is good or bad (I think Anthropic is very good, but I work at Anthropic, so take that as you will), Anthropic was founded with the explicitly altruistic intention of making AI go well. Mechanize, by contrast, seems to mostly not be making any claims about altruistic motivations at all.

You're right that this is an important distinction to make.

Jeroen Willems🔸
You make a fair point, but what other tool do we have than our voice? I've read Matthew's last post and skimmed through others. I see some concerning views, but I can also understand how he arrives at them. What puzzles me often with some AI folks, though, is the level of confidence needed to take such high-stakes actions. Why not err on the side of caution when the stakes are potentially so high? Perhaps instead of trying to change someone's moral views, we could just encourage taking moral uncertainty seriously? I personally lean towards hedonic act utilitarianism, yet I often default to "common sense morality" because I'm just not certain enough. I don't have strong feelings on how best to tackle this, and I won't have good answers to any questions. I'm just voicing concern and hoping others with more expertise might consider engaging constructively.

Back in October 2024, I tried to test various LLM Chatbots with the question:

"Is there a way to convert a correlation to a probability while preserving the relationship 0 = 1/n?"

Years ago, I came up with an unpublished formula that does just that:

p(r) = (n^r * (r + 1)) / (2^r * n)

So I was curious if they could figure it out. Alas, back in October 2024, they all made up formulas that didn't work.
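For the curious, the formula's claimed property is easy to check numerically. The sketch below uses a hypothetical helper name, `corr_to_prob`, for the formula quoted above:

```python
def corr_to_prob(r: float, n: int) -> float:
    """Map a correlation r in [0, 1] to a probability, given n options:
    p(r) = (n^r * (r + 1)) / (2^r * n)."""
    return (n**r * (r + 1)) / (2**r * n)

# r = 0 recovers the uniform baseline 1/n, and r = 1 gives certainty:
print(corr_to_prob(0, 4))  # → 0.25 (= 1/4)
print(corr_to_prob(1, 4))  # → 1.0
```

With n = 4, a zero correlation maps to the chance baseline 1/4, and a perfect correlation maps to probability 1, which is the "0 = 1/n" relationship the question asks about.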

Yesterday, I tried the same question on ChatGPT and, while it didn't get it quite right, it came very, very close. So, I modified the question to be more specific:... (read more)

huw

Per Bloomberg, the Trump administration is considering restricting the equivalency determination for 501(c)(3)s as early as Tuesday. The equivalency determination allows 501(c)(3)s to regrant money to foreign, non-tax-exempt organisations while maintaining tax-exempt status, so long as an attorney or tax practitioner attests that the organisation is equivalent to a local tax-exempt one.

I’m not an expert on this, but it sounds really bad. I guess it remains to be seen if they go through with it.

Regardless, the administration is allegedly also preparing to directl... (read more)

I guess orgs need to be more careful about who they hire as forecasting/evals researchers in light of a recently announced startup.

Sometimes things happen, but three people at the same org...

This is also a massive burning of the commons. It is valuable for forecasting/evals orgs to be able to hire people with a diversity of viewpoints in order to counter bias. It is valuable for folks to be able to share information freely with folks at such forecasting orgs without having to worry about them going off and doing something like this.

However, this only works... (read more)

Jason
I agree that we need to be careful about who we are empowering.  "Value alignment" is one of those terms which has different meanings to different people. For example, the top hit I got on Google for "effective altruism value alignment" was a ConcernedEAs post which may not reflect what you mean by the term. Without knowing exactly what you mean, I'd hazard a guess that some facets of value alignment are pretty relevant to mitigating this kind of risk, and other facets are not so important. Moreover, I think some of the key factors are less cognitive or philosophical than emotional or motivational (e.g., a strong attraction toward money will increase the risk of defecting, a lack of self-awareness increases the risk of motivated reasoning toward goals one has in a sense repressed). So, I think it would be helpful for orgs to consider what elements of "value alignment" are of particular importance here, as well as what other risk or protective factors might exist outside of value alignment, and focus on those specific things.

Agreed. "Value alignment" is a simplified framing.


Is anyone in the U.S. savvy with how to deduct from your taxes the value of stocks which have been donated to eligible charities? The stocks have been held for decades with a very high value and capital gains. Would love help as my tax guy hasn't seen this before.

Update for anyone else who may find it useful:

You need to fill out Form 8283: https://www.irs.gov/pub/irs-pdf/f8283.pdf

You can calculate the "Fair Market Value" of the stock(s) you donated by averaging the highest and lowest price of that stock on the day you donated it. I used this page to find that, but you can replace "MSFT" in the URL with whatever stock it is you donated.

https://www.wsj.com/market-data/quotes/MSFT/historical-prices
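The averaging rule above can be sketched in a couple of lines; the function name and the example prices are made up for illustration:

```python
def fair_market_value(day_high: float, day_low: float, shares: float) -> float:
    """FMV for publicly traded stock: the mean of the day's highest and
    lowest quoted prices, times the number of shares donated."""
    return (day_high + day_low) / 2 * shares

# Hypothetical example: 100 shares donated on a day the stock traded
# between 421.50 and 430.50.
print(fair_market_value(430.50, 421.50, 100))  # → 42600.0
```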

I'm currently reviewing Wild Animal Initiative's strategy in light of the US political situation. The rough idea is that things aren't great here for wild animal welfare or for science, we're at a critical time in the discipline when things could grow a lot faster relatively soon, and the UK and the EU might generally look quite a bit better for this work in light of those changes. We do already support a lot of scientists in Europe, so this wouldn't be a huge shift in strategy. It’s more about how much weight to put toward what locations for community and ... (read more)

Is there a good list of the highest leverage things a random US citizen (probably in a blue state) can do to cause Trump to either be removed from office or seriously constrained in some way? Anyone care to brainstorm?

Like the safe state/swing state vote swapping thing during the election was brilliant - what analogues are there for the current moment, if any?


This post (especially this section) explores this. There are also some ideas on this website. I've copied and pasted the ideas from that site below. I think it's written with a more international perspective, but likely has some overlap with actions which could be taken by Americans.

... (read more)
David Mathers🔸
"Because once a country embraces Statism, it usually begins an irreversible process of turning into a "shithole country", as Trump himself eloquently put it."

Ignoring tiny islands (some of them with dubious levels of independence from the US), the 10 nations with the largest percentages of GDP as government revenue include Finland, France, Belgium and Austria, although, also, yes, Libya and Lesotho. In general, the top of the list for government revenue as a % of GDP seems to be a mixture of small islands, petro states, and European welfare-state democracies, not places that are particularly impoverished or authoritarian: https://en.wikipedia.org/wiki/List_of_countries_by_government_spending_as_percentage_of_GDP#List_of_countries_(2024)

Meanwhile, the countries with low levels of government revenue as a % of GDP that aren't currently having some kind of civil war are places like Bangladesh, Sri Lanka, Iran and (weirdly) Venezuela. This isn't a perfect proxy for "statism", obviously, but I think it shows that things are more complicated than simplistic libertarian analysis would suggest. Big states (in purely monetary terms) often seem to be a consequence of success. Maybe they also hold back further success, of course, but countries don't seem to actively degenerate once they arrive (i.e. growth might slow, but they are not in permanent recession).
Chakravarthy Chunduri
You make good points. Obviously, every country is either definitionally or practically a nation-state. But IMHO the only conditions under which individual freedoms and economic freedoms survive in a country are when Statism is not embraced but is instead held at arm's length and treated with caution and hesitation.

My argument for voting against Trump and Trumpists in the 2026 midterms, for an American citizen, is this: the current situation is directly a result of both Republican and Democrat politicians explicitly trying to increase and abuse state power for their definition of "the greater good", which the other side disagrees with. Up to an arbitrary point, this can be considered the ordinary functioning of democratic nation-states. Beyond that arbitrary point, the presence or absence of democracy is irrelevant, and the very nature of the social contract changes. The fact that the arbitrary point is unknown or unpredictable is precisely the reason that Statism should not be embraced but instead be held at arm's length and treated with caution and hesitation!

Every dollar the government takes out of your pocket or restricts you from earning, and every sector of the economy or society the government feels the need to "direct" or "reshape" for the greater good, means less freedom for the individual and for private citizens as a whole. If Republican voters abdicate too much sovereignty to support Trumpist pet projects, then even if the Dems ultimately defeat Trumpists, or even if Vance turns out to be a much better president, the social contract may or may not revert back to what it used to be. Which could really suck.

Apply now for EA Global: London 2025 happening June 6–8. Applications close on May 18 at 11:59 pm BST (apply here)!

We're excited to be hosting what's shaping up to be our biggest EAG yet at the InterContinental London–The O2. We expect to welcome over 1,500 attendees.

We have some travel funding available. More information can be found on the event page and EA Global FAQ.

If you have any questions, please email us at hello@eaglobal.org!

Should the EA Forum facilitate donation swaps? 🤔 Judging from the number of upvotes on this recent swap ask and the fact that the old donation swap platform has retired, maybe there's some unmet demand here? I myself would like to swap donations later this year. Maybe even a low-effort solution (like an open thread) could go a long way?

Cullen 🔸
There used to be a website to try to coordinate this; not sure what ever happened to it.
Alfredo Parra 🔸
I assume it's the one I linked in my original post? Catherine announced it was discontinued. :/

Ah sorry, I read your post too quickly :-)

I just learned about Zipline, the world's largest autonomous drone delivery system, from YouTube tech reviewer Marques Brownlee's recent video, so I was surprised to see Zipline pop up in a GiveWell grant writeup of all places. I admittedly had the intuition that if you're optimising for cost-effectiveness as hard as GW do, and that your prior is as skeptical as theirs is, then the "coolness factor" would've been stripped clean off whatever interventions pass the bar, and Brownlee's demo both blew my mind with its coolness (he placed an order on mobile for... (read more)

NickLaing
Don't be too inclined to trust my in-the-field experience; Zipline has plenty of that too! I just had a read of their study but couldn't see how they calculated costing (the most important thing).

One thing to note is that vaccine supply chains currently often unnecessarily use trucks and cars rather than motorcycles because, well, GAVI has funded them, so Zipline may well be fairly comparing to the status quo rather than to other more efficient methods. For the life of me I don't know why so many NGOs use cars for so many things that public transport and motorcycles could do sometimes orders of magnitude cheaper. Comparing to the status quo is a fair enough thing to do (probably what I would do) but might not be investigating the most cost-effective way of doing things.

Also, I doubt they are including R&D and the real drone costs in the costs of that study, but I'll try to dig and get more detail. It annoys me that most modelling studies focus so hard on their math method, rather than explaining more about how they estimate their cost input data, which is really what defines the model itself.

The modelling study has a "costs" section (quoted below), but for what it's worth GiveWell said they "were unable to quickly assess how key parameters like program costs... were being estimated" so I don't think this quote will satisfy you:

Given the Ghana Health Service (GHS)'s dominant role, the government perspective in this analysis included healthcare treatment costs and incremental last mile delivery (LMD) costs. The societal perspective also accounted for externalities such as caregivers’ wage loss and transport costs.

To calculate the total cost for

... (read more)
Mo Putera
Thanks for the links! And for the pics; makes me feel like I'm glimpsing the future, but it's already here, just unevenly distributed. Everything you say jibes with both what GiveWell said about Zipline in their grant writeup and the vibe I get from their about page, stuff like [...] and 3 out of their 4 most prominent "output statistic" claims being health-oriented. Yeah, the pointer to snakebite antivenom delivery feels useful; you reminded me of how big a problem it is.

Some AI research projects that (afaik) haven't had much work done on them and would be pretty interesting:

  • If the US were to co-build secure data centres in allied countries, would that be geopolitically stabilising or destabilising?
  • What AI safety research agendas could be massively sped up by AI agents? What properties do they have (e.g. easily checkable, engineering > conceptual ...)?
  • What will the first non-AI R&D uses of powerful and general AI systems be?
  • Are there ways to leverage cheap (e.g. 100x lower than present-day cost) intelligence or manu
... (read more)
calebp
Do you have a list of research questions that you think could easily be sped up with AI systems? I suspect that I'm more pessimistic than you are due to concerns around scheming AI agents doing intentional research sabotage, though I agree that the affordances of AI agents might make some currently intractable agendas more tractable.
calebp
Thank you for replying - it's great that someone within the industry shared their perspective! I don't really understand why that would make the US building DCs in allied countries destabilising. The short answer for why it might be stabilising is:

  • It gives non-US actors more leverage, making deals where benefits are shared more likely.
  • It's harder for the US to defect on commitments to develop models safely and not misuse them if it's easy for their allies to spy on them (or they have made commitments for DC use to be monitored).
  • It keeps the Western democracies ahead of the CCP.

I think that allied countries themselves building DCs might be comparably stabilising - it gives more leverage to allied countries, at the cost of baking in less coordination and affordances to make deals around how AI is used and developed.

I didn't articulate myself clearly enough — first-time poster blues! I'd argue these co-builds are a destabilising force for the same reason I mentioned Pine Gap (without explaining myself, whoops).

The benefits allies receive from these facilities are often at the expense of sovereignty over the site or technical oversight by local regulatory bodies. 

Now, this tradeoff might be worth it for the intelligence agencies, but the US presence is often conspicuous and jarring to the local population, even in a remote area like Alice Springs, where PG is loca... (read more)

I've been reading AI As Normal Technology by Arvind Narayanan and Sayash Kapoor: https://knightcolumbia.org/content/ai-as-normal-technology. You may know them as the people behind the AI Snake Oil blog.

I wanted to open up a discussion about their concept-cutting of AI as "normal" technology, because I think it's really interesting, but also gets a lot of stuff wrong.

Was sent a resource in response to this quick take on effectively opposing Trump that at a glance seems promising enough to share on its own: 

From A short to-do list by the Substack Make Trump Lose Again:

  1. Friends in CA, AZ, or NM: Ask your governor to activate the national guard (...)
  2. Friends in NC: Check to see if your vote in the NC Supreme Court race is being challenged (...)
  3. Friends everywhere: Call your senators and tell them to vote no on HR 22 (...)
  4. Friends everywhere: If you’d like to receive personalized guidance on what opportunities are best su
... (read more)

In case this is useful to anyone in the future: LTFF does not provide funding for for-profit organizations. I wasn't able to find mentions of this online, so I figured I should share.

I was made aware of this after being rejected today for applying to LTFF as a for-profit. We updated them 2 weeks ago on our transition into a non-profit, but it was unfortunately too late, and we'll need to send a new non-profit application in the next funding round.

jacquesthibs
Ok, but the message I received was specifically saying you can’t fund for-profits and that we can re-apply as a non-profit: "We rejected this on the grounds that we can't fund for-profits. If you reorganize as a non-profit, you can reapply to the LTFF in an future funding round, as this would change the application too significantly for us to evaluate it in this funding round. Generally, we think it's good when people run for-profits, and other grant makers can fund them." We will reconsider going the for-profit route in the future (something we’ve thought a lot about), but for now have gotten funding elsewhere as a non-profit to survive for the next 6 months.
calebp
Sorry, I agree this message is somewhat misleading - I'll ask our ops team to review this.

Just a quick note, I completely understand where you guys are coming from and just wanted to share the information. This wasn’t intended as a call-out or anything. I trust you guys and appreciate the work you do!
