Quick takes

Mini EA Forum Update

We now have a unified @mention feature in our editor! You can use it to add links to posts, tags, and users. Thanks so much to @Vlad Sitalo — both for the GitHub PR introducing this feature, and for time and again making useful improvements to our open source codebase. 💜

Around EA Priorities:

Personally, I'm fairly strongly convinced that we should favor interventions that could help the future more than 20 years from now (a much lighter version of "Longtermism").

If I had a budget of $10B, I'd probably donate a fair bit to some existing AI safety groups. But it's tricky to know what to do with, say, $10k. And the fact that the SFF, OP, and others have funded some of the clearest wins makes it harder to know what's exciting on the margin.

I feel incredibly unsatisfied with the public EA dialogue around AI safety strategy now. From what I ...

Peter Wildeford
Thanks for the comment, I think this is very astute.

I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.

I don't think that all AI safety orgs are actually fully funded, since there are orgs that OP cannot fund for reasons other than cost-effectiveness (see Trevor's post and also OP's individual recommendations in AI), and OP also cannot and should not fund 100% of every org (it's not sustainable for orgs to have just one mega-funder; see also what Abraham mentioned here). There is also room for contrarian donation takes like Michael Dickens's.

I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.

That makes sense, but I'm feeling skeptical. There are just so many AI safety orgs now, and the technical ones generally aren't even funded by OP. 

For example: https://www.lesswrong.com/posts/9n87is5QsCozxr9fp/the-big-nonprofits-post

While a bunch of these salaries are on the high side, not all of them are.

Ozzie Gooen
On AI safety, I think it's fairly likely (40%?) that the level of x-risk (upon a lot of reflection) in the next 20 years is less than 20%, and that the entirety of the EA scene might be reducing it to, say, 15%. This means that the entire EA AI safety scene would improve the EV of the world by ~5%.

On one hand, this is a whole lot. But on the other, I'm nervous that it's not ambitious enough for what could be one of the most [combination of well-resourced, well-meaning, and analytical/empirical] groups of our generation. One thing I like about epistemic interventions is that the upper bounds could be higher. (There are some AI interventions that are more ambitious, but many do seem to be mainly about reducing x-risk by less than an order of magnitude, not increasing the steady-state potential outcome.)

I'd also note here that an EV gain of 5% might not be particularly ambitious. It could well be the case that many different groups can do this - so it's easier than it might seem if you think goodness is additive instead of multiplicative.
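A minimal sketch of the arithmetic behind that first estimate, taking the 20% and 15% figures at face value and assuming the world's EV scales with the probability of avoiding existential catastrophe (the 40% is the credence in this scenario, not part of the calculation):

\[
\Delta \mathrm{EV} \;\approx\; P(\text{x-risk without EA AI safety}) - P(\text{x-risk with it}) \;=\; 20\% - 15\% \;=\; 5\%,
\]

i.e. roughly five percentage points of survival probability, which is where the "~5%" figure comes from.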

The Donation Election will close (no more voting or donating) at an undisclosed time today... could be in ten minutes, could be in ten hours... get your final votes in soon just in case!

Voting has now closed! Thank you to everyone who voted, discussed and donated. And of course to all the organisations who wrote marginal funding posts! 

Stay tuned for a post about the winners tomorrow. 

Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies."

Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."

Obviously I can't speak for all of...


I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances. 

This isn't expressing disagreement, but I think it's also important to consider the social effects of our speaking in line with different epistemic practices, i.e.,

  • When someone says "AI will kill us all" do people understand us as expressing 100% con...

titotal
Funnily enough, I think this is true in the opposite direction. There is massive social pressure in EA spaces to take AI x-risk and the doomer arguments seriously. I don't think it's uncommon for someone who secretly suspects it's all a load of nonsense to diplomatically say a statement like the above in "polite EA company". Like you, I urge people who think AI x-risk is overblown to make their arguments loudly and repeatedly.
David Mathers🔸
It's easy for both to be true at the same time, right? That is, skeptics tone it down within EA, and believers tone it down when dealing with people *outside* EA.

Donated $180k to PauseAI (US and Global). Calling on more people to donate significant amounts. Pretty much the only way we're going to survive the next 5-10 years is by such efforts being successful. [X post]


Thanks. Yeah, I see a lot of disagreement votes. I was being too hyperbolic for the EA Forum. But I do put ~80% on it (which I guess translates to "pretty much"?), with the remaining ~20% being longer timelines, or dumb luck of one kind or another that we can't actually influence.

Ian Turner
Did it? My sense was only that (a) the amount of money from six-figure donations was nonetheless dwarfed by Open Philanthropy, and (b) as the number of professionals in EA has increased, the percentage of the community focused on donations has been diluted somewhat. But we’re still around!
MarcusAbramovitch
I basically think so, yes. I think it was mainly caused by, as you put it, "the amount of money from six-figure donations was nonetheless dwarfed by Open Philanthropy", and therefore people have scaled back or stopped since they don't think it's impactful. I basically don't think that's true, especially in this case of animal welfare, but also in terms of absolute impact, which is what actually matters as opposed to relative impact. FWIW, this is the same (IMO fallacious) argument "normies" make against donating: "my potential donations are so small compared to billionaires/governments/NGOs that I may as well just spend it on myself". But yes, many of the people I know who would consider themselves effective altruists, even committed effective altruists who earn considerable salaries, donate relatively little, at least compared to what they could be donating.

The very well written Notes on Effective Altruism pulls together some thoughts I've had over the years, and makes me think we should potentially drop the "how to do good in the best way possible" framing when introducing EA in favor of the "be more effective when trying to help others" framing. This honestly seems straightforwardly good to me from a number of different angles, and I think we should seriously be thinking about changing our overall branding to use this as a tagline instead.

But am I missing something here? Is there a reason the latter is worse than I think? Or some hidden benefits to the former that I'm not weighing? 

a moral intuition i have: to avoid culturally/conformity-motivated cognition, it's useful to ask:

if we were starting over, new to the world but with all the technology we have now, would we recreate this practice?

example: we start out, and there's us and these innocent fluffy creatures that can't talk to us, but they can be our friends. we're just learning about them for the first time. would we, at some point, spontaneously choose to kill them and eat their bodies, despite having plant-based foods, supplements, vegan-assuming nutrition guides, etc? to me, the answer seems obviously not. the idea would not even cross our minds.

(i encourage picking other topics and seeing how this applies)

Joseph Lemien
I've most often read/heard this argument in relation to alcohol and marijuana. Something along the lines of "if we had never had this thing and we discovered it today, would we make it legal/illegal?" I think of it in vaguely the same category as the veil of ignorance and other simple thought experiments that encourage us to step outside of our own individualized preferences.

I am organizing a fundraising competition between Philosophy Departments for AMF.
You can find it here: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9191
Previous editions have netted (badum-tschak) roughly $40,000:
https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9189
Any contributions are very welcome, as is sharing the fundraiser. A more official-looking announcement is on Daily Nous, a central blog of academic philosophy; people have found this ideal for sharing via e.g. department listservs.
https://dailynous.com/2024/12...

Suppose that the EA community were transported to the UK and US in 1776. How fast would slavery have been abolished? Recall that the slave trade ended in 1807 in the UK and 1808 in the US, and abolition happened between 1838 and 1843 in the British Empire and in 1865 in the US.

Assumptions:

  • Not sure how to define "EA community", but some groups that should definitely be included are the entire staff of OpenPhil and CEA, anyone who dedicates their career or donates more than 10% of their income along EA principles, and anyone with >5k EA Forum karma.
  • EAs have the same pro...

I guess on one hand, if this were the case, then EAs would be well-represented in America, given that its population in 1776 was just 2.5M, vs. the UK population of 8M.

On the other hand, I'd assume that if they were distributed across the US, many would have been farmers / low-income workers / slaves, so wouldn't have been able to contribute much. There is an interesting question on how much labor mobility or inequality there was at the time. 

Also, it seems like EAs got incredibly lucky with Dustin Moskovitz + Good Ventures. It's hard to picture just how lucky we were with that, and what the corresponding scenarios would have been like in 1776. 

Could make for neat historical fiction. 

Thomas Kwa
I disagree with a few points, especially paragraph 1. Are you saying that people were worried about abolition slowing down economic growth and lowering standards of living? I haven't heard this as a significant concern-- free labor was perfectly capable of producing cotton at a small premium, and there were significant British boycotts of slave-produced products like cotton and sugar.

As for utilitarian arguments, that's not the main way I imagine EAs would help. EA pragmatists would prioritize the cause for utilitarian reasons and do whatever is best to achieve their policy goals, much as we are already doing for animal welfare. The success of EAs in animal welfare, or indeed anywhere other than x-risk, is in the implementation of things like corporate campaigns rather than mass spreading of arguments. Even in x-risk, an alliance with natsec people has effected concrete policy outcomes like compute export controls.

To paragraph 2: the number of philosophers in contemporary EA is pretty low; we just hear about them more. And while abolition might have been relatively intractable in the US, my guess is the UK could have been sped up.

I basically agree with paragraph 3, though I would hope that, if it came to it, we would find something more economical than directly freeing slaves.

Overall, thanks for the thoughtful response! I wouldn't mind discussing this more.
David T
Absolutely, slaveholders and those dependent on them were worried about their own standard of living (and, more importantly, specifically not interested in significantly improving the standard of living of plantation slaves, and not because they'd never heard anyone put forward the idea that all people were equal. I mean, some of them were on first-name terms with Thomas Paine and signed the Declaration of Independence and still didn't release their slaves!). I'm sure most people who were sympathetic to EA ideas would have strongly disagreed with this prioritisation decision, just like the Quakers or Jeremy Bentham. I just don't think they'd have been more influential than the Quakers or Jeremy Bentham, or indeed the deeply religious abolitionists led by William Wilberforce.

I agree the number of philosophers in EA is quite low, but I'm assuming the influence centre would be similar, possibly even more Oxford-centric, in a pre-internet, status-obsessed social environment where discourse is more centred on physical places and elite institutions[1]. For related reasons I think they'd be influential in the sort of place where abolitionist arguments were already getting a fair hearing, and of little consequence in slaveowning towns in the Deep South.

In the UK, I think the political process was held up by the weight of vested interests in keeping it going in Parliament, and by beliefs that slavery was "the natural order", rather than by any lack of zeal or arguments or resources on the abolitionist side (though I'm sure they'd have been grateful for press baron Moskovitz's donations!). I think you could make the argument that slave trade abolition in the UK was actually pretty early considering the revenues it generated, who benefited, and the generally deeply inegalitarian social values and assumption of racial superiority of British society at the time.

I agree this is probably the main way that EAs would try to help, I just don't think abolitionism is an area where this

EA in the wild: I'm having trouble adding a screenshot, but I recently made an online purchase, and at the bottom of the checkout page was a "give 1% of your purchase to a high-impact cause" option - and it was featuring Giving What We Can's funds!

Always fun to see EA in unexpected places. :) 

Giving What We Can
Was it Bullet Journal???

Yes it was! 

In case you haven't seen it, CEA has redone their website. I like the new look, and the content makes it much easier to understand the scope of their work. Bravo to whoever worked on this!

I think I broadly like the idea of Donation Week. 

One potential weakness is that I'm curious if it promotes the more well-known charities due to the voting system. I'd assume that these are somewhat inversely correlated with the most neglected charities.

Relatedly, I'm curious if future versions could feature specific subprojects/teams within charities. "Rethink Priorities" is a rather large project compared to "PauseAI US"; I assume it would be interesting if different parts of it were put here instead.

(That said, in terms of the donation, I'd hop...

One potential weakness is that I'm curious if it promotes the more well-known charities due to the voting system. I'd assume that these are somewhat inversely correlated with the most neglected charities.

I guess this isn't necessarily a weakness if the more well-known charities are more effective? I can see the case that: a) they might not be neglected in EA circles, but may be very neglected globally compared to their impact and that b) there is often an inverse relationship between tractability/neglectedness and importance/impact of a cause area and char...

Ozzie Gooen
I (with limited information) think the EA Animal Welfare Fund is promising, but wonder how much of that matches the intention of this experiment. It can be a bit underwhelming if an experiment meant to get the crowd's takes on charities winds up concluding, "just let the current few experts figure it out." Though I guess that does represent a good state of the world (the public thinks that the current experts are basically right).

I occasionally hear implications that cyber + AI + rogue human hackers will cause mass devastation, in ways that roughly match "lots of cyberattacks happening all over." I'm skeptical of this causing over $1T/year in damages (for over 5 years, pre-TAI), and definitely skeptical of it causing an existential disaster.

There are some much more narrow situations that might be more X-risk-relevant, like [A rogue AI exfiltrates itself] or [China uses cyber weapons to dominate the US and create a singleton], but I think these are so narrow they should really be identified i...

Pretty funny CGD blog post by Victoria Fan and Rachel Bonnifield: If the Global Health Donors Were Your Parents: A (Whimsical) Comparative Perspective. Quoting at length (with some reformatting):

Navigating the global health funding landscape can be confusing even for global health veterans; there are scores of donors and multilateral funding mechanisms, each with its own particular structure, personality, and philosophy. For the uninitiated, PEPFAR, GAVI, PMI, WHO, the Global Fund, UNITAID, and the Gates Foundation can all appear obscure and intimidating.

...

How do I offset my animal product consumption as easily as possible? The ideal product would be a basket of offsets that's

  • easy to set up-- ideally a single monthly donation equivalent to the animal product consumption of the average American, which I can scale up a bit to make sure I'm net positive
  • based on well-founded impact estimates
  • affects a wide variety of animals reflecting my actual diet-- at a minimum my donation would be split among separate nonprofits improving the welfare of mammals, birds, fish, and invertebrates, and ideally it would closely tr...
Toby Tremlett🔹
Specifically this. 
Thomas Kwa
Thanks, I've started donating $33/month to the FarmKind bonus fund, which is double the calculator estimate for my diet.[1] I will probably donate ~$10k of stocks in 2025 to offset my lifetime diet impact-- is there any reason not to do this? I've already looked at the non-counterfactual matching argument, which I don't find convincing.

[1] I basically never eat chicken, substituting it with other meats, so I reduced the poultry category by 2/3 and allocated that proportionally between the beef and pork categories.
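A minimal sketch of the adjustment described in footnote [1], with purely hypothetical per-category numbers standing in for the FarmKind calculator's per-category estimates; only the 2/3 poultry reduction, the proportional reallocation to beef and pork, and the doubling come from the comment above.

```python
# Hypothetical monthly offset costs (USD) per animal-product category for an
# average American diet -- purely illustrative stand-ins for calculator output.
baseline = {"poultry": 8.0, "beef": 2.0, "pork": 2.5, "fish": 2.0, "eggs": 1.5}

adjusted = dict(baseline)

# "I basically never eat chicken, substituting it with other meats, so I reduced
# the poultry category by 2/3 and allocated that proportionally between the beef
# and pork categories."
removed = adjusted["poultry"] * (2 / 3)
adjusted["poultry"] -= removed

beef_pork = adjusted["beef"] + adjusted["pork"]
adjusted["beef"] += removed * adjusted["beef"] / beef_pork
adjusted["pork"] += removed * adjusted["pork"] / beef_pork

estimate = sum(adjusted.values())   # adjusted calculator estimate for this diet
donation = 2 * estimate             # doubled, as described in the comment

print(f"estimate: ${estimate:.2f}/month, donation: ${donation:.2f}/month")
```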

One reason to perhaps wait before offsetting your lifetime impact all at once could be to preserve your capital’s optionality. Cultivated meat could in the future become common and affordable, or your dietary preferences could otherwise change such that $10k was too much to spend.

Your moral views on offsetting could also change. For example, you might decide that the $10k would be better spent on longtermist causes, or that it’d be strictly better to donate the $10k to the most cost-effective animal charity rather than offsetting.

I basically never eat chicken

That’s awesome. That probably gets you 90% of the way there already, even if there were no offset!

Possibly a high-effort, low-reward suggestion for the forum team, but I'd love to be able (with a single click) to listen to forum posts as a podcast via Google's NotebookLM. I think this could increase my consumption of long-form posts by about 2x.

The UK government has announced the foundation of the Laboratory for AI Security Research (LASR) with around £8m funding, which appears to have largely flown under the radar.

---

To help the UK stay ahead in the “new AI arms race” the Chancellor of the Duchy of Lancaster will announce a new Laboratory for AI Security Research (LASR) to protect the UK and its allies against new threats, saying:

"The lab will pull together world-class industry, academic and government experts to assess the impact of AI on our national security.

"While AI can amplify existing cyb... (read more)

Sometimes people mention "expanding the moral circle" as if it's universally good. The US flag is an item that has expanded and contracted in how much care it gets.

The US Flag Code states: "The flag represents a living country and is itself considered a living thing." When I was a child, my scout troop taught us that American flags should never touch the ground, and a worn-out flag should be disposed of respectfully by burial (in a wooden box, as if it were a person) or burning (while saluting the flag and reciting the Pledge of Allegiance) and then buryin...

Good point and good fact. 

My sense, though, is that if you scratch most "expand the moral circle" statements you find a bit of implicit moral realism. I think generally there's an unspoken "...to be closer to its truly appropriate extent", and that there's an unspoken assumption that there'll be a sensible basis for that extent. Maybe some people are making the statement prima facie though. Could make for an interesting survey.

Eevee🔹
I did not know this. That's wild.

Is anyone keeping tabs on where AI's actually being deployed in the wild? I feel like I mostly see (and so this could be a me problem) big-picture stuff, but there seems to be a proliferation of small actors doing weird stuff. Twitter / X seems to have a lot more AI content, and apparently YouTube comments do now as well (per a conversation I stumbled on while watching some YouTube recreationally - language & content warnings: https://youtu.be/p068t9uc2pk?si=orES1UIoq5qTV5TH&t=2240).

We've decided to add another $5000 match for the last stretch of the Donation Election (you can see how much of it is left on the banner). This is in addition to the first $5000, which we already matched. I know that the word shouldn't be used loosely, but this match seems genuinely counterfactual to me - based on the pace of donations so far, I put low odds (20%) on the full match amount being reached. That is, unless potential donors suddenly see the value of Bulbys exploding from Forum buttons.

The Marginal Funding posts were really valuable this y...
