We now have a unified @mention feature in our editor! You can use it to add links to posts, tags, and users. Thanks so much to @Vlad Sitalo — both for the GitHub PR introducing this feature, and for time and again making useful improvements to our open source codebase. 💜
Around EA Priorities:
Personally, I'm fairly strongly convinced that we should favor interventions that could help the future more than 20 years from now. (A much lighter version of "Longtermism".)
If I had a budget of $10B, I'd probably donate a fair bit to some existing AI safety groups. But it's tricky to know what to do with, say, $10k. And the fact that the SFF, OP, and others have funded some of the clearest wins makes it harder to know what's exciting on-the-margin.
I feel incredibly unsatisfied with the public EA dialogue around AI safety strategy now. From what I ...
I think there's a (mostly but not entirely accurate) vibe that all AI safety orgs that are worth funding will already be approximately fully funded by OpenPhil and others, but that animal orgs (especially in invertebrate/wild welfare) are very neglected.
That makes sense, but I'm feeling skeptical. There are just so many AI safety orgs now, and the technical ones generally aren't even funded by OP.
For example: https://www.lesswrong.com/posts/9n87is5QsCozxr9fp/the-big-nonprofits-post
While a bunch of these salaries are on the high side, not all of them are.
The Donation Election will close (no more voting or donating) at an undisclosed time today... could be in ten minutes, could be in ten hours... get your final votes in soon just in case!
Yudkowsky's message is "If anyone builds superintelligence, everyone dies." Zvi's version is "If anyone builds superintelligence under anything like current conditions, everyone probably dies."
Yudkowsky contrasts those framings with common "EA framings" like "It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."
Obviously I can't speak for all of...
I think the difference between me and Yudkowsky has less to do with social effects on our speech and more to do with differing epistemic practices, i.e. about how confident one can reasonably be about the effects of poorly understood future technologies emerging in future, poorly understood circumstances.
This isn't expressing disagreement, but I think it's also important to consider the social effects of our speaking in line with different epistemic practices, i.e.,
The very well-written Notes on Effective Altruism pulls together some thoughts I've had over the years, and makes me think we should potentially drop the "how to do good in the best way possible" framing when introducing EA in favor of the "be more effective when trying to help others" framing. This honestly seems straightforwardly good to me from a number of different angles, and I think we should seriously consider changing our overall branding to use this as a tagline instead.
But am I missing something here? Is there a reason the latter is worse than I think? Or some hidden benefits to the former that I'm not weighing?
a moral intuition i have: to avoid culturally/conformistly-motivated cognition, it's useful to ask:
if we were starting over, new to the world but with all the technology we have now, would we recreate this practice?
example: we start out, and there's us and these innocent fluffy creatures that can't talk to us, but they can be our friends. we're just learning about them for the first time. would we, at some point, spontaneously choose to kill them and eat their bodies, despite us having plant-based foods, supplements, vegan-assuming nutrition guides, etc.? to me, the answer seems obviously not. the idea would not even cross our minds.
(i encourage picking other topics and seeing how this applies)
I am organizing a fundraising competition between Philosophy Departments for AMF.
You can find it here: https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9191
Previous editions have netted (badum-tschak) roughly $40,000:
https://www.againstmalaria.com/FundraiserGroup.aspx?FundraiserID=9189
Any contributions are very welcome, as is sharing the fundraiser. A more official-looking announcement is on Daily Nous, a central blog of academic philosophy; people have found this ideal for sharing via e.g. department listservs.
https://dailynous.com/2024/12...
Suppose that the EA community were transported to the UK and US in 1776. How fast would slavery have been abolished? Recall that the slave trade ended in 1807 in the UK and 1808 in the US, and abolition happened between 1838 and 1843 in the British Empire and in 1865 in the US.
Assumptions:
I guess on one hand, if this were the case, then EAs would be well-represented in America, given that its population in 1776 was just 2.5M, vs. the UK population of 8M.
On the other hand, I'd assume that if they were distributed across the US, many would have been farmers / low-income workers / slaves, so they wouldn't have been able to contribute much. There is an interesting question about how much labor mobility or inequality there was at the time.
Also, it seems like EAs got incredibly lucky with Dustin Moskovitz + Good Ventures. It's hard to picture just how lucky we were with that, and what the corresponding scenarios would have been like in 1776.
Could make for neat historical fiction.
EA in the wild: I'm having trouble adding a screenshot but I recently made an online purchase and at the bottom of the checkout page was a "give 1% of your purchase to a high-impact cause" - and it was featuring Giving What We Can's funds!
Always fun to see EA in unexpected places. :)
In case you haven't seen it, CEA has redone their website. I like the new look, and the content makes it much easier to understand the scope of their work. Bravo to whoever worked on this!
I think I broadly like the idea of Donation Week.
One potential weakness is that I wonder whether the voting system promotes the more well-known charities. I'd assume that these are somewhat inversely correlated with the most neglected charities.
Relatedly, I'm curious whether future versions could feature specific subprojects/teams within charities. "Rethink Priorities" is a rather large project compared to "PauseAI US"; I think it would be interesting if different parts of it were listed here instead.
(That said, in terms of the donation, I'd hop...
One potential weakness is that I wonder whether the voting system promotes the more well-known charities. I'd assume that these are somewhat inversely correlated with the most neglected charities.
I guess this isn't necessarily a weakness if the more well-known charities are more effective? I can see the case that: a) they might not be neglected in EA circles, but may be very neglected globally compared to their impact and that b) there is often an inverse relationship between tractability/neglectedness and importance/impact of a cause area and char...
I occasionally hear implications that cyber + AI + rogue human hackers will cause mass devastation, in ways that roughly match "lots of cyberattacks happening all over." I'm skeptical of this causing over $1T/year in damages (for over 5 years, pre-TAI), and definitely of it causing an existential disaster.
There are some much more narrow situations that might be more X-risk-relevant, like [A rogue AI exfiltrates itself] or [China uses cyber weapons to dominate the US and create a singleton], but I think these are so narrow they should really be identified i...
Pretty funny CGD blog post by Victoria Fan and Rachel Bonnifield: If the Global Health Donors Were Your Parents: A (Whimsical) Comparative Perspective. Quoting at length (with some reformatting):
...Navigating the global health funding landscape can be confusing even for global health veterans; there are scores of donors and multilateral funding mechanisms, each with its own particular structure, personality, and philosophy. For the uninitiated, PEPFAR, GAVI, PMI, WHO, the Global Fund, UNITAID, and the Gates Foundation can all appear obscure and intimidating.
How do I offset my animal product consumption as easily as possible? The ideal product would be a basket of offsets that's
One reason to perhaps wait before offsetting your lifetime impact all at once could be to preserve your capital’s optionality. Cultivated meat could in the future become common and affordable, or your dietary preferences could otherwise change such that $10k was too much to spend.
Your moral views on offsetting could also change. For example, you might decide that the $10k would be better spent on longtermist causes, or that it’d be strictly better to donate the $10k to the most cost-effective animal charity rather than offsetting.
I basically never eat chicken
That’s awesome. That probably gets you 90% of the way there already, even if there were no offset!
The UK government has announced the foundation of the Laboratory for AI Security Research (LASR) with around £8m funding, which appears to have largely flown under the radar.
---
To help the UK stay ahead in the “new AI arms race” the Chancellor of the Duchy of Lancaster will announce a new Laboratory for AI Security Research (LASR) to protect the UK and its allies against new threats, saying:
"The lab will pull together world-class industry, academic and government experts to assess the impact of AI on our national security.
"While AI can amplify existing cyb...
Sometimes people mention "expanding the moral circle" as if it's universally good. The US flag is one example of an item whose share of moral concern has expanded and contracted over time.
The US Flag Code states: "The flag represents a living country and is itself considered a living thing." When I was a child, my scout troop taught us that American flags should never touch the ground, and a worn-out flag should be disposed of respectfully by burial (in a wooden box, as if it were a person) or burning (while saluting the flag and reciting the Pledge of Allegiance) and then buryin...
Good point and good fact.
My sense, though, is that if you scratch most "expand the moral circle" statements you find a bit of implicit moral realism. I think generally there's an unspoken "...to be closer to its truly appropriate extent", and that there's an unspoken assumption that there'll be a sensible basis for that extent. Maybe some people are making the statement prima facie though. Could make for an interesting survey.
Is anyone keeping tabs on where AI's actually being deployed in the wild? I feel like I mostly see big-picture stuff (and so this could be a me problem), but there seems to be a proliferation of small actors doing weird stuff. Twitter / X seems to have a lot more AI content, and apparently YouTube comments do now as well (per a conversation I stumbled on while watching some YouTube recreationally; language & content warnings: https://youtu.be/p068t9uc2pk?si=orES1UIoq5qTV5TH&t=2240)
We've decided to add another $5000 match for the last stretch of the Donation Election (you can see how much of it is left on the banner). This is in addition to the first $5000, which we already matched. I know that the word shouldn't be used loosely, but this match seems genuinely counterfactual to me: based on the pace of donations so far, I put low odds (20%) on the full match amount being reached. That is, unless potential donors suddenly see the value of Bulbys exploding from Forum buttons.
The Marginal Funding posts were really valuable this y...