I think suicide prevention might be an underrated cause (I need to fact-check this firmly before my confidence is high):
(1) If you delay someone from committing suicide by just 30 minutes, they will almost always change their mind.
(2) Suicidal people usually spend years between attempts.
(3) After someone "fails" a suicide attempt by changing their mind, they usually feel a lot better emotionally (this excludes attempts that fail for other reasons; only "failure" via changing your mind).
A charity in the UK puts 1 hour of phone time at £44; if we assume 10% of people who call th...
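A minimal back-of-envelope sketch of the kind of calculation this take is gesturing at (the take is cut off, so the intended numbers are unknown). Only the £44/hour figure and the 10% caller share come from the text above; the remaining effect-size parameters are purely illustrative placeholders, not estimates.

```python
# Hedged back-of-envelope sketch: cost per life saved for a crisis line.
# The £44/hour cost and the 10% share of callers at acute risk come from the
# quick take above; the other two parameters are illustrative assumptions only.

cost_per_call_hour_gbp = 44              # from the take: UK charity's cost per hour of phone time
share_callers_at_acute_risk = 0.10       # from the take: assumed 10% of callers
prob_call_averts_attempt = 0.50          # placeholder assumption, not from the take
prob_averted_attempt_saves_life = 0.10   # placeholder assumption, not from the take

cost_per_life_saved = cost_per_call_hour_gbp / (
    share_callers_at_acute_risk
    * prob_call_averts_attempt
    * prob_averted_attempt_saves_life
)

print(f"Cost per life saved: £{cost_per_life_saved:,.0f}")
# With these placeholder numbers: £44 / (0.10 * 0.50 * 0.10) = £8,800 per life saved.
```

The point is only the structure of the estimate; the conclusion is entirely driven by the placeholder probabilities, which would need real data on call outcomes.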
The book "Careless People" starts as a critique of Facebook — a key EA funding source — and unexpectedly lands on AI safety, x-risk, and global institutional failure.
I just finished Sarah Wynn-Williams' recently published book. I had planned to post earlier — mainly about EA’s funding sources — but after reading the surprising epilogue, I now think both the book and the author might deserve even broader attention within EA and longtermist circles.
The early chapters examine the psychology and incentives...
Bloomberg's estimate of Moskovitz's fortune recently dropped by ~60% (from $30B to $11B), as his Meta stake was no longer significant enough to show up in the company's filings.
(But Forbes' estimate didn't change much, staying around $19B.)
POLL: Is it OK to eat honey[1]?
I've appreciated the Honey wars. We've seen the kind of earnest inquiry that makes EA pretty great.
I'm interested to see where the community stands here. I have so much uncertainty that I'm close to the neutral point, but I've updated towards it maybe not being OK - I previously slurped the honey without a thought. What do you think[2]?
This is a non-specific question. "OK" could mean a number of things (you choose). It could mean you think eating honey is "net positive" (my pleasure/health > sma
It's OK to eat honey
I am quite uncertain because I am unsure to what extent a consumption boycott affects production; however, I lean slightly on the disagree side because boycotting animal-based foods is important for:
There seems to be a pattern where I get excited about some potential projects and ideas during an EA Global, fill in the EA Global survey saying that the conference was extremely useful for me, but then those projects never materialise for various reasons. If others relate, I worry that EA conferences are not as useful as feedback surveys suggest.
Marginal returns to work (probably) go up with funding cuts, not down.
It can be demoralizing when a field you’re working in gets funding cuts. Job security goes down, less stuff is happening in your area, and people may pay you less attention since they believe others are doing more important work. But assuming you have job security and mostly make career decisions on inside views (meaning you’re not updating too heavily on funders de-prioritizing your cause area), then your skills are more valuable than they were previously.
Lots of caveats apply of course...
Good point. In a toy model, it'd depend on relative cuts to labor versus non-labor inputs (rough sketch below). Now that I think about it, it probably points towards exiting being better in mission-driven fields. People are more attached to their careers, so the non-labor resources get cut deeply while all the staff try to hold on to their jobs.
Maybe I'd amend it to... if you're willing to switch jobs, then you can benefit from increasing marginal returns in some sub-cause areas. Because maybe there's a sub-cause area where lots of staff are quitting (out of fear the cause area ...
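The rough sketch referenced in the first reply: a Cobb-Douglas production function is my own illustrative assumption (the comment doesn't specify a functional form), but it shows how the marginal product of labor rises or falls depending on whether labor or non-labor inputs are cut more deeply.

```python
# Illustrative toy model (functional form and numbers are assumptions, not from
# the comment): Cobb-Douglas output Y = L^a * K^(1-a), with L = labor and
# K = non-labor inputs. The marginal product of labor is MPL = a * (K/L)^(1-a).

def marginal_product_of_labor(labor: float, non_labor: float, alpha: float = 0.6) -> float:
    """MPL for Y = labor**alpha * non_labor**(1 - alpha)."""
    return alpha * (non_labor / labor) ** (1 - alpha)

baseline = marginal_product_of_labor(labor=100, non_labor=100)

# Scenario 1: non-labor budgets cut hard (-50%) while staff hold on to jobs (-10%).
staff_stay = marginal_product_of_labor(labor=90, non_labor=50)

# Scenario 2: many staff exit (-50%) while non-labor inputs are cut less (-10%).
staff_exit = marginal_product_of_labor(labor=50, non_labor=90)

print(f"{baseline:.2f} {staff_stay:.2f} {staff_exit:.2f}")  # ~0.60 0.47 0.76
# staff_stay < baseline < staff_exit: your marginal returns fall where everyone
# clings to their job while budgets shrink, and rise where others are leaving.
```

In this toy setup, switching into a sub-area that labor is exiting faster than funding is exactly the case where marginal returns to your work go up.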
I’m in a WeChat group initiated by Plant Future and Good Food Fund. It is meant to connect young Chinese students who are interested in promoting vegetarianism. We have a weekly discussion (in English) every Wednesday at 7:30 AM. If you’re interested in joining, please send me a message.
Note that you do need a WeChat account.
Recently I got curious about the situation of animal farming in China. So I asked the popular AI tools (ChatGPT, Gemini, Perplexity) to do some research on this topic. I have put the result into a NotebookLM note here: https://notebooklm.google.com/notebook/071bb8ac-1745-4965-904a-d0afb9437682
If you have resources that you think I should include, please let me know.
The original reports can be found here: https://u.pcloud.link/publink/show?code=kZiW1f5Zf9YhpJUeHqfPVbuv9Afhozu1XSgy
I have also written a short summary.
Recently, various groups successfully lobbied to remove the moratorium on state AI bills, a surprising win given they were competing against substantial investment from big tech (e.g. Google, Meta, Amazon). I think people interested in mitigating catastrophic risks from advanced AI should consider working at these organizations, at least to the extent their skills/interests are applicable. This is both because they could often directly work on substantially helpful things (depending on the role and organization) and because this would yield ...
Also want to shout out @Holly Elmore ⏸️ 🔸 and PauseAI's activism in getting people to call their senators. (You can commend this effort even if you disagree with an ultimate pause goal.) It could be worth following them for similar advocacy opportunities.
A new study in The Lancet estimates that high USAID spending saved over 91 million lives in the past 21 years, and that the cuts will kill 14 million by 2030. They estimate high USAID spending reduced all-cause mortality by 15%, and by 32% in under 5s.
My initial, off-the-cuff reaction is that it seems borderline implausible that USAID spending has reduced under-5 mortality by a third. With so many other factors, like development/growth, government programs, medical innovation not funded by USAID (artesunate came on the scene after 2001!), and 10x-100x more effective aid like Gates/AMF, how could this be?
The biggest under-5 effects caused by USAID might be from malaria/ORS programs, but they usually didn't fund the staff giving the medication, so how much credit are they taking for those? They've clai...
This argument about anti-realism just reinforces my view that effective altruism needs to break apart into sub-movements that clearly state their goals/ontologies. (I'm pro-EA) but it increasingly doesn't make sense to me to call this "effective altruism" and then be vaguely morally agnostic while mostly just being an applied utilitarian group. Even among the utilitarians there is tons of minutiae that actually significantly alters the value estimates of different things.
I really do think we could solve most of this stuff by just making EA an umbrel...
Good news! The 10-year AI moratorium on state legislation has been removed from the budget bill.
The Senate voted 99-1 to strike the provision. Senator Blackburn, who originally supported the moratorium, proposed the amendment to remove it after concluding her compromise exemptions wouldn't work.
https://www.yahoo.com/news/us-senate-strikes-ai-regulation-085758901.html?guccounter=1
Linking this from @Andy Masley's blog:
Consider applying to the Berggruen Prize Essay Competition on the philosophy of consciousness, and donating a portion of your winnings to effective charities
TLDR:
The theme is 'consciousness' and the criteria are very vague. Peter Singer has won it before.
More details on the Berggruen website here.
Matching campaigns get a bad rap in EA circles*, but it's totally reasonable for a donor to be concerned that if they put lots of money into an area, other people won't donate. Matching campaigns preserve the incentive for others to donate, crowding in funding.
* I agree that campaigns claiming you’ll have twice the impact as your donation will be matched are misleading.
Have you read Holden's classic on this topic? It sounds like you are describing what he calls "Influence matching".
Elon Musk recently presented SpaceX's roadmap for establishing a self-sustaining civilisation on Mars (by 2033 lol). Aside from the timeline, I think there may be some important questions to consider with regard to space colonisation and s-risks:
Hi Birk. Thank you for your very in-depth response; I found it very interesting. That's pretty much how I imagined the governance system when I wrote the post. I actually had it as a description like that originally, but I hated the implications for liberalism, so I took a step back and listed requirements instead (which didn't actually help).
The "points of no return" do seem quite contingent, and I'm always sceptical about the tractability of trying to prevent something from happening - usually my approach is: it's probably gonna happen, how do we pr...
On Stepping away from the Forum and "EA"
I'm going to stop posting on the Forum for the foreseeable future[1]. I've learned a lot from reading the Forum as well as participating in it. I hope that other users have learned something from my contributions, even if it's just a sharper understanding of where they're right and I'm wrong! I'm particularly proud of What's in a GWWC Pin? and 5 Historical Case Studies for an EA in Decline.
I'm not deleting the account so if you want to get in touch the best way is probably DM here with an alternative way to stay in c...
And yes, "weird" has negative connotations to most people. Self flagellation once helped highlight areas needing improvement. Now overcorrection has created hesitation among responsible, cautious, and credible people who might otherwise publicly identify as effective altruists. As a result, the label increasingly belongs to those willing to accept high reputational risks or use it opportunistically, weakening the movement’s overa...
Ivan Gayton was formerly mission head at Doctors Without Borders. His interview (60 mins, transcript here) with Elizabeth van Nostrand is full of eye-opening anecdotes; no single one is representative of the whole interview, so it's worth listening to / reading it all. Here's one, on the sheer level of poverty and how giving workers higher wages (even if just $1/day vs the local market rate of $0.25/day "for nine hours on the business end of a shovel") distorted the local economy to the point of completely messing up society:
...[00:06:07] Ivan: I had a re
The funny thing about working with vitamin deficiencies and malnourishment is that you never think it could happen to you. I am autistic, so my diet is bland and always the same... I have scurvy... and vitamin A hypovitaminosis... I literally write papers on issues like this and how we are supposed to fix them. SO MY QUICK TAKE IS "TAKE CARE OF YOUR HEALTH FIRST".
So, I have two possible projects for AI alignment work that I'm debating between focusing on. I'm curious for input on how worthwhile they'd be to pursue or follow up on.
The first is a mechanistic interpretability project. I have previously explored things like truth probes by reproducing the Marks and Tegmark paper and extending it to test whether a cosine-similarity-based linear classifier works as well. It does, but not any better or worse than the difference-of-means method from that paper. Unlike difference of means, however, it can be extended to mu...
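For concreteness, here is a minimal sketch of the two probe styles being compared, run on toy Gaussian data. It is my own reconstruction under simple assumptions (activations already extracted as (n_samples, d_model) arrays with binary true/false labels), not the poster's code, and it elides the details of the Marks and Tegmark setup.

```python
import numpy as np

# Sketch of two truth-probe styles over activation vectors X (n_samples, d_model)
# with labels y (1 = true statement, 0 = false). Both are simplified stand-ins.

def diff_of_means_predict(X, X_train, y_train):
    """Project onto (mean_true - mean_false) and threshold at the midpoint."""
    mu_true = X_train[y_train == 1].mean(axis=0)
    mu_false = X_train[y_train == 0].mean(axis=0)
    direction = mu_true - mu_false
    midpoint = (mu_true + mu_false) / 2
    return ((X - midpoint) @ direction > 0).astype(int)

def cosine_sim_predict(X, X_train, y_train):
    """Classify by which class-mean vector each activation is more aligned with."""
    mu_true = X_train[y_train == 1].mean(axis=0)
    mu_false = X_train[y_train == 0].mean(axis=0)
    def cos(a, b):
        return (a @ b) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b))
    return (cos(X, mu_true) > cos(X, mu_false)).astype(int)

# Toy data: two Gaussian clusters standing in for true/false activations.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.5, 1.0, (200, 64)), rng.normal(-0.5, 1.0, (200, 64))])
y = np.array([1] * 200 + [0] * 200)

for name, predict in [("diff-of-means", diff_of_means_predict), ("cosine-sim", cosine_sim_predict)]:
    accuracy = (predict(X, X, y) == y).mean()
    print(f"{name} accuracy on toy data: {accuracy:.2f}")
```

On real activations the interesting question is the one raised above: whether the two directions agree and how each generalises across datasets, which this toy example cannot show.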
Of course! You make some great points. I’ve been thinking about that tension too: how alignment via persuasion can feel risky, but might be worth exploring if we can constrain it with better emotional scaffolding.
VSPE (the framework I created) is an attempt to formalize those dynamics without relying entirely on AGI goodwill. I agree it’s not obvious yet if that’s possible, but your comments helped clarify where that boundary might be.
I would love to hear how your own experiments go if you test either idea!