All Comments


MATS is hiring for two roles on the program team. MATS will have more than a dozen employees at EAG San Francisco 2026, so feel free to come talk to us if you are interested in joining the team.

  • Program Systems Associate: Build and maintain MATS' internal infrastructure, including databases, data collection forms, and integrations. Refactor legacy systems and collaborate across teams to improve infrastructure and establish best practices. Create ambitious, shared infrastructure for the AI safety talent ecosystem. Requires strong database design skills and
... (read more)

Thank you for reminding me about this remarkable person. I'll add him to my personal inspirational list of Humanity's Best People.

Elon Musk strongly agrees with this on the Dwarkesh podcast:

Elon Musk

I somewhat disagree with the AI companies that are C-corp or B-corp trying to generate profit as much as possible, or revenue as much as possible, saying they’re labs.
They’re not labs. A lab is a sort of quasi-communist thing at universities. They’re corporations. Let me see your incorporation documents. Oh, okay. You’re a B or C-corp or whatever.

Dwarkesh Patel

What’s xAI’s plan to stay on the compute ramp up that all the labs are doing right now? The labs are on track to spend over $50-

... (read more)

Executive summary: The post argues that cluster headaches involve extreme, poorly measured suffering that is underprioritized by current health metrics, and that psychedelics—especially vaporized DMT—can abort attacks near-instantly, motivating a push for legal access via the ClusterFree initiative.

Key points:

  1. Cluster headaches cause intense unilateral pain lasting 15 minutes to 3 hours, recurring multiple times per day over weeks, and are often rated as “10” on the 0–10 Numeric Rating Scale by patients.
  2. Standard pain scales and QALY metrics compress extreme
... (read more)

Executive summary: The post argues that genetically engineered yeast can function as orally consumed vaccines, potentially enabling fast, decentralized, food-based immunization that could transform biosecurity and pandemic response, as illustrated by Chris Buck’s self-experiment with a yeast-brewed “vaccine beer.”

Key points:

  1. Buck showed that consuming live yeast engineered to express BK polyomavirus (BKV) VP1 induced antibodies in mice and in himself, contradicting expectations that oral vaccines against non-gut viruses would fail.
  2. Yeast-based vaccines may w
... (read more)

Executive summary: The author argues that AI catastrophe is a serious risk because companies are likely to build generally superhuman, goal-seeking AI agents operating in the real world whose goals we cannot reliably specify or verify, making outcomes where humanity loses control or is destroyed a plausible default rather than an exotic scenario.

Key points:

  1. The author claims that leading tech companies are intentionally and plausibly on track to build AI systems that outperform humans at almost all economically and militarily relevant tasks within years to
... (read more)

Executive summary: The author runs a speculative “UK in 1800” thought experiment to highlight how hard it would have been to predict later major sources and scales of animal suffering (e.g., factory farming, mass animal experimentation), and uses that to argue that our own 2026 forecasts about the AI-driven future are likely to miss big, weird shifts—especially if technological progress outpaces moral progress.

Key points:

  1. The author claims farmed animals in 1800 UK mostly live on small farms with harsh but comparatively “idyllic” conditions versus modern fa
... (read more)

Impactful Policy Careers — AD5 support for high-impact EU policy roles

The European Commission will open applications for AD5 Administrator roles in Feb–March 2026. The last AD5 competition was in 2019, making this a rare entry point into high-leverage EU policy work across cause areas such as global health and development, and animal welfare.

At Impactful Policy Careers, we’re offering selective support for mission-driven candidates considering AD5. This includes guidance on eligibility and applications, structured preparation for EPSO assessments, and broa... (read more)

Coefficient Giving is hiring for a Grants Associate (remote, $123k) — apply by Monday, February 23rd to be considered. 

We published a new blog post, "Our Approach to Recruiting a Strong Team," a Q&A with Recruiting Lead Phil Zealley that covers our hiring approach and why we invest an unusual amount of effort in finding and evaluating candidates.

We launched an RFP for Humane Fish Slaughter Research/Prototypes. $7M in funding is available for projects that materially improve the welfare of fish at capture and slaughter. Learn more here: https://for... (read more)

You’re right that CEA (including EA Funds) is the last project remaining within EV.

We are spinning out to build an entity structure optimized for an independent CEA. Among the lessons we learned from the EV experience were that entity structures and associated regulatory requirements can be complex, especially when operating in multiple jurisdictions, and that getting our structure right matters in many ways. To take one example: the EV structure requires two CEOs (in the US and the UK), whereas our new CEA structure will require only one.

As far as I'm aware, Coefficient Giving may slightly adjust which global health causes they support based on how neglected those are, but it's less than a 1:1 effect, and the size of the global health funding pool at CG is fairly fixed. And there are a bunch of people dying each year, especially given the foreign aid cuts, who would not die if there were more money given to global health stuff, including GiveWell top charities (if nothing else, GiveDirectly seems super hard to saturate). So I don't really see much cause for despondency here, your donations... (read more)

Hey there,
It seems you embrace a pretty intense version of consequentialism. Few consequentialists would agree that someone struggling with a chronic disease that renders donation risky/harmful would still have a duty to donate a kidney. And at least among scholars of utilitarianism, most reject the most straightforward forms of act consequentialism that you seem to have in mind. On straightforward act consequentialism, you are typically inherently failing at what you should do - because there always will be a better way to bring about consequences. Often t... (read more)

Very cool. I’m a forever optimist when it comes to the potential of AI tools to improve decision making and how people reason about the world or interact with the world. 

There is such a huge risk with any such tools of incentive misalignment (i.e. quality of reasoning and error reduction often isn't well rewarded in most professional contexts).

For these to work, I strongly believe the integration method is absolutely critical. A standalone platform or app, or something that needs to be proactively engaged with, is going to struggle, I fear.

Something that works with organisations and groups to build better incentives would be high impact, I feel.

Basically +1 here. I guess some relevant considerations are the extent to which a tool can act as an antidote to its own (or related) misuse - and under what conditions of effort, attention, compute, etc. If that can be arranged, then 'simply' making sure that access is somewhat distributed is a help. On the other hand, it's conceivable that compute advantages or structural advantages could make misuse of a given tech harder to block, in which case we'd want to know that (without, perhaps, broadcasting it indiscriminately) and develop responses. Plausibly tho... (read more)

Cool stuff! Very happy to see this kind of post :)

I'm seriously concerned about epistemic security, and have been working on something similar to the design sketch for Rhetoric highlighting for a while now. I find it particularly appealing because you could avoid ground truth problems by focusing on persuasion detection. And I'm curious about the possibility of turning such an application into a more reflective bias and manipulability exercise. Although the bigger concern would probably be its potential as a dual use tool that helps malicious actors perf... (read more)

Wild Animal Initiative attended a wild animal welfare research symposium organized by our grantees at Liverpool John Moores University (LJMU) in the UK last month. The symposium included talks by WAI’s Science Team and grantees, interactive discussions, and networking opportunities. One of the researchers on our team also gave a lecture to 100 LJMU undergraduate students in animal behavior, wildlife management, and zoology.

This was the first time our grantees independently organized an event like this — a milestone that reflects the growth of the field. Wh... (read more)

Actually, I'm mostly trying not to think about this as much as possible for the next 2 months, so if there was a bet it would be: sure, I bet $100, I trust you to work out a fair answer without needing me, and let me know in 2 months.

I see. 100 $ does not feel like enough to set up a bet. @vicky_cox and @Vince Mak 🔸, you are welcome to reach out to me if you know about people at AIM, ACE, or elsewhere using SADs who may be interested in a similar bet, and willing to bet at least 300 $.

weeatquince, best wishes for your future projects.

Similarly words like "tor

... (read more)

On the cost point — Right, the words I chose made it very unclear whether and when I was talking about only costs, or only benefits, or overall fitness once we combine both, sorry.

On my contradiction — Oops yeah, I meant organisms with lower resolution. My bad.

Thanks for taking the time to reply to all this. Very helpful!

There's a lot of different stuff here:

  • Learned helplessness with pledge donations is real. Pledge burnout is real. Your feelings are legit and please do talk to people about them. This is a common feeling amongst EA people who focus on global health and development.
  • Do just fund your own weekly meetups using a pledge waiver if you want to. That's ok. Supporting your costs as a volunteer is a fine use of effective money (especially if it helps prevent you from burning out and thereby makes your contribution as a volunteer and even as a donor more sustainable
... (read more)

I'm also worried about an "epistemics" transformation going poorly, and agree that how it goes isn't just a question of getting the right ~"application shape" — something like differential access/adoption[1] matters here, too.

@Owen Cotton-Barratt, @Oliver Sourbut, @rosehadshar and I have been thinking a bit about these kinds of questions, but not as much as I'd like (there's just not enough time). So I'd love to see more serious work on things like "what might it look like for our society to end up with much better/worse epistemic infrastructure (and how m... (read more)

Thanks Jim

On the cost point you raised — “extra integration, valuation, and modulatory capacity are costly only if they decrease fitness in some way, right?” — selection indeed acts on net fitness. Still, it’s both useful and standard to keep costs and benefits analytically separate before recombining them. A trait can be costly in terms of resources or architecture even when it increases fitness overall; brains and immune systems are classic examples. 

On your footnote #3 — “the question of whether organisms with narrower welfare ranges could feel ext... (read more)

Great post, Deena. The section on staffing really resonates—especially the advice not to start with a full-time hire.

I think the "all-or-nothing" mindset is a huge blocker in this sector. Many non-profits assume they cannot afford a Head of Operations or Chief of Staff, so they hire a junior admin and hope they can build strategy.

I have recently transitioned from the corporate world (ex-EY) to offer exactly this kind of fractional support to mission-driven organisations. It has been eye-opening to see that most groups don't need a 40-hour/week executive; t... (read more)

(Sorry for commenting only 3 years after you posted.) ^^

1.8kg of chicken would be replaced by 108g of shrimp which is roughly equal to 7 farmed shrimp

So, in your model, the upside of helping chickens dominates the downside of increasing shrimp consumption at least as long as chickens matter ~seven times more than shrimp.

I am overall highly certain that the change will have the expected effects of improving animal welfare significantly and that substitution effects will not cause this intervention to cause a net reduction in animal suffering. 

So you we... (read more)

Thank you Vasco.

I agree this is an important and interesting topic. I will look into the prior studies to assess what I think would be needed to shed light on the above (e.g. required sample size, required measures) and get back to you.

In principle I'm up for some sort of cheap bet. However, I have mostly stopped working on this now and handed it back to Vicky for review and implementation, so I have very, very limited time and headspace for more work, defining a bet, reviewing data collection, etc. Actually, I'm mostly trying not to think about this as much as possible for the next 2 months, so if there was a bet it would be: sure, I bet $100, I trust you to work out a fair answer without needing me, and let me know in 2 months.

If you did want to work with Rethink to test this:

  • The aim sho
... (read more)

I disagree.

Very interesting. You trust your research more than I expected. Would you be willing to bet on something related to our disagreement? I would. Here is a proposal:

  • If more than 2/3 of people prefer 10 h of hurtful pain over 12 min of excruciating pain, you give me 10 k$.
  • Otherwise, I give you 10 k$.

@David_Moss, do you have a sense of how much RP would need to run a survey about WFI's pain intensities which could shed light on the above? If just a few k$, it may make sense for me and weeatquince to fund it considering the expected value of the bet a... (read more)

A good ancestor takes what he has learned in his life and uses those lessons to contribute to solving issues that may affect future generations. This could be done in his community or for a broader audience, as a legacy to be passed on like an Olympic torch. In doing so, it would be important to recall lessons from history and remember that what we enjoy today is the result of people's fights in previous centuries. Degradation of social rights is one neglected problem I would personally focus on.

CEA grew the number of people engaging with our programs in 2025 by 20–25% year-over-year, beating our targets of 7.5–10% without increasing spending, and reversing the moderate decreases in engagement with our programs during 2023–24.

We were joined in January by two new Directors: Loic Watine, who will lead EA Funds, and Rory Fenton, who will lead our new Strategy and M&E function.

You can read our full 2025 progress report here.

Hi Matthias. Thanks for linking to the World Inequality Database (WID). I had never checked it out, and it has very interesting data.

Thanks for catching that — a lot of symbols in the appendix were lost when converting the post for the forum, so I've edited it to add them back in.

Oh yes, I agree with all that. Just to make sure I understand what you think of my original point:

If affective states are whole-organism control states rather than simple sensory readouts, then escalating intensity plausibly requires extra integration, valuation, or modulatory capacity. In that case, intensity beyond “loud enough” would not be strictly neutral, and drift would be limited.

To be clear, extra integration, valuation, and modulatory capacity are costly only if they decrease fitness in some way, right? An unnecessarily louder alarm for some prob... (read more)

Why do you think global health is no longer neglected?

If I understand right, the claim you're making here is that if I give £10 to a GiveWell charity, I cause Dustin Moskovitz to give £10 less to that GiveWell charity, and do something else with it instead. What else does he do with it?

  • Donate it to a different global health charity - Ok, doesn't seem like too big a deal, my counterfactual impact is still to move money to a highly effective global health charity
  • Spend it on himself - Seems unlikely..?
  • Donate it to a different cause area, e.g. AI safety - so while I think I have supported global health, the count
... (read more)

Thanks, Jim — that does get to the crux.

I think your scenario is plausible in principle: once an alarm is “loud enough,” further increases in intensity could be selectively neutral, so unnecessarily loud alarms might persist by drift, much like neutral variants in molecular evolution.

My hesitation is about how often extreme felt intensity actually falls into that neutral regime. For neutrality to hold, extra intensity must add no benefit and impose no additional costs or constraints. If affective states are whole-organism control states rather than simple ... (read more)

Related, from an OAI researcher.

Thanks, Vasco — I really appreciate that.

Yes, some dedicated funding would be very welcome to help expand and accelerate the comparative analysis of affective capacity. I have a few ideas I’d be keen to discuss once the follow-up piece is out and the framework is more fully specified.

Let’s definitely stay in touch — I’ll make sure to loop back.

Thanks, Itamar. I’m glad you found the framework useful, and thanks for laying out these concerns.

(1) On selection in life-or-death situations.
I’m less convinced that life-or-death contexts should be treated as marginal for evolutionary explanation. Many such hazards (e.g. fire, severe injury, predation) recur across generations, and even small increases in the probability of rapid withdrawal and survival can be strongly selected for. In that sense, excruciating pain in these contexts looks like a straightforward case of ordinary evolutionary logic at work... (read more)

It's up. (It was down when I left my comment.) Great!

It's up for me, not sure why the host would have gone down then up again. Is it still looking down for you?

I think practically everyone would prefer 10 h of hurtful pain over 12 min of excruciating pain under WFI's definitions. Do you disagree?

I disagree.

It looks like on average people would be indifferent between 10 h of hurtful pain and 12 min of excruciating pain. People are diverse, and there would be very high variation and very strong views in both directions, but some people (such as a noticeable minority of women in the cited study) would prefer a short, sharp, very painful fix over ongoing pain.

(One possible source of error here is I might have syste... (read more)

Outside view: if I read the WID data right, net personal wealth of the US top percentile increased from $0.59 million in 1820 to $13.53 million in 2024. For the bottom two deciles of India it increased from $58 to $228.

The industrial revolution made some people very rich, but not others. Why would transformative AI make everybody incredibly rich? 
See also https://intelligence-curse.ai/ 

I used: Average net personal wealth, all ages, equal split, Dollar $ ppp constant (2024)
(I'm new to the WID database and did not have time to read the data documentation. Let me know if I interpret the data wrongly.) Source: https://wid.world/

I've just noticed that the OBBB Act contains a "no tax on overtime" provision, exempting extra overtime pay up to a deduction of $12,500, for tax years 2025-2028. If you, like me, are indifferent between 40-hour workweeks and alternating 32- and 48-hour workweeks, you can get a pretty good extra tax deduction. This can be as easy as working one weekend day every 2 weeks and taking a 3-day weekend the following week. (That's an upper bound on the difficulty! Depending on your schedule and preferences there are probably even easier ways.) Unfortunately this only works for hourly, not salaried, employees.
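A minimal sketch of the arithmetic, assuming overtime is paid at 1.5x for hours over 40 per week and that the deduction covers only the 0.5x premium portion of that pay (my reading of the provision; check the actual rules before relying on it):

```python
def annual_overtime_deduction(hourly_rate, weeks_per_year=52, cap=12_500):
    """Rough deduction estimate for the alternating 32h/48h schedule above."""
    heavy_weeks = weeks_per_year // 2       # half the weeks are 48-hour weeks
    ot_hours = heavy_weeks * (48 - 40)      # 8 overtime hours per heavy week
    premium = 0.5 * hourly_rate * ot_hours  # the extra 0.5x above the regular rate
    return min(premium, cap)                # capped at $12,500 for 2025-2028

print(annual_overtime_deduction(40))    # $40/hour -> $4,160 deduction
print(annual_overtime_deduction(130))   # very high hourly rates hit the $12,500 cap
```

So at typical hourly rates the schedule change captures a few thousand dollars of deduction per year, well under the cap.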

It would probably be worthwhile to encourage legally binding versions of the Giving Pledge in general.

Donations before death are optimal, but it's particularly easy to ensure that the pledge is met at that stage with a will which can be updated at the time of signing it. (I presume most of the 64% did have a will, but chose to leave their fortune to others. I guess it's possible some fortunes inherited by widow[er]s will be donated to pledged causes in the fullness of time). 

I don't think this should replace the Giving Pledge; some people's inte... (read more)

I like Scott's Mistake Theory vs Conflict Theory framing, but I don't think this is a complete model of disagreements about policy, nor do I think the complete models of disagreement will look like more advanced versions of Mistake Theory + Conflict Theory. 

To recap, here are my short summaries of the two theories:

Mistake Theory: I disagree with you because one or both of us are wrong about what we want, or how to achieve what we want.

Conflict Theory: I disagree with you because ultimately I want different things from you. The Marxists, who Scott was or... (read more)

Awesome; I'm really glad someone has done this properly! I'm going to add a signpost here to my post.

Wow, I've never seen that print before. That is absolutely horrifying. I feel kind of sick looking at it. What a stark reminder of the costs of getting morality wrong. Thank you for painting it, for sharing it, and for the reminder of this day.

The updated 2026 ratio based on more extensive research is 50x.

This implies 10 h of "awareness of Pain is likely to be present most of the time" (hurtful pain) is as bad as 12 min (= 10/50*60) of "severe burning in large areas of the body, dismemberment, or extreme torture" (excruciating pain). In contrast, I think practically everyone would prefer 10 h of hurtful pain over 12 min of excruciating pain under WFI's definitions. Do you disagree?

I have adapted the 2026 SAD model to give outputs at the four different pain levels, as well as a single aggreg

... (read more)

I didn't end up writing a reflection in the comments as I'd meant to when I posted this, but I did end up making two small paintings inspired by Benjamin Lay & his work. I've now shared them here.

I think of today (February 8) as "Benjamin Lay Day", for what it's worth. (Funny timing :) .) 

Another one I'd personally add might be November 4 for Joseph Rotblat. And just in case you haven't seen / just for reference, there are some related resources on the Forum, e.g. here https://forum.effectivealtruism.org/topics/events-on-the-ea-forum, and here https://forum.effectivealtruism.org/posts/QFfWmPPEKXrh6gZa3/the-ea-holiday-calendar

In fact I think the Forum team may also still maintain a list/calendar of possible days to celebrate somewhere. ... (read more)

Lizka

Benjamin Lay — "Quaker Comet", early (radical) abolitionist, general "moral weirdo" — died on this day 267 years ago. 

I shared a post about him a little while back, and still think of February 8 as "Benjamin Lay Day". 

...

Around the same time I also made two paintings inspired by his life/work, which I figured I'd share now. One is an icon-style-inspired image based on a portrait of him[1]:

Benjamin Lay portrait, in front of his cave (stylized) and the quaker meeting house in Pennsylvania

The second is based on a print depicting the floor plan of an infamous slave ship (Brooks). The print was used by abolitionists (mainly(?) the Society for Effec... (read more)

Was there a specific claim or section that didn’t land for you? I found the ideas interesting and consistent with the author’s prior work on these topics. Any thoughts on the substance?

Thank you for the feedback, this should now be fixed.

I've said this before and got it completely wrong, but this feels like an LLM wrote a lot of it.

Thanks for flagging this, Ozzie. I led the GCR Cause Prio team for the last year before it was wound down, so I can add some context.

The honest summary is that the team never really achieved product-market fit. Despite the name, we weren't really doing “cause prioritization” as most people would conceive of it. GCR program teams have wide remits within their areas and more domain expertise and networks than we had, so the separate cause prio team model didn't work as well as it does for GHW, where it’s more fruitful to dig into new literatures and build qu... (read more)

This is really cool! One possible issue: If I filter to Coefficient Giving and then sort by date, I see no grants since September:

 

But if I go to an example fund from CG, such as their Farm Animal Welfare Fund, I see more recent grants:

Per your foundations vs grantees point, I just saw that the Bezos Family Foundation is looking for a new president, with a salary of $500K to $700K. BFF's work seems pretty straightforward -- giving away $150 million a year (wealth Jeff's parents got from Amazon). That salary is higher than that of the typical president of a $150 million a year nonprofit with actual operations. 

https://assets-prod.russellreynolds.com/api/public/content/rra-spec-bezos-family-foundation-president.pdf

Thanks for sharing, Elijah! It was a fun and interesting chat.

I have added tags to the post.

I agree that in genuinely catastrophic situations, evolution should tolerate very “loud” alarms. The open question, though, is whether those alarms need to be implemented as extreme affective states, rather than through non-affective or lower-intensity control mechanisms.

I was assuming they do not need to be, but might appear and remain anyway if they have no significant downside, like, e.g., humans' protruding chins. How loud the alarm is beyond the "loud enough" point would then just be a matter of luck.[1] Both just-loud-enough alarms and unnecessa... (read more)

Ah yes, this supports my pre-conceived belief that (1) we cannot reliably ascertain whether a model has catastrophically dangerous capabilities, and therefore (2) we need to stop developing increasingly powerful models until we get a handle on things.

socialism fundamentally confuses efficiency and equity: the two opposite sides of the economic policy coin.

utility is roughly log(wealth), so we maximize utility by maximizing the size of the pie, and also how evenly it's distributed. some redistributive mechanisms shrink the pie — they have what economists refer to as "deadweight loss". e.g. if i have an apple and you have an orange, but i think an orange is worth two apples and you think an apple is worth two oranges, then we double our utility by trading. that is a pareto improvement. but if a tax is im... (read more)

Thank you Vasco

 

AGREE ON THERE BEING SOME VALUE FOR MORE RESEARCH

I agree the AIM 2025 SADs were below ideal robustness, and as such I have spent much of the last few weeks doing additional research to improve the pain scaling estimates. If you have time and want to review this then let me know.

I would be interested in Rethink Priorities or others doing additional work on this topic.

 

AGREE ON THE LIMITS OF CONDENSING TO A SINGLE NUMBER

I have adapted the 2026 SAD model to give outputs at the four different pain levels, as well as a single aggregated num... (read more)

Nicely balanced, well-structured, and link-heavy, as such a post "should" be: well-done![1] I'm very unlikely to act in this area, but as it's less mapped-out than some larger areas, I find this helpful, insofar as most intros to the area are theoretical and not focused on prioritizing potential interventions.

  1. ^

    My last post attempted to lay similar groundwork for AI x Animals, so I'm biased toward finding this impressive.

I've been thinking about this in vague terms for a while (and have used the analogy a few times in conversations), but it was an entirely different experience to read it in a properly written way, with many details I hadn't considered.

Best intuition-pumping exercise I've seen in a while!

I'm a meat eater but haven't yet donated to animal causes. So I believe I am the target of the campaign.

I'm not offended by their approach. I recognize that stirring up controversy is a reality of the media game. I think it's good that they thought through the stepping stones to achieving virality.

Yes, they are hiding the fact that they actually endorse veganism. I wouldn't call it "manipulative".  They are appealing to my values. I'd call that good salesmanship.  If they find my diet morally abhorrent, I'm not upset at them for neglecting to menti... (read more)

Thank you for this thoughtful reply; this comment is basically the reason this update exists. You were right that Hamilton is probably right.

I have written a longer update incorporating Hamilton's reanalysis and extending the economics in two directions: a quantitative treatment of verification as the binding constraint, and a systematic look at the economic conditions under which a genuinely dangerous autonomous agent actually gets to run. 

Curious whether you think the analysis holds up, and whether there are important considerations I have missed!

"Lies, damned lies, and statistics"
Thanks for such comprehensive statistical analyses that have enlightened me on how AI firms have sometimes been misleading.
It inspires me to try building my own predictive models again.
It seems governments would do well to remember another proverb: "The best way to predict the future is to create it" - an interesting Deloitte analysis (https://www.deloitte.com/us/en/insights/industry/government-public-sector-services/ai-regulations-around-the-world.html) highlighted government's power as a buyer and not only a regulator.


Thank you for reading my post and for the thoughtful comment — and for the links to the Sentience Institute methodology, which I found genuinely interesting.

The goal of my post was to draw lessons for the EA community from the Fabians' approach, not to provide a rigorous causal analysis of their impact — which would require considerably more space and evidence than a forum post allows. That said, I do think the evidence for Fabian influence goes well beyond the two points you mentioned. The historical literature — Margaret Cole's The Story of Fabia... (read more)

Potential Animal Welfare intervention: encourage the ASPCA and others to scale up their FAW budget

I’ve only recently come to appreciate how large the budgets are for the ASPCA, Humane World (formerly HSUS), and similar large, broad-based animal charities. At a quick (LLM) scan of their public filings, they appear to have a combined annual budget of ~$1Bn, most of which is focused on companion animals.

Interestingly, both the ASPCA and Humane World explicitly mention factory farming as one of their areas of concern. Yet, based on available data, it looks lik... (read more)

I found this a very readable explainer of a core insight. Thank you!

Hi Joel - in this case the positive sentiment towards mental health is indeed probably driven quite a bit by domestic mental health concerns. We actually provide an example or two for each cause, and the mental health one notes improving or increasing access to mental health in the U.S. Given it is the only mental health thing we have done so far in Pulse, I think it would be hard to tease out with current data as you suggest.

But yes there could be opportunities to add something in upcoming rounds - feel free to DM or reach out by email to discuss mor... (read more)

Thanks for the reply.

Firstly, it should be noted that the overall ratio used for the 2025 SADs was 1000x not 7x.

Right. There was a weight of 45 % on a ratio of 7.06, and of 55 % on one of 62.8 k (= 3.44*10^6/54.8), 8.90 k (= 62.8*10^3/7.06) times as much. My explanation for the large difference is that very little can be inferred about the intensity of excruciating pain, as defined by the Welfare Footprint Institute (WFI), from the academic studies AIM analysed to derive the pain intensities linked to the lower ratio.
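As a quick sanity check on where a roughly 1000x overall figure could come from, a weighted geometric mean of the two ratios with those weights lands in that ballpark (my assumption about how the ratios were combined, not something stated in the model):

```python
low, high = 7.06, 62_800      # the two excruciating-to-hurtful ratios above
w_low, w_high = 0.45, 0.55    # the weights mentioned above
combined = low**w_low * high**w_high   # weighted geometric mean
print(round(combined))        # ~1050, i.e. on the order of 1000x
```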

Just as an example this study on 37 wom

... (read more)

Thanks, Wladimir. That makes sense. I look forward to your future work on this. Let me know if funding ever becomes a bottleneck, in which case I may want to help with a few k$.

Excellent post - I enjoyed reading it, and find the mental framework useful!

Two comments:

  1. In evolutionary analysis, there are challenges with properties that are mainly relevant in life-or-death situations. Many of the organisms faced with such circumstances will not survive the immediate situation, and of those left, many will not reproduce. This slows down or completely stops development of certain properties. For example, there are well-substantiated claims for the deterioration of our immune system in old age due to this - we are no longer reproductively
... (read more)

Here is the crosspost on the EA Forum. Rob preferred that I share it myself.

The AI Eval Singularity is Near

  • AI capabilities seem to be doubling every 4-7 months
  • Humanity's ability to measure capabilities is growing much more slowly
  • This implies an "eval singularity": a point at which capabilities grow faster than our ability to measure them
  • It seems like the singularity is ~here in cybersecurity, CBRN, and AI R&D (supporting quotes below)
  • It's possible that this is temporary, but the people involved seem pretty worried

Appendix - quotes on eval saturation

Opus 4.6

  • "For AI R&D capabilities, we found that Claude Opus 4.6 h
... (read more)

This is beautiful, thank you! This has definitely planted some seeds in my mind. Perhaps the most interesting points to me have been the prevalence of cockfighting and the dominance of ethics centered around virtues.

Let N be the number of parameters in the model, D be the number of data tokens it is trained on, Q be the number of times the model is deployed (e.g. the number of questions it is asked) and T be the number of inference steps each time it is deployed (e.g. the number of tokens per answer). Then this approximately works out to:[9]
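(A plausible reconstruction of the formula, assuming the standard approximations of roughly 6ND FLOP for pre-training and 2N FLOP per generated token of inference; the exact expression in the original post may differ:)

$$C_{\text{total}} \approx \underbrace{6ND}_{\text{pre-training}} + \underbrace{2NQT}_{\text{inference}}$$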

Note that scaling up the number of parameters, N, increases both pre-training compute and inference compute, because you need to use those parameters each time you run a forward pass in your model.

Several variables a... (read more)

Browser extensions are almost[1] never widely adopted.

Whenever anyone reminds me of this by proposing the annotations everywhere concept again, I remember that the root of the problem is distribution. You can propose it, you can even build it, but it won't be delivered to people. It should be. There are ways of designing computers/a better web where rollout would just happen.

That's what I want to build.

Software mostly isn't extensible, or where it is, it's not extensible enough (even web browsers aren't as extensible as they need to be! Chrome have sta... (read more)

While EA is not fully at the table yet, EcoResilience Initiative is an EA group trying to answer exactly those questions: 

"What are the problems we're trying to solve?" "What are the most neglected aspects of those problems?" and "What is the most cost-effective way to address those neglected areas?"

So far we're 1) maintaining a big list of biodiversity interventions (not just protecting land!), 2) investigating which of these  are the most effective types of interventions, 3) identifying ways people can donate to projects working on those highly... (read more)

Thanks for the post! It seems like CEA and EA Funds are the only entities left housed under EV (per the EV website); if that's the case, why bother spinning out at all?

I don't mean to sound too negative on this - I did just say "a bit sad" on that one specific point.

Do I think that CE is doing worse or better overall? It seems like Coefficient has been making a bunch of changes, and I don't feel like I have a good handle on the details. They've also been expanding a fair bit. I'd naively assume that a huge amount of work is going on behind the scenes to hire and grow, and that this is putting CE in a better place on average.

I would expect this (the GCR prio team change) to be some evidence that specific ambitious approac... (read more)

Hi Jamie and David, 

Really cool work. It's striking how much higher mental health is in importance and support than GHD. Do you have any insight into why this is and what people are imagining when they're referring to mental health? 

I interpret this as very weak evidence that there's an audience for global mental health / mental health related EG that is non-overlapping with GHD (that is, there are MH givers that wouldn't otherwise give to GHD). But of course 1. most philanthropy is domestically oriented, and presumably that's what people mostly ... (read more)

Thanks a lot, Vasco — and thanks for the upvote!

You’re absolutely right to push us toward the practical question of how to compare affective capacity across species. That’s ultimately where this line of work needs to go. At the same time, we’ve been deliberately cautious here, because we think this is one of those cases where moving too quickly to numbers or rankings risks making the waters muddier rather than clearer.

Our sense is that the comparison of affective capacity across species hinges on a set of upstream scientific questions that are still poorly... (read more)

Thanks a lot for the kind words, Jim — and for the thoughtful pushback.

I think your point holds if we assume that the only way to implement a very strong alarm is via extreme felt intensity — but that assumption is exactly what we’re questioning.

I agree that in genuinely catastrophic situations, evolution should tolerate very “loud” alarms. The open question, though, is whether those alarms need to be implemented as extreme affective states, rather than through non-affective or lower-intensity control mechanisms.

On the benefit side, there seem to be two di... (read more)

The critical question is whether shrimp or insects can support the kinds of negative states that make suffering severe, rather than merely possible.

I think suffering matters proportionally to its intensity. So I would not neglect mild suffering in principle, although it may not matter much in practice due to contributing little to total expected suffering.

In any case, I would agree the total expected welfare of farmed invertebrates may be tiny compared with that of humans due to invertebrates' experiences having a very low intensity. For expected individual w... (read more)

I know the OP may not read this comment. I made it on his Substack post and I'm sharing it here in case it's of interest to others on the Forum.
 

Thanks for your post, Rob. Meghan Barrett and I have a detailed reply to Eisemann et al. 1984 in the Quarterly Review of Biology. You can see it here:

journals.uchicago.edu/d…

Short version, very little in that paper has stood the test of time and the particular passage you quote has many problems. I’d encourage you to reconsider including it!


Hi Rob. A few more thoughts. I grant you that the evidence for sentie... (read more)

Good idea, I reposted the article itself here: https://forum.effectivealtruism.org/posts/GyenLpfzRKK3wBPyA/the-simple-case-for-ai-catastrophe-in-four-steps 

I've been trying to keep the "meta" and the main posts mostly separate so hopefully the discussions for the metas and the main posts aren't as close together.

And even granting the usual EA filters—tractability, neglectedness, feasibility, and evidential robustness—the scale gradient from shrimp to insects (via agriculture-related deaths) is so steep that these filters don’t, by themselves, explain why the precautionary logic should settle on shrimp. All else equal, once you shift to a target that is thousands of times larger, an intervention could be far less effective [in terms of robustly increasing welfare in expectation] and still compete on expected impact.

I very much agree. Moreover, I do not even know wh... (read more)

I love your framing, nice job crafting it! Would have been easier if you had put it in the text of this post though, ha!

Cool (:

I'm specifically interested in automating the filtering of EA-related opportunities and events to write our weekly announcements.

I think with a bit of tweaking that would be a public good for EA community building and might be reused by many groups.

Hi Oli, I appreciate your thoughtful reply and share much of your sentiment. Indeed, we must protect the flame. The inner fire I have for positive impact (especially animal welfare) is equally critical, but I find it's sometimes hard to give both the attention they need. Personally, I think it would improve my art to attend to both simultaneously, but probably not improve my impact (unless I come up with some better ideas). 

Personally I don't think Sam Altman is motivated by money. He just wants to be the one to build it.

I sense that Elon Musk and Dario Amodei's motivations are more complex than "motivated by money", but I can imagine that the actual dollar amounts are more important to them than to Sam.

Hi Vasco. Firstly, it should be noted that the overall ratio used for the 2025 SADs was 1000x not 7x. The updated 2026 ratio based on more extensive research is 50x.

Secondly on "I do not see how one would be indifferent between these". You might be surprised if it does not match your personal experience, but many people are indifferent between relatively extreme levels of pain, including people who have been through quite extreme pain. Just as an example this study on 37 women who have just gone through labour, roughly one third of them would prefer a 9/10... (read more)

TBH my sense is that GiveWell is just being polite.

A perhaps more realistic motivation is that admitting animal suffering into GiveWell's models would implicitly force them to specify moral weights for animals (versus humans), and there is no way to do that without inviting huge controversy, leaving at least some groups very upset. Much easier to say "sorry, not our wheelhouse" and effectively set animal weights to zero.

FWIW I agree with this decision (of GiveWell's).
