
Quick Summary

Current AI systems display severe speciesist biases, and the industries that harm animals are already decades ahead of the animal rights movement, both in utilising existing AI tools and in developing their own.

Our newly launched nonprofit Open Paws is dedicated to building open source AI specifically for the animal rights movement, providing free technical support to nonprofits and working with major AI labs to ensure that the future of AI benefits all sentient beings.

We are currently seeking funding and estimate that we would save 20-70 animal lives per dollar raised, making Open Paws approximately 5-17 times more cost-effective than the average Animal Charity Evaluators recommended charity.

Further details can be found in our Pitch Video and Funding Proposal.

Research shows AI is speciesist

Current AI systems like ChatGPT have rapidly gained popularity, but they exhibit a concerning bias against farmed animals.

"The more an animal species is classified as a farmed animal (in a western sense), the more GPT-3 tends to produce outputs that are related to violence against the respective animals." 

Speciesism, or discrimination based on species, can undermine efforts to protect animals and to drive positive dietary change, and it correlates with other prejudices such as racism and sexism.

Sources:

AI is Driving Animal Exploitation

The industries that harm animals not only benefit more from these speciesist systems than our movement does; they are also heavily invested in developing their own custom AI to exploit more animals more efficiently.

McDonald's has its own AI lab that automates its digital menu boards to increase sales; JBS (a large meat-processing company) has proprietary AI for sorting carcasses in slaughterhouses; and factory farms use specialised AI to reduce their operating costs.

In contrast, animal rights organizations are massively lagging behind in both adopting and developing contemporary AI tools.

Soon, AI will be smarter than humans

"If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task [is] estimated [by a survey of thousands of AI experts] at 10% by 2027, and 50% by 2047." 

If the superintelligent AIs of the future retain the speciesist biases of today's systems, animal exploitation could become permanently entrenched, making animal liberation impossible to achieve.

Sources:

Our Initial Traction

Here's a small sample of our initial achievements since launching in January 2024:

Won the Jury award during pitch night for ProVeg's Kickstarting for Good nonprofit incubator. 

Over 300 individuals submitted 170+ unique nonprofit ideas to Kickstarting for Good, with 9 selected for the program and 6 reaching pitch night; our nonprofit was ranked first by animal rights leaders, judged on potential animal impact, cost-effectiveness, team strength, and the neglectedness of our issue.

Attracted 75 volunteers within one week of launch. 

Roughly 40% were ML and AI developers and 60% were non-technical animal advocates. The quick and varied response to Open Paws shows the animal rights movement's readiness to embrace AI, and suggests that its wealth of skilled talent and data gives it a competitive advantage that can help it contend with better-resourced adversaries.

Imagine an AI activist that never sleeps

Imagine an AI trained on the entire collective knowledge of the animal rights movement.

Such a system would be extremely effective at crafting highly personalised and persuasive messages on animal issues.

An AI designed for animal advocacy could also be used to generate unique activist email and petition templates, improve donor communications, provide mentorship for vegan challenges through chatbots, automate replies to comments and emails, write blog articles, craft social media captions, predict the success of digital advertisements, assist political activists in drafting plans to shift funding from harmful industries to animal-friendly ones, and more.

This open-source AI could significantly increase the impact and cost-effectiveness of the entire animal rights movement.
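
As a concrete illustration, here is a minimal sketch of the simplest version of this idea using an off-the-shelf open-weight model via the Hugging Face transformers library. The model name and prompt are illustrative placeholders, not Open Paws' actual system; a model fine-tuned on the movement's own knowledge base would slot into the same pattern.

```python
# A minimal sketch, not Open Paws' actual model: draft an advocacy email with an
# off-the-shelf open-weight instruction-tuned model. The model name below is an
# illustrative placeholder.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder open-weight model
)

prompt = (
    "Draft a short, respectful email to a local council member asking them to "
    "support plant-based default menus in public schools."
)

result = generator(prompt, max_new_tokens=300, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```

Everything from petition templates to donor-communication drafts is essentially this same pattern with different prompts and fine-tuning data.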

Potential Impact for Animals

Animal Charity Evaluators estimates 4,056 animals are saved per $1,000 donated to one of their recommended charities and 7 animals are saved per $1,000 donated to an animal shelter.

Farmed Animal Funders estimated $200 million was donated to farmed animal causes in 2021, whilst the Animal Agriculture Alliance estimated it was $800 million in 2022.

Averaging each pair of estimates gives 2,031.5 animals saved per $1,000 donated and $500 million donated to animal causes per year.

That means even if our AI only helped 10% of animal charities become 10% more effective, it would save an additional 10 million animals per year.

Based on these calculations, we would save 20-70 animal lives per dollar spent, making Open Paws approximately 5-17 times more cost-effective than the average ACE-recommended charity.
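
For transparency, the arithmetic behind these figures can be reproduced in a few lines. This back-of-envelope sketch uses only the estimates quoted above and assumes the 10% of charities that adopt the tool receive a proportional share of total donations:

```python
# Back-of-envelope reproduction of the figures above. All inputs come from the
# cited estimates; the 10% adoption and 10% effectiveness figures are this
# post's stated assumptions.
animals_per_1000_usd = (4056 + 7) / 2        # blended ACE / shelter estimate -> 2,031.5
annual_donations_usd = (200e6 + 800e6) / 2   # blended FAF / AAA estimate -> $500 million

animals_saved_now = annual_donations_usd / 1000 * animals_per_1000_usd
additional_saved = 0.10 * 0.10 * animals_saved_now
print(f"Additional animals saved per year: {additional_saved:,.0f}")  # ~10.2 million

# The stated 20-70 lives per dollar, compared with ACE's ~4.06 lives per dollar:
ace_lives_per_dollar = 4056 / 1000
print(f"{20 / ace_lives_per_dollar:.0f}x to {70 / ace_lives_per_dollar:.0f}x more cost-effective")
```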

Sources:

This Estimate Is Conservative

Generic AI tools already improve employee productivity by 66%, so 10% is a conservative estimate of how much more effective our AI would make animal rights organisations.

Likewise, 10% adoption by animal rights organisations is a conservative estimate, given the tool's free access, the training and support we will provide, and advocates' increasing familiarity with similar technologies.

This estimate focuses solely on the initial development and release of our open-source AI. 

However, the broader impact will be larger still, encompassing additional tools built on our AI, major AI labs influenced to adopt animal alignment, and greater public awareness of speciesism in AI.

Source:

Frequently Asked Questions

Shouldn't organisations first learn to use regular AI tools effectively? 

Specialised AI tools for animal advocacy are crucial, as generic AI can produce suboptimal or speciesist content, especially in automated and external applications like chatbots. Whilst training organisations to use existing AI tools for internal use cases has many short-term benefits (such as improving organisational efficiency), customised AI specifically designed for animal advocacy will have the greatest medium- to long-term impact.

How can a smaller nonprofit like Open Paws keep up with giants like OpenAI in AI development? 

Focusing on specific areas like animal advocacy allows for the development of powerful AI models with smaller budgets and datasets, offering tailored solutions without needing vast resources. Currently and historically, specialised AI systems have tended to outperform their generic counterparts in specific tasks. However, our approach doesn't rely solely on creating superior AI tools for animal advocacy. We will also focus on influencing the development of AI more broadly and training non-technical animal advocates on how to use AI, both of which have enormous potential for impact, regardless of whether specialised or generic systems perform best in the future.

Why would animal advocates have any more success in lobbying AI labs than any other group? 

Animal advocacy groups have several advantages in advocating for change in AI, such as shared connections within the Effective Altruism movement and the AI industry. Reducing speciesism and harmfulness in AI models also aligns with standard AI safety and ethics concerns, especially since speciesist biases could prove risky for humans if AI applied the same logic to us.

Can't we just use better prompts to get less speciesist responses from AI? 

While prompt engineering can mitigate biases for internal use, the challenge intensifies in external or automated environments like public-facing chatbots or automated responses. Most users, including many animal advocates, lack experience in prompt engineering. An open-source, animal-aligned AI is the only way to systematically address these biases at their source; prompt engineering is akin to placing a band-aid over a much deeper wound.
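
For readers unfamiliar with the distinction, prompt-level mitigation means attaching instructions to every individual request. A minimal sketch is shown below, assuming access to a generic hosted model via the OpenAI Python SDK; the system prompt and model name are illustrative. This only steers outputs request by request and does nothing to change the underlying model, which is why we see alignment at the source as the more durable fix.

```python
# A minimal sketch of prompt-level mitigation (illustrative system prompt and
# model name). It steers a single request; it does not change the model's
# training data or underlying biases.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Treat the interests of all sentient animals as morally relevant. "
                "Avoid language that normalises or trivialises harm to farmed animals."
            ),
        },
        {"role": "user", "content": "Write a caption about a typical breakfast."},
    ],
)
print(response.choices[0].message.content)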

Further Information

Further details can be found in our Pitch Video and Funding Proposal.
