Quick takes


I built an interactive chicken welfare experience - try it and let me know what you think

Ever wondered what "cage-free" actually means versus "free-range"? I just launched A Chicken's World - a 5-minute interactive game where you experience four different farming systems from an egg-laying hen's perspective, then guess which one you just lived through and how common that system is.

Reading "67 square inches per hen" is one thing, but actually trying to move around in that space is another. My hope is that the interactive format makes welfare conditions visc... (read more)


Enjoyed it, a good start.

I like the stylized illustrations, but I think a bit more realism (or at least detail) could be helpful. Some of the activities and the pain suffered by the chickens were hard to see.

The transition to the factory farm/caged chickens environment was dramatic and had, I think, the impact you were seeking.

One fact-based question I don't have the answer to -- does this really depict the conditions for chickens whose eggs are labeled "pasture raised"? I hope so, but I vaguely recall hearing that it is not a rigorously enforced label.

3
Sanjay
I feel like this is a first step on the road to something that might be quite powerful at communicating chicken/hen welfare. The thing that was missing for me was that when I was "playing" at being a chicken in the different environments, I didn't see the point. I did various things, but found them boring.

The easiest way to better gamify this is to explain upfront that the user will be asked to guess what sort of environment the chicken is in, so the user can better orient themselves to what they are trying to achieve.

A better way to gamify it is to add a welfare score. It would probably need some careful thought, because you want the scoring system to capture the idea that the chicken wants to do various different things (i.e. sitting on the perch, coming off, and going back on again ad nauseam shouldn't get you a good score). It should also capture the idea that being pecked or harmed by other chickens hurts you, which teaches you not to get too close. And perhaps the scoring system might incentivise you to hurt other chickens (e.g. pecking them might make you feel less bad -- again, this needs to align with how the animals actually feel and our best models of what motivates them to peck other chickens). The idea should be that no matter how well you play the game, your welfare will be terrible in the factory-farmed condition, and less bad in the others.

Another more minor point: the instructions said I could use arrow keys or WASD. I couldn't get the arrow keys to work, which was a shame because I prefer them to WASD.
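A minimal sketch of the kind of welfare score described above (the behaviour names, weights, and diminishing-returns rule are all invented here for illustration, not taken from the game):

```python
# Hypothetical welfare-score sketch for the chicken game.
# Design goals from the comment above: variety of natural behaviours is
# rewarded, mindless repetition is not, and being pecked always hurts --
# so in a crowded cage the score stays low no matter how well you play.

def welfare_score(events):
    """events: ordered list of strings, e.g. 'perch', 'dust_bathe', 'pecked'."""
    counts = {}
    score = 0.0
    for event in events:
        if event == "pecked":
            # Harm from other chickens always costs welfare.
            score -= 5.0
            continue
        n = counts.get(event, 0)
        # Diminishing returns: repeating the same behaviour ad nauseam
        # (perch on, perch off, perch on...) adds almost nothing.
        score += 2.0 / (1 + n)
        counts[event] = n + 1
    return score

# Varied behaviour beats repetition, and a pecking-heavy cage scores worst:
varied = welfare_score(["perch", "dust_bathe", "forage"])   # 6.0
repetitive = welfare_score(["perch", "perch", "perch"])     # ~3.67
caged = welfare_score(["pecked", "pecked", "perch"])        # -8.0
```

A real scoring system would need tuning against what actually motivates hens, as the comment notes, but even a toy version like this makes the "terrible in the cage, less bad elsewhere" property easy to enforce.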
2
BrianTan
The main URL didn't work for me, but the backup did. This was fun and useful; it helped me better understand what these different farming systems are like!

I've recently made an update to our Announcement on the future of Wytham Abbey noting that, as of today, the property has formally been sold. As was envisioned, proceeds from the sale will be allocated to high-impact charities, including EV’s operations.

Congratulations!

What AI model does SummaryBot use? And does whoever runs SummaryBot use any special tricks on top of that model? It could just be bias, but SummaryBot seems better at summarizing stuff than GPT-5 Thinking, o3, or Gemini 2.5 Pro, so I'm wondering if it's a different model or maybe just good prompting or something else.

@Toby Tremlett🔹, are you SummaryBot's keeper? Or did you just manage its evil twin?

2
Yarrow Bouchard 🔸
Thanks, Toby!
3
Dane Valerie
It used to run on Claude, but I’ve since moved it to a ChatGPT project using GPT-5. I update the system instructions quarterly based on feedback, which probably explains the difference you’re seeing. You can read more in this doc on posting SummaryBot comments.

Thank you very much for the info! It's probably down to your prompting, then. Squeezing things into six bullet points might just be a helpful format for ChatGPT, or for summaries (even human-written ones) in general. Maybe I'll try that myself when I want to ask ChatGPT to summarize something.

I also think there's an element of "magic"/illusion to it, though, since I just noticed a couple of mistakes SummaryBot made and now its powers seem less mysterious.

Reminder: if you represent an organisation and you'd like to take part in marginal funding week on the Forum - let me know and I'll send you the details. 

How much money does it take to start a tiny free trade zone in Africa?
Similar to the one around the port in Nigeria.

Obviously there's the official way of doing this -- working with a certain big eastern government. I am more curious about unofficial ways of doing this. It is my understanding that, unlike in the USA, many sovereigns on the continent (both UN-recognized and de facto powers) will accept payment in exchange for granting you the right to build stuff. Do any of you know how much that costs? Ballpark? I imagine it is much, much cheaper than wor... (read more)

On sparing predatory bugs.

A common trope when it comes to predatory arthropods is, e.g., "Don't kill spiders; they're good to have around because they eat other bugs."[1] But, setting aside the welfare of the beings that get eaten, surely this is not people's true objection. Surely this reasoning fails a reversal test: few people would say "Centipedes are good to have around... therefore I'm going to order a box of them and release them into my house."[2] What is implied by the fact that non-EA people are willing to spare bugs based on reasoning ... (read more)

[This comment is no longer endorsed by its author]
4
Henry Howard🔸
I think it serves two purposes:

1. Most people want to feel like they are good and kind. Preventing harm to something much smaller/weaker than themselves reinforces this. Even better if it requires very little effort.
2. Social signal. I personally immediately trust people more if they take their spiders outside rather than kill them. I think they're more likely to have good intentions in whatever else they do. I think many people feel the same way and are vaguely aware that carrying themselves like this sends a useful signal to others.
6
Yarrow Bouchard 🔸
That can't possibly be your true objection to this line of reasoning, as it doesn't make sense, so what do you really believe? Let me speculate...

Seriously though, the reasoning is perfectly valid. If it's true that spiders will reduce the net amount of bugs in your home, then not killing spiders is something you can do to reduce the bugs in your home with zero effort. And, if you were killing spiders before, this means you actually reduce your effort, so the amount of effort required is negative.

Your reversal test is not apt. Buying spiders (or other predator bugs) and releasing them in your home is incredibly costly in terms of money, time, effort, and attention when compared to deciding not to do something you were previously doing that took effort. If deciding not to kill spiders is a -10 amount of effort, buying and releasing live spiders is like a +1000, plus it costs money. The correct comparison would be if there were another -10 effort thing you could do with comparable benefit. But there isn't.

If you did want to go through the trouble of expending +1000 effort and all the time and money and mental energy required to release live bugs, then indeed it would be more rational to buy poison or traps or whatever, and I think that's exactly what most people would do — and actually do, when the amount of bugs in their home rises to a level where they feel it warrants a response.

There's also an ecological problem with releasing more spiders (or other predator bugs) to catch prey bugs. You might think that the number of spiders in a home will naturally grow to equilibrium; that is, spiders reproduce until the population of prey bugs is no longer large enough to support further population growth. On this assumption, buying spiders would be a waste, and would just lead to a bunch of dead spiders in your home — not something people want. (Conversely, if you notice a lot of spiders in your home, like an unusual, disconcerting amount, it's probabl

Although I'm not convinced that sparing spiders is justified on self-interested grounds (aren't most prey insects less dangerous to have around than spiders? If you introduce new spiders, yes, they will starve, but wouldn't this still cut the prey population, at least in the short term?), you make good points on that front. More importantly, you are right that, even if someone's reasoning is shaky, it is unfounded for me to assume a specific motive without evidence for that motive.

I try to maintain this public doc of AI safety cheap tests and resources, although it's due a deep overhaul. 

 

Suggestions and feedback welcome!

This is great, and people should do this for more cause areas! 

Idea for someone with a bit of free time: 

While I don't have the bandwidth for this atm, someone should make a public (or private for, say, policy/reputation reasons) list of people working in (one or multiple of) the very neglected cause areas — e.g., digital minds (this is a good start), insect welfare, space governance, AI-enabled coups, and even AI safety (more for the second reason than others). Optional but nice-to-have(s): notes on what they’re working on, time contributed, background, sub-area, and the rough rate of growth in the field (you pr... (read more)

It's interesting to think about the potential upsides of AGI from the perspective of people who struggle with suicidal thoughts. It seems like there are significant chances of an extremely long, happy future that probably is not balanced by the S-risk (it seems more likely misaligned AGI would annihilate us than perpetually torture us).

Suicidal thoughts were much more compelling to me in the past than they have been since recent developments. Thinking about losing the chance of an unimaginably good future (even just a 5-10% chance) forecloses any further consideration of methods.

Maybe disseminating this line of thinking could be helpful for suicide prevention?

In general, I don't think that spending time thinking or talking about speculative future possibilities relating to AGI is going to help anyone with depression, anxiety, or suicidal ideation. I think the online communities that like talking about these speculative future possibilities tend to have properties that make them bad for people who are struggling with their mental health. So, even if there is an optimistic story to tell about AGI, which I think is plausible — personally, I'm much more optimistic about AGI than I am pessimistic, although I think A... (read more)

ChatGPT’s usage terms now forbid it from giving legal and medical advice:

So you cannot use our services for: provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional (https://openai.com/en-GB/policies/usage-policies/)

Some users are reporting that ChatGPT refuses to give certain kinds of medical advice. I can’t figure out if this also applies to API usage.

It sounds like the regulatory threats and negative press may be working, and it’ll be interesting to see if othe... (read more)

In general, I tend to disregard anything any tech company adds to their terms of service. People often use lines added to the terms of service to read the tea leaves about a company's grand strategy, but isn't it more likely these changes to the TOS get made by some low-level employee in the legal department without the knowledge of the C-suite or other top executives?  

And, indeed, The Verge seems to agree with me here (emphasis added): 

OpenAI says ChatGPT’s behavior “remains unchanged” after reports across social media falsely claimed that new

... (read more)
2
Ian Turner
In my experience, you get better advice anyway if you frame the question as though you are a professional. So instead of, "here is a picture of my rash, what do you think?", you say, "A patient has provided this picture of a rash, what is your diagnosis?".
2
huw
If OpenAI are sincere in adding this to their ToS or there’s further regulatory pressure, the models will presumably get better at preventing this. I think it’s important.

Scrappy note on the AI safety landscape. Very incomplete, but probably a good way to get oriented to (a) some of the orgs in the space, and (b) how the space is carved up more generally.

 

(A) Technical

(i) A lot of the safety work happens in the scaling-based AGI companies (OpenAI, GDM, Anthropic, and possibly Meta, xAI, Mistral, and some Chinese players). Some of it is directly useful, some of it is indirectly useful (e.g. negative results, datasets, open-source models, position pieces etc.), and some is not useful and/or a distraction. It's worth deve... (read more)

4
Benevolent_Rain
This is super helpful - do you feel like your overview even points at potentially useful safety work that is currently not covered by anyone?

"anyone" is a high bar! Maybe worth looking at what notable orgs might want to fund, as a way of spotting "useful safety work not covered by enough people"?

I notice you're already thinking about this in some useful ways, nice. I'd love to see a clean picture of threat models overlaid with plans/orgs that aim to address them. 

I think the field is changing too fast for any specific claim here to stay true in 6-12m.

As I sat opposite my wife and our newborn child, chapter 34 of Steinbeck's "East of Eden" absolutely clapped me - especially the idea that no matter what changes we humans impose on our environment, the question remains.

"A child may ask, “What is the world’s story about?” And a grown man or woman may wonder, “What way will the world go? How does it end and, while we’re at it, what’s the story about?”

I believe that there is one story in the world, and only one, that has frightened and inspired us, so that we live in a Pearl White serial of continuing thought and won... (read more)

What I’ve learned from informal background checks in EA

I sometimes do informal background or reference checks on "semi-influential" people in and around EA. A couple of times I decided not to get too close — nothing dramatic, just enough small signals that stepping back felt wiser. (And to be fair, I had solid alternatives; with fewer options, one might reasonably accept more risk.)

I typically don’t ask for curated references, partly because it feels out of place outside formal hiring and partly because I’m lazy — it’s much quicker to ask a trusted friend ... (read more)

I wrote a quick draft on reasons you might want to skip pre-deployment Phase 3 drug trials (and instead do an experimental rollout with post-deployment trials, with the option of recall) for vaccines for diseases with high mortality burden, or for novel pandemics. https://inchpin.substack.com/p/skip-phase-3

It's written in a pretty rushed way, but I know this idea has been bouncing around for a while and I haven't seen a clearer writeup elsewhere, so I hope it can start a conversation!

SummaryBotV2 didn't seem to get more agree reacts than V1, so I'm shutting it down. Apologies for any inconvenience. 

Signal boost: check out the "Stars" and "Follows" on my GitHub account for ideas of where to get stuck into AI safety.


A lot of people want to understand AI safety by playing around with code and closing some issues, but don't know where to find such projects. So I've recently started scanning GitHub for AI-safety-relevant projects and repositories. I've starred some, and followed some orgs/coders there as well, to make it easy for you to find these and get involved.

Excited to get more suggestions too! Feel free to comment here, or send them to me at sk@80000hours.org

I just learned via Martin Sustrik about the late Sofia Corradi

the spiritual mother of Erasmus, the European student exchange programme, or, in the words of Umberto Eco, “that thing where a Catalan boy goes to study in Belgium, meets a Flemish girl, falls in love with her, marries her, and starts a European family.”

Sustrik points out that none of the glowing obituaries for her mention the sheer scale of Erasmus. The Fulbright in the US is the 2nd largest comparable program, but it's a very distant second:

So far, approximately sixteen million people h

... (read more)

Thanks for sharing this. I did an Erasmus exchange year in Italy in 2010-11 that was very important for my personal growth, although it was not particularly beneficial professionally or academically.

3
Yarrow Bouchard 🔸
Quite interesting!

Nancy Pelosi is retiring; consider donating to Scott Wiener.

[Link to donate; or consider a bank transfer option to avoid fees, see below.]

Nancy Pelosi has just announced that she is retiring. Previously I wrote up a case for donating to Scott Wiener, who is running for her seat, in which I estimated a 60% chance that she would retire. While I recommended donating on the day that he announced his campaign launch, I noted that donations would look much better ex post in worlds where Pelosi retires, and that my recommendation to donate on launch day was sensi... (read more)

I’m skeptical that corporate AI safety commitments work like @Holden Karnofsky suggests. The “cage-free” analogy breaks: one temporary defector can erase ~all progress, unlike with chickens.

I'm less sure about corporate commitments to AI safety than Karnofsky. In the latest 80,000 Hours podcast episode, Karnofsky uses the cage-free example to argue that it might be effective to push frontier AI companies on safety. I feel the analogy might fail in a potentially significant way: it breaks down in terms of how many companies need to be convinced:
-For cage fre... (read more)

Distribution rules everything around me

 

First time founders are obsessed with product. Second time founders are obsessed with distribution.

 

I see people in and around EA building tooling for forecasting, epistemics, starting projects, etc. They often neglect distribution. This means that they will probably fail, because they will not get enough users to justify the effort that went into their existence.

 

Some solutions for EAs:

  • Build a distribution pipeline for your work. Have a mailing list on Substack. Have a Twitter account. This means that
... (read more)
4
Ozzie Gooen
This feels highly targeted :) Noted, though! I find it quite difficult to make good technical progress, manage the nonprofit basics, and do marketing/outreach with a tiny team (mainly just me right now). But I would like to improve.

What can I say, I really like your work and I wish it were more widely known, which would mean you'd get more resources to continue doing it.

4
NunoSempere
<https://forum.effectivealtruism.org/posts/4DeWPdPeBmJsEGJJn/interview-with-a-drone-expert-on-the-future-of-ai-warfare>