I built an interactive chicken welfare experience - try it and let me know what you think
Ever wondered what "cage-free" actually means versus "free-range"? I just launched A Chicken's World - a 5-minute interactive game where you experience four different farming systems from an egg-laying hen's perspective, then guess which one you just lived through and how common that system is.
Reading "67 square inches per hen" is one thing, but actually trying to move around in that space is another. My hope is that the interactive format makes welfare conditions visc...
Enjoyed it, a good start.
I like the stylized illustrations, but I think a bit more realism (or at least detail) could be helpful. Some of the activities and the pain suffered by the chickens were hard to see.
The transition to the factory farm/caged chickens environment was dramatic and had the impact I think you were seeking.
One fact-based question which I don't have the answer to -- does this really depict the conditions for chickens where the eggs are labeled as "pasture raised"? I hope so, but I vaguely recall hearing that it was not a rigorously enforced label.
I've recently updated our Announcement on the future of Wytham Abbey to note that, as of today, the property has formally been sold. As was envisioned, proceeds from the sale will be allocated to high-impact charities, including EV’s operations.
What AI model does SummaryBot use? And does whoever runs SummaryBot use any special tricks on top of that model? It could just be bias, but SummaryBot seems better at summarizing stuff than GPT-5 Thinking, o3, or Gemini 2.5 Pro, so I'm wondering if it's a different model or maybe just good prompting or something else.
@Toby Tremlett🔹, are you SummaryBot's keeper? Or did you just manage its evil twin?
Thank you very much for the info! It's probably down to your prompting, then. Squeezing things into 6 bullet points might just be a helpful format for ChatGPT, or for summaries (even human-written ones) in general. Maybe I'll try that myself when I want to ask ChatGPT to summarize something.
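For what it's worth, a minimal sketch of that six-bullet approach (this is my guess at a prompt, not SummaryBot's actual setup, and the model name is a placeholder):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

post_text = "..."  # the text you want summarized

# A guessed "fixed number of bullets" summarization prompt; SummaryBot's
# real prompt and model are not public, so this is only an illustration.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the user's text in at most 6 bullet points. "
                "Each bullet should capture one key claim or argument, "
                "and preserve the author's hedges and uncertainty."
            ),
        },
        {"role": "user", "content": post_text},
    ],
)
print(response.choices[0].message.content)
```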
I also think there's an element of "magic"/illusion to it, though, since I just noticed a couple mistakes SummaryBot made and now its powers seem less mysterious.
How much money does it take to start a tiny free trade zone in Africa?
Similar to the one around the port in Nigeria.
Obviously there's the official way of doing this - working with a certain big eastern government. I'm more curious about unofficial ways of doing this. It's my understanding that, unlike in the USA, many sovereigns on the continent (both UN-recognized and de-facto powers) will accept payment in exchange for granting you the right to build stuff. Do any of you know how much that costs? Ballpark? I imagine it is much much cheaper than wor...
On sparing predatory bugs.
A common trope when it comes to predatory arthropods is, e.g., "Don't kill spiders; they're good to have around because they eat other bugs."[1] But, setting aside the welfare of the beings that get eaten, surely this is not people's true objection. Surely this reasoning fails a reversal test: few people would say "Centipedes are good to have around... therefore I'm going to order a box of them and release them into my house."[2] What is implied by the fact that non-EA people are willing to spare bugs based on reasoning ...
Although I'm not convinced that sparing spiders is justified on self-interested grounds (aren't most prey insects less dangerous to have around than spiders? if you introduce new spiders, yes, they will starve, but wouldn't this still cut the prey population at least in the short term?), you make good points on that front. More importantly, you are right that, even if someone's reasoning is shaky, it is unfounded for me to assume a specific motive without evidence for that motive.
I try to maintain this public doc of AI safety cheap tests and resources, although it's due a deep overhaul.
Suggestions and feedback welcome!
While I don't have the bandwidth for this atm, someone should make a public (or private for, say, policy/reputation reasons) list of people working in (one or multiple of) the very neglected cause areas — e.g., digital minds (this is a good start), insect welfare, space governance, AI-enabled coups, and even AI safety (more for the second reason than others). Optional but nice-to-have(s): notes on what they’re working on, time contributed, background, sub-area, and the rough rate of growth in the field (you pr...
It's interesting to think about the potential upsides of AGI from the perspective of people who struggle with suicidal thoughts. There seem to be significant chances of an extremely long, happy future, and that probably is not outweighed by s-risk (it seems more likely that misaligned AGI would annihilate us than perpetually torture us).
In the past, suicidal thoughts felt much more compelling to me than they have since these recent developments. Thinking about losing even a modest (say 5-10%) chance at an unimaginably good future forecloses any further consideration of methods.
Maybe disseminating this line of thinking could be helpful for suicide prevention?
In general, I don't think that spending time thinking or talking about speculative future possibilities relating to AGI is going to help anyone with depression, anxiety, or suicidal ideation. I think the online communities that like talking about these speculative future possibilities tend to have properties that make them bad for people who are struggling with their mental health. So, even if there is an optimistic story to tell about AGI, which I think is plausible — personally, I'm much more optimistic about AGI than I am pessimistic, although I think A...
ChatGPT’s usage terms now forbid using it for legal and medical advice:
So you cannot use our services for: provision of tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional (https://openai.com/en-GB/policies/usage-policies/)
Some users are reporting that ChatGPT refuses to give certain kinds of medical advice. I can’t figure out if this also applies to API usage.
It sounds like the regulatory threats and negative press may be working, and it’ll be interesting to see if othe...
In general, I tend to disregard anything any tech company adds to their terms of service. People often use lines added to the terms of service to read the tea leaves about a company's grand strategy, but isn't it more likely these changes to the TOS get made by some low-level employee in the legal department without the knowledge of the C-suite or other top executives?
And, indeed, The Verge seems to agree with me here (emphasis added):
...OpenAI says ChatGPT’s behavior “remains unchanged” after reports across social media falsely claimed that new
Scrappy note on the AI safety landscape. Very incomplete, but probably a good way to get oriented to (a) some of the orgs in the space, and (b) how the space is carved up more generally.
(A) Technical
(i) A lot of the safety work happens in the scaling-based AGI companies (OpenAI, GDM, Anthropic, and possibly Meta, xAI, Mistral, and some Chinese players). Some of it is directly useful, some of it is indirectly useful (e.g., negative results, datasets, open-source models, position pieces, etc.), and some is not useful and/or a distraction. It's worth deve...
"anyone" is a high bar! Maybe worth looking at what notable orgs might want to fund, as a way of spotting "useful safety work not covered by enough people"?
I notice you're already thinking about this in some useful ways, nice. I'd love to see a clean picture of threat models overlaid with plans/orgs that aim to address them.
I think the field is changing too fast for any specific claim here to stay true in 6-12m.
As I sat opposite my wife and our newborn child, chapter 34 of Steinbeck's "East of Eden" absolutely clapped me - especially the idea that no matter what changes we humans impose on our environment, the question remains.
"A child may ask, “What is the world’s story about?” And a grown man or woman may wonder, “What way will the world go? How does it end and, while we’re at it, what’s the story about?”
I believe that there is one story in the world, and only one, that has frightened and inspired us, so that we live in a Pearl White serial of continuing thought and won...
I sometimes do informal background or reference checks on "semi-influential" people in and around EA. A couple of times I decided not to get too close — nothing dramatic, just enough small signals that stepping back felt wiser. (And to be fair, I had solid alternatives; with fewer options, one might reasonably accept more risk.)
I typically don’t ask for curated references, partly because it feels out of place outside formal hiring and partly because I’m lazy — it’s much quicker to ask a trusted friend ...
I wrote a quick draft on reasons you might want to skip pre-deployment Phase 3 drug trials (and instead do an experimental rollout with post-deployment trials, with the option of recall) for vaccines for diseases with a high mortality burden, or for novel pandemics. https://inchpin.substack.com/p/skip-phase-3
It's written in a pretty rushed way, but I know this idea has been bouncing around for a while and I haven't seen a clearer writeup elsewhere, so I hope it can start a conversation!
SummaryBotV2 didn't seem to get more agree reacts than V1, so I'm shutting it down. Apologies for any inconvenience.
Signal boost: Check out the "Stars" and "Follows" on my github account for ideas of where to get stuck into AI safety.
A lot of people want to understand AI safety by playing around with code and closing some issues, but don't know where to find such projects. So I've recently started scanning GitHub for AI-safety-relevant projects and repositories. I've starred some, and followed some orgs/coders there as well, to make it easy for you to find these and get involved.
Excited to get more suggestions too! Feel free to comment here, or send them to me at sk@80000hours.org
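If you'd rather pull the list programmatically, here's a minimal sketch using GitHub's public REST API (the username is a placeholder; swap in the actual account):

```python
import requests

USER = "some-user"  # placeholder; replace with the actual GitHub account

# List the repos this user has starred (first page, up to 100 results).
resp = requests.get(
    f"https://api.github.com/users/{USER}/starred",
    params={"per_page": 100},
    timeout=10,
)
resp.raise_for_status()

for repo in resp.json():
    print(f"{repo['full_name']}: {repo.get('description') or '(no description)'}")
```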
I just learned via Martin Sustrik about the late Sofia Corradi,
the spiritual mother of Erasmus, the European student exchange programme, or, in the words of Umberto Eco, “that thing where a Catalan boy goes to study in Belgium, meets a Flemish girl, falls in love with her, marries her, and starts a European family.”
Sustrik points out that none of the glowing obituaries for her mention the sheer scale of Erasmus. The Fulbright in the US is the 2nd largest comparable program, but it's a very distant second:
...So far, approximately sixteen million people h
[Link to donate; or consider a bank transfer option to avoid fees, see below.]
Nancy Pelosi has just announced that she is retiring. Previously I wrote up a case for donating to Scott Wiener, who is running for her seat, in which I estimated a 60% chance that she would retire. While I recommended donating on the day that he announced his campaign launch, I noted that donations would look much better ex post in worlds where Pelosi retires, and that my recommendation to donate on launch day was sensi...
I’m skeptical that corporate AI safety commitments work like @Holden Karnofsky suggests. The “cage-free” analogy breaks: one temporary defector can erase ~all progress, unlike with chickens.
I'm less sure about corporate commitments to AI safety than Karnofsky is. In the latest 80k hours podcast episode, Karnofsky uses the cage-free campaign as an example of why it might be effective to push frontier AI companies on safety. I think the analogy might fail in a significant way: it breaks down in terms of how many companies need to be convinced:
-For cage fre...
Distribution rules everything around me
First time founders are obsessed with product. Second time founders are obsessed with distribution.
I see people in and around EA building tooling for forecasting, epistemics, starting projects, etc. They often neglect distribution. This means they will probably fail: they won't get enough users to justify the effort that went into building them.
Some solutions for EAs: