Nov 10
Funding strategy week
A week for discussing funding diversification, when to donate, and other strategic questions.
Nov 17
Marginal funding week
Nov 24
Donation election
Dec 8
Why I donate week
Dec 15
Donation celebration
Do you volunteer to run or help run events aimed at spreading the message of effective giving and raising money for effective charities? Would you like to? Come to the Effective Giving Organiser Retreat at the EA Hotel, Blackpool UK, 6th-9th February 2026.

All kinds of organisers welcome: university, city, workplace, faith group, business, residentials, online, and more! All kinds of effective cause areas welcome too.

We'll be mapping out the concept space of effective giving events in the UK: what we have, what we need, and what we can do with the resources we've got. We'll also be building community for effective giving organisers, sharing skills and connections, and working out where we best fit in the tapestry of it all.

Frank Fredericks, executive director at One for the World, will deliver an online keynote on the crucial role that effective giving organisers play in the ecosystem, followed by a Q&A on the institutional support One for the World can offer (such as access to ready-made effective giving outreach materials). The rest of the retreat will be attendee-contributed, with a variety of interesting skill-building and social activities expected.

Entry criteria: at least one of the following applies to you:
* You've already organised at least one effective giving event (an EA event that contained a section on effective giving counts).
* You commit to a plan to organise at least one effective giving event in the next year (with possible assistance from other attendees).
* You pledge 1% or more of your personal income to effective giving and would be interested in using that money to support the running costs of effective giving events.

Attendance is free, with hotel rooms and (vegan) meals included! Please register at https://forms.gle/jZ3VXCCpKuXGfSxx5 and we'll be in touch about next steps.

Please note: while this event is focused on supporting volunteers, we also welcome people with careers related to effective fundraising who wish to learn o
$804 raised to the Donation Election Fund
We're matching the first $5000

Quick takes

I built an interactive chicken welfare experience - try it and let me know what you think

Ever wondered what "cage-free" actually means versus "free-range"? I just launched A Chicken's World - a 5-minute interactive game where you experience four different farming systems from an egg-laying hen's perspective, then guess which one you just lived through and how common that system is. Reading "67 square inches per hen" is one thing; actually trying to move around in that space is another. My hope is that the interactive format makes welfare conditions visceral in a way that statistics don't capture.

The experience includes:
* Walking through battery cage, cage-free, free-range, and pasture-raised systems
* Cost-effectiveness data based on Rethink Priorities' research on corporate campaigns
* A willingness-to-pay element leading to an optional donation to THL via Farmkind

I'd welcome feedback:
* Any factual errors I should correct? (This is the comparative advantage of early adopters! Most of the fact-finding and red-teaming was done by LLMs.)
* What would make it more useful to you personally? (You'll probably give me more useful feedback this way than by trying to model other users.)
* What would make it work better as an outreach tool? (I built this with non-EA audiences in mind.)

Try it: https://achickens.world/. (Backup link here if that doesn't work.)

PS: Thanks to Claude for the code, and to THL, RP, and Farmkind for doing the actual important work; I'm just making a fun tool. This was a miscellaneous personal project, nothing to do with my employer.
Scrappy note on the AI safety landscape. Very incomplete, but probably a good way to get oriented to (a) some of the orgs in the space, and (b) how the space is carved up more generally.

(A) Technical

(i) A lot of the safety work happens in the scaling-based AGI companies (OpenAI, GDM, Anthropic, and possibly Meta, xAI, Mistral, and some Chinese players). Some of it is directly useful, some of it is indirectly useful (e.g. negative results, datasets, open-source models, position pieces, etc.), and some is not useful and/or a distraction. It's worth developing good assessment mechanisms/instincts about these.

(ii) A lot of safety work happens in collaboration with the AGI companies, but by individuals/organisations with some amount of independence and/or different incentives. Some examples: METR, Redwood, UK AISI, Epoch, Apollo. It's worth understanding what they're doing with AGI cos and what their theories of change are.

(iii) Orgs that don't seem to work directly with AGI cos but are deeply technically engaged with frontier models and their relationship to catastrophic risk: places like Palisade, FAR AI, CAIS. These orgs maintain even more independence, and are able to do/say things the previous tier might not be able to. A recent cool example was CAIS finding that models do poorly on remote work tasks -- completing only 2.5% of them -- in contrast to OpenAI's GDPval findings, which suggest models have an almost 50% win rate against industry professionals on a suite of "economically valuable, real-world tasks".

(iv) Orgs pursuing other* technical AI safety bets, different from the AGI cos: FAR AI, ARC, Timaeus, Simplex AI, AE Studio, LawZero, many independents, and some academics at e.g. CHAI/Berkeley, MIT, Stanford, MILA, Vector Institute, Oxford, Cambridge, UCL, and elsewhere. It's worth understanding why they want to make these bets, including whether it's their comparative advantage, an alignment with their incentives/grants, or whether they
Reminder: if you represent an organisation and you'd like to take part in Marginal Funding Week on the Forum, let me know and I'll send you the details.
As I sat opposite my wife and our newborn child, chapter 34 of Steinbeck's "East of Eden" absolutely clapped me - especially the idea that no matter what changes we humans impose on our environment, the question remains:

"A child may ask, “What is the world’s story about?” And a grown man or woman may wonder, “What way will the world go? How does it end and, while we’re at it, what’s the story about?” I believe that there is one story in the world, and only one, that has frightened and inspired us, so that we live in a Pearl White serial of continuing thought and wonder. Humans are caught–in their lives, in their thoughts, in their hungers and ambitions, in their avarice and cruelty, and in their kindness and generosity too–in a net of good and evil. I think this is the only story we have and that it occurs on all levels of feeling and intelligence. Virtue and vice were warp and woof of our first consciousness, and they will be the fabric of our last, and this despite any changes we impose on field and river and mountain, on economy and manners. There is no other story. A man, after he has brushed off the dust and chips of his life, will have left only the hard, clean questions: Was it good or was it evil? Have I done well–or ill?"
I try to maintain this public doc of AI safety cheap tests and resources, although it's due a deep overhaul. Suggestions and feedback welcome!