by Ben
2 min read

Epistemic status: 30% (plus or minus 50%). Further details at the bottom. 

In the 2019 EA Cause Prioritisation survey, Global Poverty remains the most popular single cause across the sample as a whole. But after more engagement with EA, around 42% of people changed their cause area, and of those, a majority (54%) moved towards the Long Term Future/Catastrophic and Existential Risk Reduction.

While many people find that donations help them stay engaged (and continue to be a great thing to do), there has been much discussion of other ways people can contribute positively. 

In thinking about the long-run future, one area of research has been improving humanity's resilience to disasters. A 2014 paper looked at global refuges, and more recently ALLFED, among others, has studied ways to feed humanity in disaster scenarios.

Much work has been done, and much more is needed, to directly reduce risks, such as through pandemic preparedness, improving nuclear treaties, and improving the functioning of international institutions.

But we believe that there are still opportunities to increase resilience in disaster scenarios. Wouldn't it be great if there was a way to directly link the simplicity of donations with effective methods for the recovery of civilisation?

Photo credit to Facebook and Wikipedia - cans shown are illustrative only 

Canning what we give

In The Knowledge by Lewis Dartnell (p. 40), an estimate is given of how long a supermarket would be able to feed a single person: 

So if you were a survivor with an entire supermarket to yourself, how long could you subsist on its contents? Your best strategy would be to consume perishable goods for the first few weeks, and then turn to dried pasta and rice... A single average-sized supermarket should be able to sustain you for around 55 years - 63 if you eat the canned cat and dog food as well.

But for an entire population, there would be far fewer resources to go around per person.

The UK Department for Environment, Food and Rural Affairs (DEFRA) estimated in 2010 that there was a national stock reserve of 11.8 days of 'ambient slow-moving groceries'. (ibid, p.40)

It seems clear that there aren't enough canned goods.
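The gap between the two figures above can be sketched with some back-of-envelope arithmetic (the town size below is an illustrative assumption, not from the post):

```python
# Back-of-envelope sketch of the figures quoted above.
SUPERMARKET_YEARS_ONE_PERSON = 55   # Dartnell's estimate, excluding pet food
NATIONAL_RESERVE_DAYS = 11.8        # DEFRA 2010, 'ambient slow-moving groceries'

# One supermarket shared by a town rather than hoarded by a lone survivor:
town_population = 10_000            # assumed, purely illustrative
person_days = SUPERMARKET_YEARS_ONE_PERSON * 365
days_for_town = person_days / town_population
print(f"One supermarket feeds {town_population:,} people for ~{days_for_town:.1f} days")

# Versus the estimated national stockpile:
print(f"National reserve: ~{NATIONAL_RESERVE_DAYS} days")
```

Two days per supermarket, or under a fortnight of national stock: either way, the shelves empty quickly.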

Our proposal

We propose that: 

  • We try to expand the range of things that are canned, and find ways to bury them deep in the earth (ideally beyond the reach of the mole people)
  • Donors to GWWC instead consider CWWG
  • Donors put the valuable items they would want in a disaster scenario into cans, e.g. fruit salads, Worcester sauce, marmalade
  • EA Funds provides a donation infrastructure to support sending cans

A mock-up of the CWWG dashboard

Risks

We are concerned that: 

Further information 

Partial credit to this goes to Harri Besceli - we came up with the idea together.

This was a joke. Happy April Fools!

Comments



May I kindly suggest "Yes we can!" as a promotional slogan for your new organization? It seems to have a good track record.

Looking for co-founders for a corporate canpaigning org:

Assuming an average person can can a can of leftover food within a minute, if every company would allow each employee to can excess canteen food for only 15 minutes after lunch for a 30-year career, each person can easily can 80,000 cans within their lifetime.

"Yes We Can" 

Love this. I'm quickly penning a new GWWC strategy so we can try and incorporate the proposals of CWWG in their entirety.
