This is a Draft Amnesty Week draft. It may not be polished, up to my usual standards, or fully thought through. This is a Forum post that I wouldn't have posted without the nudge of Draft Amnesty Week. If it were more than a draft, I would go deeper into making the case for small events and make an actual comparison with EAG(x) events.

With EA Belgium, we've organized two summer retreats, both at the same location. Since the location was a volunteer’s house and garden, and they also had a large tent available, our only expense was food.

The 2023 edition ran from Saturday noon until Sunday noon, meaning 4 meals.

The 2024 edition ran from Friday evening until Sunday noon, so 6 meals.

Below, you'll find the cost breakdown and survey results from both events:

| | 2023 | 2024 |
|---|---|---|
| Costs | €190.00 | €307.00 |
| Income | €271.00 | €310.00 |
| Profit | €81.00 | €3.00 |
| # Participants who contributed money | 13 | 12 |
| Average contribution | €20.85 | €25.83 |
| # Meals | 4 | 6 |
| Average cost per meal | €47.50 | €51.17 |
| Cost per participant/organizer | €10.56 | €18.06 |
| # Participants who filled out survey | 12 | 11 |
| # Organizers | 4 | 4 |
| Total # participants, including organizers | 18 | 17 |
| # Participants who didn't fill out survey | 2 | 2 |
| # Participants & organizers who also went the year before | N/A | 5 |
| How would you rate your experience? (average) | 8 | 9 |
| How many new connections did you make? (average) | 4.41 | 5.27 |
| I learned something new and important | 83.33% | 81.82% |
| I changed my mind about something | 33.33% | 54.55% |
| I have changed my career plans | 8.33% | 0.00% |
| Would you go to a similar event like this next year? Yes | 75.00% | 81.82% |
| Maybe | 25.00% | 18.18% |
| No | 0.00% | 0.00% |

Please note that we aren't exactly sure of some of the 2023 figures; for those, I've used my best guess.
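
As a quick sanity check on the derived numbers in the table, here's a minimal Python sketch that recomputes them from the raw figures. Only the costs, income, contributor counts, meal counts, and head counts are taken as inputs; everything else is derived from those.

```python
# Recompute the derived figures in the table above from the raw inputs.
raw = {
    "2023": {"costs": 190.00, "income": 271.00, "contributors": 13, "meals": 4, "people": 18},
    "2024": {"costs": 307.00, "income": 310.00, "contributors": 12, "meals": 6, "people": 17},
}

for year, d in raw.items():
    profit = d["income"] - d["costs"]                    # income minus costs
    avg_contribution = d["income"] / d["contributors"]   # per contributing participant
    cost_per_meal = d["costs"] / d["meals"]
    cost_per_person = d["costs"] / d["people"]           # participants + organizers
    print(f"{year}: profit €{profit:.2f}, avg contribution €{avg_contribution:.2f}, "
          f"cost/meal €{cost_per_meal:.2f}, cost/person €{cost_per_person:.2f}")
```

Running this reproduces the per-person figures of €10.56 (2023) and €18.06 (2024) quoted below.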

With a cost of just €11–18 per participant, these retreats were, at face value, significantly more cost-effective than an EAG(x). Of course, a bigger event would quickly become more expensive, mainly because a venue would have to be rented. While this format isn't easily scalable, I hope it inspires other local groups to organize similar small-scale events! I (Jeroen) personally think the EA Summits are exciting for this reason.

Our retreats were somewhere between an unconference and a traditional conference: there was only one activity at a time, and the schedule was mostly set in advance, but participants played a key role in selecting and giving workshops and talks.

Comments

Anecdotal evidence, but I would attribute at least part of the reason I ended up in an EA career to a 2-day event of ~12 people hosted by EA London.

There's a write-up here (if you mean the same thing), but it was about 30 people.

That’s the one! 
Weird that I remember it being half the size.

15-20 people sharing costs for a small EA weekend sounds really cozy. :)

Could you describe the sleeping arrangements/accommodations a little bit? Did the person who volunteered the house happen to have a large number of guest bedrooms? Did participants bring tents and sleeping bags? 

No guest bedrooms. We encouraged tents and sleeping bags. Some people just went home for the night, while others came only for one day. This meant that for both editions only 5–8 people ended up staying overnight, with most of them sleeping indoors in the living room.
