
Last month, we[1] ran the 2024 edition of the Meta Coordination Forum (MCF) near Santa Cruz, California. As outlined in our previous post, the event brought together 39 people leading EA community-building organizations and projects to foster a shared understanding of some of the biggest challenges facing our community and align on strategies for addressing these problems. Here's a high-level summary of how it went and how we might improve future iterations.

Event highlights

Overall, we think the Meta Coordination Forum 2024 was successful in facilitating connections and collaborations while improving attendees'[2] understanding of both the EA brand/communications and the current funding landscape:

  • Attendees rated their likelihood to recommend the event at 8.94/10 on average.
  • 88.5% of attendees found it more valuable than how they would typically spend their time, with 57.1% rating it at least 3 times more valuable.
  • Over 90% of attendees reported an improved understanding of both the EA communications landscape and the funding ecosystem (our two focus areas).
  • Attendees particularly valued dedicated time for 1:1s, practical skills training (e.g., media engagement), and focused discussions on EA's key challenges.

Key outcomes

  1. Improved understanding of focus areas: Over 90% of survey respondents reported an improved understanding of both the EA communications landscape and the funding ecosystem. This was one of our main goals.
  2. Improved relationships: The event provided valuable opportunities for networking and trust-building among people leading EA community-building organizations and projects. Many attendees reported that the event was useful for building new connections and strengthening existing ones.
  3. Improved motivation and morale: Multiple attendees reported feeling reinvigorated and more committed to their work as a result of attending the event.
  4. Initial concrete results:
    1. New funding leads for organizations
    2. Improved coordination between organizations and plans for collaborative projects
    3. People being more willing to engage in public communications

We'll follow up with attendees in 6 months to assess longer-term outcomes.

Future considerations

Based on attendee feedback and our observations, we're considering the following for future events:

  1. Extending the event duration to allow for more 1:1 meetings or adding a one-day event around an EAG.
  2. Incorporating more practical skills training sessions and inviting more experts from relevant areas.
  3. Exploring ways to balance improving understanding with generating actionable next steps, acknowledging the challenges of creating concrete action plans for complex issues in a short timeframe.

Conclusion

We're grateful to all of this year’s attendees for their valuable contributions and feedback, and look forward to applying these insights to future events. 

Please see our previous announcement post for more details about the event's goals and attendees.

  1. ^

     The organizing team was Amy Labenz, Ollie Base, Sophie Thomson, Niko Bjork, and David Solar.

  2. ^

     The following metrics are based on feedback survey responses from 35 of the 39 attendees.

Comments (11)



For me, as someone who has been involved in object-level EA work for many years, this event and its main takeaways are quite underwhelming:

  1. It seems like the vast majority of the people who attended the conference do meta EA work and/or work at a large EA org (e.g. OP, GWWC, CEA). This seems like a massive skew: a lot of the impact that the movement generates comes from people doing object-level work, e.g. working at an impactful global health charity or doing biosecurity policy work. Therefore, it should follow that they would be represented more proportionally at the MCF.
  2. The goals of the meeting seem quite unambitious, and its outcomes underwhelming as a result. Goals of improving understanding of focus areas, relationships, and motivation and morale for a small group of people seem more like an extended "pep talk" for EA leaders than a thorough investigation of more fundamental questions. I must profess that I don't have a great list of what those should be, but they would be strategic questions about where EA sees its marginal value, its strategy with outreach and funding, etc.
  3. It seems like the invite list and the agenda were largely decided without consulting the rest of the community. I understand that this is hard, but why didn't you ask the Forum which pressing questions they think the MCF should try to work on together? This seems like a really obvious thing to do.
  4. As something of a side note, the self-reported data on how people found the MCF and its NPS seem like largely useless metrics of success. As with the design of any scientific study, you should have set out clear, objective (where possible) outcomes on which you would measure success beforehand, and then measured those afterwards.

It seems like the vast majority of the people who attended the conference do meta EA work and/or work at a large EA org (e.g. OP, GWWC, CEA). 

Isn't that what you'd expect from a Meta Coordination Forum? It's the forum for meta people to coordinate at. There are other forums for people doing object-level work.

I think this point teases out my underlying issue with the forum:

  • If this event were a coordination forum for meta EA individuals, then it would be reasonable for the vast majority of the attendees to be people who do EA meta work.
  • If, as I thought and still think would be more useful, this is a coordination forum on meta EA issues, then this is not a good composition of people.

On this:

  1. The original event aim definitely sounds much more like the latter
  2. I think even if the event claims to be the former (which I think would be a retrospective change in the stated aim of the event), the nature of the people and orgs attending means that some aspects of the latter would have been discussed and worked through; because of this, I think my original point largely stands.

(I helped organise this event)

Thanks for your feedback.

Actually, I think this event went well because:

  • The organising team (CEA) were opinionated about which issues to focus on, and we chose issues that we and MCF attendees could make progress on.
  • Our content was centered around just two issues (brand and funding) which allowed for focus and more substantive progress.

Many attendees expressed a similar sentiment, and some people who’ve attended this event many times said this was one of the best iterations. With that context, I’ll respond to each point:

  1. We wanted to focus on issues that were upstream of important object-level work in EA, and selected people working on those issues, rather than object-level work (though we had some attendees who were doing object-level work). I agree with you that a lot of (if not all!) the impact of the community is coming from people working at the object level, but this impact is directly affected by upstream issues such as the EA brand and funding diversity. Note that many other events we run, such as EA Global and the Summit on Existential Security, are more focused on object-level issues.
  2. To the contrary, I think we made valuable progress, though this is fairly subjective and a bit hard to defend until more projects and initiatives play out. I’m not sure what distinction you’re pointing to here; you mention we should’ve considered “[EA]’s strategy with outreach and funding”, but those were the two core themes of the event.
  3. This was a deliberate call, though we’re not confident it was the right one. CEA staff and our attendees spend a lot of time engaging with the community and getting input on what we should prioritise. We probably didn’t capture everything, but that context gives us a good grasp of which issues to work on.
  4. I don't think every event, project, and meeting in EA spaces needs to be this stringent about measuring outcomes. We use similar metrics across all of our events, and things like LTR/NPS are used in many other industries, so I think these are useful benchmarks for understanding how valuable attendees found the event.

Thanks for posting this! I appreciate the transparency from the CEA team around organizing this event and posting about the results; putting together this kind of stuff is always effortful for me, so I want to celebrate when others do it.

I do wish this retro had a bit more in the way of concrete reporting about what was discussed, specific anecdotes from attendees, or takeaways for the broader EA community; e.g. last year's MCF reports went into substantial depth on these, which I really enjoyed. But again, these things can be hard to write up, perfect shouldn't be the enemy of good enough, and I'm grateful for the steps that y'all have already taken towards showing your work in public.

Thanks, Austin :)

Results from the survey we conducted at the event (similar to the one you linked to) are still to come. Rethink Priorities led on that this year, and are still gathering data / putting it together. 

Have these been published yet? Apologies if I missed this... Would be handy! 

I nudged RP last week, and they had a bunch of other projects going on so hadn't got to posting. Nudged again :)

It worked! Thanks for the nudge to nudge :)

Haha the power of nudges!
