
This week, the CEA events team will be running the 2024 edition of the Meta Coordination Forum (MCF) in California. We’re bringing together ~40 people leading EA community-building organizations and projects to foster a shared understanding of some of the biggest challenges facing our community and align on strategies for addressing these problems. This is a short post to provide the wider community with a sense of the event’s goals and who will be attending.

Goals and themes

The Meta Coordination Forum aims to:

  • Build a shared understanding among attendees of some of the biggest challenges facing the EA community.
  • Provide space for attendees to discuss, get feedback on, and develop strategies for addressing these challenges. 
  • Foster more collaboration among people leading EA meta-organizations and projects.

While we’re encouraging attendees to prioritize one-on-one meetings and private conversations, our structured sessions will focus on two key themes:

  • Brand: what’s the current state of the EA brand, what outcomes do we want, and how can we achieve them?
  • Funding: what’s the current state of the funding landscape in EA, what strategies should we use to diversify funding, and what steps should we take?

At the event, we’ll also be conducting a survey similar to the ones we ran in 2019 and 2023, and which 80,000 Hours ran in 2017 and 2018. We’re partnering with Rethink Priorities on this survey. We hope it will give CEA, attendees at the event, and the wider community a better sense of the talent gaps that organizations face, as well as insights into some key questions facing the community.

Attendees

We invited attendees based on their ability to contribute to and implement strategies addressing our core themes. While we aimed for balanced representation across the meta work going on, our primary focus was on the individuals best positioned to drive progress on behalf of the community. We acknowledge that others might take a different approach to inviting attendees or have thoughts on who was omitted, and we welcome suggestions for future events.

Below is the list of attendees who’ve agreed to share that they’re attending the event. This list makes up the majority of attendees at the event—some preferred not to have their attendance made public.

  • Alexander Berger
  • Amy Labenz
  • Anne Schulze
  • Arden Koehler
  • Bella Forristal
  • Claire Zabel
  • Devon Fritz
  • Eli Rose
  • Emma Richter
  • George Rosenfeld
  • Howie Lempel
  • Jacob Eliosoff
  • Jessica McCurdy
  • JP Addison
  • JueYan Zhang
  • Julia Wise
  • Karolina Sarek
  • Kelsey Piper
  • Lewis Bollard
  • Luke Freeman
  • Marcus Davis
  • Max Daniel
  • Melanie Basnak
  • Michelle Hutchinson
  • Mike Levine
  • Nicole Ross
  • Patrick Gruban
  • Simran Dhaliwal
  • Sjir Hoeijmakers
  • Will MacAskill
  • Zachary Robinson

This is not a canonical list of “key people working in meta EA” or “EA leaders.” There are plenty of people who are not attending this event who are doing high-value work in the meta-EA space. Note that we have a few attendees at this year’s event who are specialists in one of our focus areas rather than leaders of an EA meta organization or team (though some attendees are both).

We’ll also encourage attendees to share their memos on the forum and think about other updates we can share that will aid transparency and coordination.

A note on comments: we’ll be running the event this week, so won’t have capacity to engage in the comments. However, we will be reading them, and that can inform discussions at the event.

Comments



Just a reminder that I think it's the wrong choice to allow attendees to leave their name off the published list.

This seems fine to me - I expect that attending this is not a large fraction of most attendees' impact on EA, and that some who didn't want to be named would not have come if they needed to be on a public list, so barring such people seems silly (I expect there are some people who would tolerate being named as the cost of coming too, of course). I would be happy to find some way to incentivise people being named.

And really, I don't think it's that important that a list of attendees be published. What do you see as the value here?

With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is. See also my explanation from last year.

...some who didn't want to be named would have not come if they needed to be on a public list, so barring such people seems silly...

How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.

The EA movement absolutely cannot carry on with the "let's allow people to do whatever without any hindrance, what could possibly go wrong?" approach.

How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.

My argument is that barring them doesn't stop them from shaping EA, just mildly inconveniences them, because much of the influence happens outside such conferences.

With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is

Which scandals do you believe would have been avoided with greater transparency, especially transparency of the form here (listing the names of those involved, with no further info)? I can see an argument that eg people who have complaints about bad behaviour (eg Owen's, or SBF/Alameda's) should make them more transparently (though that has many downsides), but that's a very different kind of transparency.

I think that, in some generality, scandals tend to be "because things aren't transparent enough", since greater transparency would typically have meant that issues people were unhappy with got caught and responded to earlier. (My case had elements of "too transparent", but also definitely had elements of "not transparent enough".)

Anyway I agree that this particular type of transparency wouldn't help in most cases. But it doesn't seem hard to imagine cases, at least in the abstract, where it would kind of help? (e.g. imagine EA culture was pushing a particular lifestyle choice, and then it turned out the owner of the biggest manufacturer in that industry got invited to core EA events)

Thanks for resurfacing this take, Guy.

There's a trade-off here, but I think some attendees who can provide valuable input wouldn't attend if their names were shared publicly, and that would make the event less valuable for the community.

That said, perhaps one thing we can do is emphasise the benefits of sharing their name (it increases trust in the event and its leadership, and gives the community greater visibility into direction and influence) when they RSVP for the event. I'll note that as an idea for next time.

Thanks for sharing this, it does seem good to have transparency into this stuff.

My gut reaction was "huh, I'm surprised about how large a proportion of these people (maybe 30-50%, depending on how you count it) I don't recall substantially interacting with" (where by "interaction" I include reading their writings).

To be clear, I'm not trying to imply that it should be higher; that any particular mistakes are being made; or that these people should have interacted with me. It just felt surprising (given how long I've been floating around EA) and worth noting as a datapoint. (Though one reason to take this with a grain of salt is that I do forget names and faces pretty easily.)

Thanks! I think this note explains the gap:

Note that we have a few attendees at this year’s event who are specialists in one of our focus areas rather than leaders of an EA meta organization or team (though some attendees are both).

We were not trying to optimise the attendee list for connectedness or historical engagement with the community, but rather for who can contribute to making progress on our core themes: brand and funding. When you see what roles these attendees have, I think it's fairly evident why we invited them, given this lens.

I'll also note that I think it's healthy for there to be people joining for this event who haven't been in the community as long as you have. They can bring new perspectives and offer expertise that the community and organisational leaders have been lacking.

The one attendee that seems a bit strange is Kelsey Piper. She’s doing great work at Future Perfect, but something feels a bit off about involving a current journalist in the key decision-making. I guess I feel that the relationship should be kept slightly more at arm's length?

Strangely enough, I’d feel differently about a blogger, which may seem inconsistent, but society’s expectations about the responsibilities of a blogger are quite different.

Seems like she'll have a useful perspective that adds value to the event, especially on brand. Why do you think it should be at arm's length?

I think she adds a useful perspective, but maybe it could undermine her reporting?

It would be nice to see even just a comment here on what this group thought the biggest challenges were and how that compares to recent surveys, etc. Not super important to me, but if people upvote this, perhaps some weight should be given to even a very quick update along those lines.

I don't think I follow: which challenges, and which surveys?

We chose brand and funding as themes in part because attendees flagged these as two of the biggest challenges facing the community. Sorry, I should've made that clearer.
