
This week, the CEA events team will be running the 2024 edition of the Meta Coordination Forum (MCF) in California. We’re bringing together ~40 people leading EA community-building organizations and projects to foster a shared understanding of some of the biggest challenges facing our community and to align on strategies for addressing them. This is a short post to give the wider community a sense of the event’s goals and who will be attending.

Goals and themes

The Meta Coordination Forum aims to:

  • Build a shared understanding among attendees of some of the biggest challenges facing the EA community.
  • Provide space for attendees to discuss, get feedback on, and develop strategies for addressing these challenges. 
  • Foster more collaboration among people leading EA meta-organizations and projects.

While we’re encouraging attendees to prioritize one-on-one meetings and private conversations, our structured sessions will focus on two key themes:

  • Brand: what’s the current state of the EA brand, what outcomes do we want, and how can we achieve them?
  • Funding: what’s the current state of the funding landscape in EA, what strategies should we use to diversify funding, and what steps should we take?

At the event, we’ll also be conducting a survey similar to those we ran in 2019 and 2023, and to those 80,000 Hours ran in 2017 and 2018. We’re partnering with Rethink Priorities on this survey. We hope it will provide CEA, attendees at the event, and the wider community with a better sense of the talent gaps that organizations face, as well as insights into some key questions facing the community.

Attendees

We invited attendees based on their ability to contribute to and implement strategies addressing our core themes. While we aimed for balanced representation across the meta work currently underway, our primary focus was on the individuals best positioned to drive progress on behalf of the community. We acknowledge that others might take a different approach to inviting attendees or have thoughts on who was omitted, and we welcome suggestions for future events.

Below is the list of attendees who’ve agreed to share that they’re attending the event. This list covers the majority of attendees; some preferred not to have their attendance made public.

Alexander Berger

Howie Lempel

Marcus Davis

Amy Labenz

Jacob Eliosoff

Max Daniel

Anne Schulze

Jessica McCurdy

Melanie Basnak

Arden Koehler

JP Addison

Michelle Hutchinson

Bella Forristal

JueYan Zhang

Mike Levine

Claire Zabel

Julia Wise

Nicole Ross

Devon Fritz

Karolina Sarek

Patrick Gruban

Eli Rose

Kelsey Piper

Simran Dhaliwal

Emma Richter

Lewis Bollard

Sjir Hoeijmakers

George Rosenfeld

Luke Freeman

Will MacAskill

Zachary Robinson

This is not a canonical list of “key people working in meta EA” or “EA leaders.” There are plenty of people who are not attending this event who are doing high-value work in the meta-EA space. Note that we have a few attendees at this year’s event who are specialists in one of our focus areas rather than leaders of an EA meta organization or team (though some attendees are both).

We’ll also encourage attendees to share their memos on the forum and think about other updates we can share that will aid transparency and coordination.

A note on comments: we’ll be running the event this week, so we won’t have capacity to engage in the comments. However, we will be reading them, and they can inform discussions at the event.

Comments (15)



Just a reminder that I think it's the wrong choice to allow attendees to leave their name off the published list.

This seems fine to me - I expect that attending this is not a large fraction of most attendees' impact on EA, and that some who didn't want to be named would not have come if they needed to be on a public list, so barring such people seems silly (I expect there are some people who would tolerate being named as the cost of coming too, of course). I would be happy to find some way to incentivise people being named.

And really, I don't think it's that important that a list of attendees be published. What do you see as the value here?

With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is. See also my explanation from last year.

...some who didn't want to be named would have not come if they needed to be on a public list, so barring such people seems silly...

How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.

The EA movement absolutely cannot carry on with the "let's allow people to do whatever without any hindrance, what could possibly go wrong?" approach.

How is it silly? It seems perfectly acceptable, and even preferable, for people to be involved in shaping EA only if they agree for their leadership to be scrutinized.

My argument is that barring them doesn't stop them from shaping EA, just mildly inconveniences them, because much of the influence happens outside such conferences.

With all the scandals we've seen in the last few years, I think it should be very evident how important transparency is

Which scandals do you believe would have been avoided with greater transparency, especially transparency of the form here (listing the names of those involved, with no further info)? I can see an argument that eg people who have complaints about bad behaviour (eg Owen's, or SBF/Alameda's) should make them more transparently (though that has many downsides), but that's a very different kind of transparency.

I think in some generality scandals tend to be "because things aren't transparent enough", since greater transparency would typically have meant issues people would be unhappy with would have tended to get caught and responded to earlier. (My case had elements of "too transparent", but also definitely had elements of "not transparent enough".)

Anyway I agree that this particular type of transparency wouldn't help in most cases. But it doesn't seem hard to imagine cases, at least in the abstract, where it would kind of help? (e.g. imagine EA culture was pushing a particular lifestyle choice, and then it turned out the owner of the biggest manufacturer in that industry got invited to core EA events)

Thanks for resurfacing this take, Guy.

There's a trade-off here, but I think some attendees who can provide valuable input wouldn't attend if their name was shared publicly and that would make the event less valuable for the community. 

That said, perhaps one thing we can do is emphasise the benefits of sharing their name (increased trust in the event/leadership, greater visibility for the community about direction/influence) when they RSVP for the event. I'll note that as an idea for next time.

The one attendee that seems a bit strange is Kelsey Piper. She’s doing great work at Future Perfect, but something feels a bit off about involving a current journalist in key decision-making. I guess I feel that the relationship should be kept slightly more at arm’s length?

Strangely enough, I’d feel differently about a blogger, which may seem inconsistent, but society’s expectations about the responsibilities of a blogger are quite different.

Seems like she'll have a useful perspective that adds value to the event, especially on brand. Why do you think it should be kept at arm's length?

I think she adds a useful perspective, but maybe it could undermine her reporting?

Thanks for sharing this, it does seem good to have transparency into this stuff.

My gut reaction was "huh, I'm surprised about how large a proportion of these people (maybe 30-50%, depending on how you count it) I don't recall substantially interacting with" (where by "interaction" I include reading their writings).

To be clear, I'm not trying to imply that it should be higher; that any particular mistakes are being made; or that these people should have interacted with me. It just felt surprising (given how long I've been floating around EA) and worth noting as a datapoint. (Though one reason to take this with a grain of salt is that I do forget names and faces pretty easily.)

Thanks! I think this note explains the gap:

Note that we have a few attendees at this year’s event who are specialists in one of our focus areas rather than leaders of an EA meta organization or team (though some attendees are both).

We were not trying to optimise the attendee list for connectedness or historical engagement with the community, but rather for who can contribute to making progress on our core themes: brand and funding. When you see what roles these attendees have, I think it's fairly evident why we invited them, given this lens.

I'll also note that I think it's healthy for there to be people joining this event who haven't been in the community as long as you have. They can bring new perspectives and offer expertise that the community and organisational leaders have been lacking.

It would be nice to see even just a comment here on what this group thought the biggest challenges were and how that compares to recent surveys etc. Not super important to me, but if people upvote this, perhaps some weight should be given to posting even a very quick update.

I don't think I follow, which challenges; which surveys?

We chose brand and funding as themes in part because attendees flagged these as two of the biggest challenges facing the community. Sorry, I should've made that clearer.
