
Summary

We ran three EA Global events in 2022, in London, San Francisco, and Washington, D.C. These conferences each had ~1300–1500 attendees and were some of our biggest yet:

  • These events received an average score of 9.02 on the question “How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own?”, on a scale of 0–10.
  • Those who filled out our feedback survey (which was a minority of attendees, around 1200 individuals in total across all three events) reported over 36,000 new connections made.
  • This was the first time we ran three EA Globals in one year since 2017; back then, the three events had only ~1200 attendees in total.
  • We hosted and recorded lots of new content, much of which is available on our YouTube channel.
  • This was the first time we ran an EA conference of any kind in D.C. We generally received positive feedback about this event from attendees and stakeholders.

Plans for 2023

  • We’re reducing our spending in a lot of ways, most significantly by cutting some meals and the majority of travel grants, which we expect may somewhat reduce the overall ratings of our events. Please note that this is a fairly dynamic situation and we may update our spending plans as our financial situation changes. 
  • We’re doing three EA Globals in 2023, in the Bay Area and London again, with our US East Coast event in Boston rather than D.C. In addition to EA Globals, there are several upcoming EAGx events; check out the full list of confirmed and provisional events below.
    • EA Global: Bay Area | 24–26 February
    • EAGxCambridge | 17–19 March 
    • EAGxNordics | 21–23 April 
    • EA Global: London | 19–21 May
    • EAGxWarsaw | 9–11 June [provisional] 
    • EAGxNYC | July / August [provisional]
    • EAGxBerlin | 8–10 September
    • EAGxAustralia | Late September [provisional]
    • EA Global: Boston | 27–29 October
    • EAGxVirtual | November [provisional]
  • We’re aiming for similar-sized conferences, though with the reduction in travel grants we expect the events to be somewhat smaller, perhaps around 1000 people per EA Global.
  • We recently completed a hiring round and now have ~4 FTEs working on the EA Global team.
  • We’ve recently revamped our website and incorporated it into effectivealtruism.org — see here.
  • We’ve switched over our backend systems from Zoho to Salesforce. This will help us integrate better with the rest of CEA’s products, and will hopefully create a smoother front and backend that’s better suited to our users. (Note that the switchover itself has been somewhat buggy, but we are clearing these up and hope to have minimal issues moving forwards.)
    • We’re also trialing a referral system for applications, where we’ve given a select number of advisors working in EA community building the ability to admit people to the conference. If this goes well we may expand this program next year.

Growth areas

  • Food got generally negative reviews in 2022:
    • Food is notoriously hard to get right: quality can vary a lot between venues, and we often have little or no choice between catering options.
    • We’ve explored ways to improve the food quality, including hiring a catering consultant, but a lot of these options are cost prohibitive, and realistically we expect food quality to continue to be an issue moving forwards.
  • Swapcard (our event app) also got generally negative reviews in 2022:
    • We explored and tested several competitor apps, though none of them seemed better than Swapcard.
    • We explored working with external developers to build our own event networking app, but eventually concluded that this would be too costly in terms of both time and money.
    • We’ve been working with Swapcard to roll out new features and fix bugs (this will also make the app better for other events, including EAGx conferences).
  • Some parts of our processes were slower and less professional than we’d like, though we’re working to improve this:
    • We launched applications for 2023 later than we would have liked, partly due to delays in building out our new backend application system.
    • In the past, we were slower to approve applications and respond to emails than we would have liked, though we’ve gotten better at this and expect to improve further as we bring on new staff.
    • We could have been better about communicating in general. Aspects of our events change each year and it often takes a while for people to internalize these changes if we don’t communicate them well. For example, we introduced travel grant funding for attendees a while back, but it took a while for people to really realize it was there and start using it (though we’ve now cut this funding down).

Other things we’d like to do if we had capacity, but expect we won’t focus on in 2023:

  • Active outreach and stewardship — working to get promising people who might not have EA Global on their radar to come, and actively pairing them or other attendees up with potentially valuable meetings.
  • Organizing satellite events — we do this a bit, and many community members do this too, but we expect there’d be more value to capture here if we had time.
Comments (12)



Would love more demographic data on who applied and who was admitted (in aggregate) if you're willing to share.

Thanks for the nudge! We've now posted more information here

Swapcard seems to have significantly improved since the last EAG. You can now view your one-on-ones and the events you’re attending all in one place. I suspect that if we keep submitting feedback, they’ll eventually fix the flaws.

Thanks for letting us know, Chris :) Progress has been slow but it's great to see attendees noticing it!

Great to see this announcement! Curious to hear if there are any plans for a Washington DC-based event?

This year we're doing our East Coast EA Global in Boston, but we're pretty open to shifting it back to DC in the future.

One type of event I'm provisionally excited about is a more introductory EA conference in DC targeted at mid-to-late career folks. Kinda like an EAGx but maybe more "professional" (everyone wears a suit and tie type of vibe). My sense from doing EA Global in DC was that there could be a fair amount of demand for something like this, but at this stage this is more of a vague idea than something I think we're likely to organize any time soon.

I could definitely see the East Coast EAG alternating between Boston and DC every other year. I have nothing against Boston; I think it's also a great place for an EAG, and I realize it's a very difficult choice if you can only pick one.

The idea of a professional suit-and-tie EAGxDC with significant policy engagement (perhaps not even branded as "EAG" at all but as something else) is pretty appealing to me.

Fwiw, just to state it publicly: We hope that if EAGxNYC goes well, NYC can serve as the location for an EAG in future years and I think there are many compelling reasons to have NYC as a primary EAG location.

I definitely agree that NYC is a very compelling location too. Best of luck with EAGxNYC and I'll see if I can attend.

Thanks for all y’all do - sharing stats publicly like this is really helpful.

Any plans for more legible / objective criteria on who is accepted versus rejected?

Also, has your team done any reflection on calls to open up EA Global or create a more community-focused, less gated conference?

Thanks for sharing the stats! A minor thing you could consider when charting quantities on very different scales, as you have here, is putting one series on the left axis and the other on the right. For example, connections and attendance could be perfectly correlated, but I can't tell because the blue bars are so small.
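In case it's useful, here's a minimal sketch of that dual-axis idea using matplotlib. The attendance and connection numbers below are hypothetical placeholders, not the actual figures from the post:

```python
import matplotlib.pyplot as plt

events = ["London", "San Francisco", "Washington, D.C."]
attendance = [1400, 1500, 1300]       # hypothetical attendee counts, not the real figures
connections = [12000, 14000, 10000]   # hypothetical connection counts, not the real figures

fig, ax_left = plt.subplots()

# Plot attendance (the smaller quantity) as bars against the left axis
ax_left.bar(events, attendance, color="tab:blue", label="Attendance")
ax_left.set_ylabel("Attendance")

# Plot connections (the much larger quantity) as a line against a twinned right axis
ax_right = ax_left.twinx()
ax_right.plot(events, connections, color="tab:orange", marker="o", label="Connections reported")
ax_right.set_ylabel("Connections reported")

ax_left.set_title("Attendance vs. connections by event (hypothetical data)")
fig.legend(loc="upper left")
fig.tight_layout()
plt.show()
```

With the two scales separated like this, it's much easier to see whether the two series move together across events.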


Hi @Eli_Nathan. Apart from those already officially announced or listed in the post above, are conferences specifically in Asia, Latin America or Africa planned for this year? Thanks so much, Pia
