
This post is part of the Community Events Retrospective sequence.

Earlier this year, I surveyed ~500 people who had attended an EAGx event or an event funded by the Community Events Programme. The key sources of value that attendees report getting are: 

  • New connections in their network;
  • Learning about new career options; and
  • Learning about EA cause areas.

This finding is quite rough, and we plan to get more clarity on the sources of value from our events in future surveys.

Results

In the survey, I asked attendees about routes to valuable outcomes from our events,[1] and then coded attendee responses into the following baskets:[2]

  • Connections
  • Learning about new career options 
  • Learning more about EA cause areas or taking them more seriously
  • Increased motivation or positivity for EA broadly
  • Increased motivation or positivity for EA-aligned work
  • Finding specific sessions interesting or valuable
  • More likely to attend other EA-aligned events/opportunities
  • Inspired or motivated to take a specific action
  • Strengthening existing connections
  • Making friends or social connections
  • Event enjoyment (just having a really positive experience)


These are the results:

A key limitation of this analysis is that we explicitly asked attendees to report how connections were valuable to them, meaning the “Connections” response is likely artificially high. Learning about career options and EA cause areas (incl. via sessions) came through as a frequent source of value for our attendees. I had expected more attendees to report general positivity about the event, but this appeared far less frequently than learning or connections.

Some of these options have significant overlap. To get a slightly clearer picture of how “learning” in general might compare to “connections” in general, I grouped options together into the following baskets:

  • Connections (“Connections”, “Strengthening existing connections” and “Making friends or social connections”)
  • Learning (“Learned about new career options”, “Finding specific sessions interesting or valuable” and “Learning more about EA cause areas or taking them more seriously”)
  • Motivation / Positivity (“Increased motivation or positivity for EA broadly” and “Increased motivation or positivity for EA-aligned work”)
  • Action (“More likely to attend other EA-aligned events/opportunities” and “Inspired or motivated to take a specific action”)
  • Other (“Event enjoyment” and “Other”).
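As a sketch of how this grouping works in practice, the mapping from fine-grained categories to baskets can be expressed as a simple lookup table. The category and basket names below are taken from the post; the list of coded responses is purely illustrative, since the underlying survey data isn't public.

```python
from collections import Counter

# Illustrative coded responses only; the real dataset is not reproduced here.
coded_responses = [
    "Connections",
    "Learned about new career options",
    "Strengthening existing connections",
    "Finding specific sessions interesting or valuable",
    "Increased motivation or positivity for EA broadly",
    "Event enjoyment",
]

# Grouping described in the post: each fine-grained category maps to one basket.
baskets = {
    "Connections": "Connections",
    "Strengthening existing connections": "Connections",
    "Making friends or social connections": "Connections",
    "Learned about new career options": "Learning",
    "Finding specific sessions interesting or valuable": "Learning",
    "Learning more about EA cause areas or taking them more seriously": "Learning",
    "Increased motivation or positivity for EA broadly": "Motivation / Positivity",
    "Increased motivation or positivity for EA-aligned work": "Motivation / Positivity",
    "More likely to attend other EA-aligned events/opportunities": "Action",
    "Inspired or motivated to take a specific action": "Action",
    "Event enjoyment": "Other",
    "Other": "Other",
}

counts = Counter(baskets[r] for r in coded_responses)
print(counts)
```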
     

These are the results using the grouped baskets:


With this grouping, which is more favourable to the “Learning” basket (previously split into frequently cited categories), it’s striking that this basket outperforms Connections, despite the survey explicitly prompting attendees to report on connections. 

This updates me towards thinking that learning (about career options or ideas) is a large source of value from EA community-building events. Like connections, it’s a source of value whose tangible downstream outcomes will be difficult to track: attendees might well form a deeper understanding of a key EA cause area, take action later on, but never report the event as a pivotal moment in their journey.

That said, providing opportunities for learning might not be the comparative advantage for events. Information about EA-aligned career opportunities and ideas can be found online (e.g. via the 80,000 Hours website, the 80,000 Hours podcast[3], the EA Forum or via the many EA newsletters) whereas in-person events are one of the few ways EA community members can meet other community members in person and build their network. This weakly suggests that connections might still be the source of value event organisers should focus on.
 


My thanks to Callum Calvert, Jona Glade, Michel Justen, Sophie Thomson, Oscar Howie, Ben West, Eli Nathan, Ivan Burduk and Amy Labenz for comments and feedback.

 

  1. ^

     Specifically, we asked how many new connections attendees made, “How were these new connections valuable to you, specifically?” and “What other sources of value did you get from attending the event, which aren’t captured by connections?”.

  2. ^

    My thanks to our virtual assistant, George Go, who actually coded all of the answers with my input and feedback.

  3. ^

    Conflict of interest: my fiancé, Luisa Rodriguez, is a co-host of this podcast.

Comments



As I understand it, all this data about the impact of events is collected through surveys that attendees fill in immediately after an event. I think this might introduce some biases. For example, maybe attendees get excited about new connections they made and think that they will collaborate, but then never do. If it's not done already, one way to somewhat mitigate this bias would be to also ask in the annual EA Survey about the impact of EA events (that year, and in their lifetime). I wonder if conclusions like the one in this article would hold up.

all this data about the impact of events is collected through surveys that attendees fill in immediately after an event


No, it's actually from surveys that were filled in 3–8 months after the events took place. Sorry that wasn't clear.

I have added the results from the EA Survey below

Connections actually seem somewhat more important in the EAS results than in the results reported here. That said, that could be partly because the analysis in this post combined more things into the "Learning" option (e.g. “Finding specific sessions interesting or valuable” and “Learning more about EA cause areas or taking them more seriously”), whereas the original survey question just asked about whether they "chang[ed] their mind or learn[ed] something important regarding their path to impact."

Another difference between the EAS analysis and this one, is that the EAS asked about the most important new Learning/Connection, rather than whether people received any new Learning/Connection from a given source. So it is possible that the events account for a disproportionately large number of people's most important new connections (per the EAS analysis), but that people are nevertheless also receiving a comparable number of new Learnings.

Given that these community events help with learning and career connections the most, how do you feel about opening them up to people who work on EA causes (AI safety, for instance) but are not well-versed in the EA landscape? Familiarity with EA ideas, like longtermism, is a major part of the applications for EAGs and EAGx's. I think this gatekeeps opportunities from other talented, smart and efficient people who are working on EA causes without being affiliated with EA ideas. For instance, someone working on AI safety research who has not taken any EA courses, attended any EA events, or read any of the books that define EA ideas, but is still interested in finding job opportunities that will help them maximize their impact within their field. Perhaps I'm wrong in thinking that being EA-informed is given more weight in the selection process, but it comes across that way!

We're very excited to accept people with experience in EA cause areas to EA Global and EAGx events, and usually weigh this quite a lot when considering an application. A common type of application that we accept is someone who's working in an EA cause area and is curious to learn more about the community.

It's true that we ask about engagement with EA, but that isn't the only thing we consider, far from it.

it’s striking that this basket [Learning] outperforms Connections, despite the survey explicitly prompting attendees to report on connections. This updates me towards thinking that learning (about career options or ideas) is a large source of value from EA community-building events.

 

In the EA Survey, we also found a surprisingly small gap between learning something important and making a new connection, in cases where one might have expected an influence to strongly favour one or the other. (For example, it was mentioned to me that it was surprising how many connections the EA Forum produced.)

I think these results are probably partly due to measurement error (specifically, I think a general 'positive feeling about the influence' factor makes people more likely to attribute either learning or connection to that influence), but not entirely.

It's perhaps also worth noting that in these results EAG and EAGx come out with more Connections than Learnings (not weighted for importance), though they also perform appreciably well compared to other sources for Learnings. 

I think these earlier results also confirm the importance of scale noted in your post: the dominant factors just seem to be those operating at much larger scale, rather than smaller, more targeted interventions (a recurring theme across our results), and the difference in scale between different EA projects (in terms of how many people they reach) varies enormously:

Thanks for sharing, David!

Hey David, your observation that 'the dominant factors just seem to be those operating at much larger scale, rather than smaller, more targeted interventions' is a recurring theme is very interesting! 

To check I understand, are you saying that things like websites, podcasts, and other scalable things seem to be having much more impact than things such as 1-1s? I'm asking because we at EA Netherlands sometimes wonder how much we ought to be investing in our website vs smaller and more targeted interventions. 

are you saying that things like websites, podcasts, and other scalable things seem to be having much more impact than things such as 1-1s?

 

Basically, though I'm thinking more about the scale of programs (i.e. whether a program actually reaches a large scale) than about the scalability of kinds of program.

We observe that the factors which are cited by the largest numbers of people as being among the most important influences on them are those with very large reach, i.e. those which many EAs have been exposed to (e.g. 80K website), followed by those with smaller reach (e.g. EA Forum), and then by smaller programs. 

Indeed, when we look at the association between the number of people in the EA Survey reporting having interacted with a given factor in the last 12 months and indicating that it was one of the most important influences on their ability to have a personal impact, we observe a very strong correlation (r=0.8 [0.5–0.9], p=0.001). That might seem truistic, but I think (per the OP's other post), many people expect some smaller, more targeted programs to be dramatically more influential than more 'mass' outreach.
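The kind of correlation described above can be sketched in a few lines of pure Python. The exposure and impact counts below are entirely hypothetical placeholders (the real per-factor EA Survey figures are not reproduced here); the point is only to show the computation.

```python
import math

# Hypothetical per-factor counts, for illustration only:
exposures = [5000, 2000, 800, 300, 100]  # people reporting interaction with the factor
impacts = [1200, 500, 150, 40, 10]       # people citing it as a most important influence

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(exposures, impacts)
print(round(r, 2))
```

With made-up counts that rise together like these, r comes out close to 1; the survey's reported r=0.8 reflects a strong but less perfect relationship in the real data.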

Some important qualifiers:

  • Obviously this is looking at reported exposures among people who are taking the EA Survey. It should not be taken to capture all exposures (e.g. many people could view the 80K or GiveWell websites and never have any further interaction with EA), so this shouldn't be taken to indicate a general ratio of impacts per exposure.
  • This data comes from two separate questions with different numbers of respondents (so it's possible for an influence to have more impacts than exposures) which is another reason why it shouldn't be taken as giving a literal exposures:impacts ratio.
  • This is looking at numbers of people citing the factor as among the most important influences on their ability to have an impact, but these are not weighted. So this is compatible with certain influences being much more important than others, though as noted, the post linked above may give some reasons counting against that.
  • This strong correlation is compatible with some large differences (visible on the plot) between influences which have similar numbers of exposures but which lead to very different numbers of impacts.

That said, I think this still serves as a somewhat useful illustration of the extent to which the factors with the largest number of impacts are those with the largest number of exposures among EAs. Given the very large differences in scale between programs, smaller programs need to be getting a dramatically higher number (or higher value) of hits to compete with the larger influences.

I'm asking because we at EA Netherlands sometimes wonder how much we ought to be investing in our website vs smaller and more targeted interventions. 

When it comes to thinking about EAN's particular options, I would add some additional caveats: 

  • The differences in scale between the largest programs (e.g. the most prominent EA websites / podcasts) and the smaller programs, discussed above, may differ when comparing a possible EAN website / podcast / smaller program (e.g. the likely differences in scale may be a lot smaller). And in your particular situation, you might be dramatically better placed than others to run a website/podcast/other program.
  • In other data, we observe that the impact of different factors like websites and podcasts is very skewed, e.g. the most effective podcast reaches many more EAs than any of the others. So even if podcasts as a category operate at large scale, the typical new podcast/website might be expected to have much smaller reach, more comparable to a less scalable kind of program.

Thanks so much for the detailed reply! This is very helpful :)

A quick note to say that I'm taking some time off after publishing these posts. I'll aim to reply to any comments from 17 July.

Thanks for posting! I'll consider whether it'd be helpful for me to include replications of these questions to https://www.leaf.courses/ participants for comparison. Let me know if it'd be helpful to you somehow!
