
I am usually really curious to get a taste of the overall atmosphere and insights gained from EAGs or EAGx events that I don't attend. These gatherings, which host hundreds or even thousands of effective altruists, are valuable opportunities to exchange knowledge and offer a snapshot of the most pressing EA themes and current projects. I attended EAGxNordics as a student, and I will share my observations in bullet-point format. Thank you, Adash H-Moller, for great comments and suggestions. Other attendees are very welcome to add their experiences or challenge these perspectives in the comments:

Lessons learned:

  • The majority of participants seemed to come from (unsurprisingly) Sweden, Norway, Finland, Estonia, Denmark and the Netherlands: small countries with relatively tight-knit EA communities.
  • I was particularly impressed with the line-up of speakers from non-EA-labeled think tanks and institutes. I think this provides a strong benefit, especially to EAs who are quite familiar with EA but would not otherwise find out about these adjacent initiatives. It also reduces the extent to which we stay in our own bubble.
  • I talked to numerous participants of the Future Academy - who all learned about EA through that program. They shared great experiences in policy, entrepreneurship and education (from before they knew about EA) and I think they are a great addition to the community.
  • Attendees could be more ambitious, both in their conference experience and in their approach to EA. I spoke to too many students who had fewer than five 1-on-1s planned, even though these are regarded as one of the best ways to spend time at a conference. Also, regarding the career plans and EA projects I asked about, I would have loved to see bigger goals than the ones I heard.
  • I attended talks by employees of GFI, Charity Entrepreneurship and the Simon Institute. The things they had in common:
    • They work on problems that are highly neglected (one speaker quoted a podcast: “No one is coming, it is up to us”)
    • They do their homework thoroughly
    • A key factor in their impact is their cooperation with local NGOs, governments and intergovernmental organizations.
  • (Suggested by Adash) The talk by an employee of Nähtamatud Loomad about ‘invisible animals’ was great and provided useful insight into what corporate lobbying actually looks like on the ground - I think specific, object-level content is great for keeping us grounded.
  • There could be more focus on analyzing EA as a community and considering what EA needs more of or needs to do differently; I asked a few people exactly those questions.
  • A lot of people talked about AI safety.
    • I felt there was a large group of students who were excited about contributing to this field.
    • Participants with other backgrounds mentioned this as well, and multiple participants voiced a preference for more balanced content and narrative around topics like global development, animal welfare, etc.
    • (Suggested by Adash: N is small so take with a pinch of salt) I found the conversations I had with some early-career AI safety enthusiasts to show a lack of understanding of paths to x-risk and criticisms of key assumptions. I’m wondering if the early-stage AI field-building funnel might cause an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.

Comments (4)

Thanks for this feedback and insight!

There could be more focus on analyzing EA as a community and considering what EA needs more of / needs to do differently

I think I disagree here. In my opinion, past EAGx events have had too much focus on the EA community and I think the same can be said of this forum. I expect this is because many people (esp. newer members) have opinions about the EA community, whereas far fewer have expertise in object-level challenges.

I'm glad this event corrected for that. It's possible it over-corrected, but I'm not convinced.

I found the conversations I had with some early-career AI safety enthusiasts to show a lack of understanding of paths to x-risk and criticisms of key assumptions. I’m wondering if the early-stage AI field-building funnel might cause an echo chamber of unexamined AI panic that undermines general epistemics and cause-neutral principles.

I don't think people new to EA knowing little about the specific cause areas they're excited about is more true for AI x-risk than for other cause areas. For example, I suspect that if you asked animal welfare or global health enthusiasts who are as new as the folks into AI safety you talked to about the key assumptions behind different animal welfare or global health interventions, you'd get similar results. It just seems to matter more for AI x-risk, since having an impact there relies more strongly on having good models.

This is absolutely the case for global health and development. Development is really complicated, and I think EAs tend to vastly overrate how certain we are about what works best.

When I began working full time in the space, I spent about the first six months getting continuously smacked in the face by just how much there is to know, and how little of it I knew.

I think introductory EA courses can do better at getting people to dig deep. For example, I don't think it's unreasonable to have attendees actually go through a cost-effectiveness analysis by GiveWell and discuss the many key assumptions that are made. For a workshop I ran recently for a Danish high school talent programme, we created simplified versions which they had no trouble engaging with.

Participants with other backgrounds mentioned this as well and multiple participants voiced the preference for (a) more balanced content/narrative around topics like global development, animal welfare, etc.

Thanks for this feedback! FWIW I agree the balance was off here!