
Tl;dr: This weekend (15-17 November), we will broadcast 30+ hours of live EAGxVirtual 2024 speaker talks! Everyone, including those who aren’t registered for the conference, can watch.

View the schedule and watch live here. Comment on this post to discuss the livestream.

This is part of our goal of making content and opportunities accessible to the global EA community. 

The livestream is only a portion of the programming available to registered attendees. Full access includes 20+ office hours, 25+ meetups, workshops, and access to Swapcard networking with 1000+ attendees. Sign up to get notified about the next virtual conference. 
 

———
 

In our announcement post, we explained why we are hosting a virtual event: to connect the global EA community. We’ll do this by providing access to EA content, enabling attendees to meet others through Swapcard 1-on-1s and cause area/affinity group meetups, giving visibility to high-impact opportunities, and offering other unique programming.

In our update post, we shared a preview of the programming, highlighting a few speakers, the organisation fair, meetups, the lightning talk session, and the mentorship program.

 

What we are excited about this weekend

Welcoming EA community members from 86 countries

  • 27 attendees are the sole registrants from their countries, so this may be one of the best opportunities they have to engage with EA ideas and the community.
     

Seeing a record 100+ content sessions

  • While in-person EAG events emphasize 1:1s, we are less certain this is the best focus for EAGxVirtual. Many attendees are newcomers to EA, so we’ve designed programming to help them start engaging with ideas and connect with their regional communities. We don’t have all the answers yet and hope to learn more in the coming weeks.

 

Why we’re publicly livestreaming the talks

EAGxVirtual 2024 offered free registration, so it might seem like everyone interested would already be signed up.

We’ve actually received emails from prospective attendees asking whether they should attend. For every question we received, there are likely many others with the same doubts. Here are some hesitations that people have expressed:

  • They cannot commit to the entire weekend or confirm their schedule in advance
  • They feel imposter syndrome or don’t feel qualified enough to attend
  • They don’t want to take the place of others
  • They are uncertain about the experience level or age of attendees
  • Past online events didn’t meet their expectations

We understand that people have different work and life circumstances. So in the interest of accessibility and transparency, we decided it was worthwhile to trial publicly streaming the live talk sessions. We welcome your feedback on this decision.

This does not provide access to the other portions of the conference, such as the virtual meetups, 1:1 booking function in Swapcard, or office hours. 

Recordings of the livestreamed talks will be available on Swapcard immediately after each session ends.

Watch the livestream

 

Huge thanks to the EA Community for supporting us

Although the conference hasn’t ended, we’re already so grateful to the local EA groups, partner organizations, volunteers, and community members making this weekend possible.

Here are a few screenshots from EA Manchester, AE Brasil, EA Christians, and EA Hong Kong 💙

 

How to participate

We hope that EAGxVirtual 2024 will help to seed conversations, collaborations, and new opportunities, even for people who don't attend.

  • For registered attendees:
    • Besides watching the content sessions, we encourage you to attend meetups, book 1:1s, and engage with other attendees
    • Share your experience in this post's comments; it may give guidance or confidence to someone else in a similar situation

  • For people watching the livestream only:
    • Comment on this post to discuss the livestream
    • Sign up to get notified about the next virtual conference
 

Comments (7)



A number of people invited me to 1:1s to ask me for career advice in my field, which is software engineering. Mostly of the "how do I get hired" kind rather than the "how do I pick a career path that's most in line with EA strategic priorities" kind that 80,000 Hours specializes in. Unfortunately I'm not very good at this kind of advice (I haven't looked for a new job in more than eight years) and haven't been able to find anywhere else I could send people to that would be more helpful. I think there used to be an affinity group or something for EA software engineers, but I don't think it's active anymore.

Anyone know of anything like this? If not, and if you're the kind of person who's well-positioned to start a group like this, consider this a request for one.

You are right, the EA Software Engineers group is no longer active. Its virtual events were quite useful, and you can still access the recordings and slides here.

EA Data Science group hosts events sometimes, and their channel on EA Anywhere Slack is pretty active.

In addition, I used to lead the EA Public Interest Tech Slack community, which was subsequently merged into the EA Software Engineers community (the Discord for which still exists btw). All of these communities eventually got merged into the #role-software-engineers channel of the EA Anywhere Slack.

I think there was too much fragmentation among slightly different EA affinity groups aimed at tech professionals; there was also the EA Tech Network for folks working at tech companies, which I believe was merged into High Impact Professionals.

I'm not sure why the EA SWE community dissipated after all the consolidation that occurred. I think the lack of community leadership may have played a role. Also, it seems like EA SWEs are already well served by other communities, including AI safety (for which a lot of SWEs have the right skills) and effective giving communities like Giving What We Can (since many SWE roles are well-paid).

Lingering thoughts on the talk "How to Handle Worldview Uncertainty" by Hayley Clatterbuck (Rethink Priorities):

The talk proposed several ways that altruists with conflicting values can bargain in mutually beneficial ways, like loans, wagers, and trades, and suggested that the EA community should try to implement these more in practice and design institutions and mechanisms that incentivize them.

I think the EA Donation Election is an example of a community-wide mechanism for brokering trades between multiple anonymous donors. To illustrate this, consider a simple example of a trade, where Alice and Bob are donors with conflicting altruistic priorities. Alice's top charity is Direct Transfers Everywhere and her second favorite is Pandemics No More. Bob's top charity is Lawyers for Chickens, and his second favorite is Pandemics No More. Bob is concerned that Alice's donating to Direct Transfers Everywhere would cancel out the animal welfare benefits of his donating to Lawyers for Chickens, so he proposes that they both donate to their second choice, Pandemics No More.

The Donation Election does this in an automated, anonymous, community-wide way by using a mechanism like ranked-choice voting (RCV) to select winning charities. (The 2024 election uses RCV; the 2023 election used a points-based system similar to RCV.) Suppose that Alice and Bob are voting in the Donation Election—and for simplicity, we'll pretend that the election uses RCV. If their first-choice charities (Direct Transfers Everywhere and Lawyers for Chickens) are not that popular among the electorate, those candidates will be eliminated, and Alice and Bob's votes reallocated to Pandemics No More. This achieves the same outcome as the trade in the previous example automatically, even though Alice and Bob may not have ever personally met and agreed to that trade.

Update: The 2024 Donation Election is using straight-up ranked-choice voting; details here.
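
To make the reallocation mechanics concrete, here is a minimal sketch of instant-runoff counting in Python. This is not the actual Donation Election code; the ballots are hypothetical and reuse the illustrative charity names from the example above:

```python
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly eliminate the charity with the fewest first-choice votes
    and reallocate its ballots, until some charity holds a majority."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)  # count first choices
        leader, votes = tally.most_common(1)[0]
        if 2 * votes > sum(tally.values()) or len(tally) == 1:
            return leader
        loser = min(tally, key=tally.get)
        # Reallocate: drop the eliminated charity from every ballot, so each
        # affected ballot's next preference becomes its new first choice.
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical ballots: Alice and Bob rank their own favorites first and
# Pandemics No More second; two other voters back Pandemics No More outright.
ballots = [
    ["Direct Transfers Everywhere", "Pandemics No More"],  # Alice
    ["Lawyers for Chickens", "Pandemics No More"],         # Bob
    ["Pandemics No More"],
    ["Pandemics No More"],
]
print(instant_runoff(ballots))  # -> Pandemics No More
```

With these ballots, Direct Transfers Everywhere is eliminated first and Alice's vote flows to Pandemics No More, giving it a majority: the same outcome as the Alice-Bob trade, reached without the two ever coordinating directly.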

Making EAGxVirtual more accessible has been our aspiration since 2022, and I'm excited we reached a new milestone with the public livestream!

Despite the common advice to "focus on 1:1s, you can always watch talk recordings later", I've found that most people (including myself!) only watch talks if they attend them live. And a virtual conference allows us to invite speakers from anywhere in the world who wouldn't be able to present at an in-person EAGx.

Here are some sessions I'm personally excited about:

You can access recordings of past talks and explore the agenda here.

Several talks aim to provide frameworks and considerations for approaching career choice.

Career-related talks:

Cause area-specific career talks:

 

Several interactive workshops are available to EAGxVirtual 2024 attendees only:

  • Career Impact Workshop: Finding a Role That's Good For You and Good For the World
  • More Than the Obvious: Unexplored Paths to High-Impact Careers
  • Career transition strategies