
This post was updated on February 9 to reflect new dates. We changed the post date to reflect this.

CEA will be running and supporting conferences for the EA community all over the world in 2022.

We are currently organizing the following events. All dates are provisional and may change in response to local COVID-19 restrictions.

EA Global conferences

  • EA Global: London (15 - 17 April)[1]
  • EA Global: San Francisco (29 - 31 July)
  • EA Global: Washington, D.C. (23 - 25 September)

EAGx conferences

  • EAGx Oxford (26 - 27 March)
  • EAGx Boston (1 - 3 April)
  • EAGx Prague (13 - 15 May)
  • EAGx Australia (8 - 10 July)
  • EAGx Singapore (2 - 4 September)
  • EAGx Berlin (September/October)

Applications for the EA Global: London, EAGx Oxford, and EAGx Boston conferences are open! You can find the application here.

If you'd like to add EA events like these directly to your Google Calendar, use this link.

Some notes on these conferences:

  • EA Global conferences are for people who are knowledgeable about the core ideas of effective altruism and are taking significant actions (e.g. work or study) based on these ideas.
  • To attend an EAGx conference, you should at least be familiar with the core ideas of effective altruism.
  • Please apply to all conferences you wish to attend once applications open — we would rather get too many applications for some conferences and recommend that applicants attend a different one, than miss out on potential applicants to a conference.
  • Applicants can request financial aid to cover the costs of travel, accommodation and tickets.
  • Find more info on our website.

As always, please feel free to email hello@eaglobal.org with any questions, or comment below.


    1. Options for the timing of this conference were extremely constrained, and we realize that the date is not ideal. We chose the date after polling a sample of potential attendees and seeing which of the available dates might work for the most people, but understand that the weekend we chose will not be possible for everyone. If you would have liked to attend EA Global: London 2022, but can’t because of the dates (e.g. for religious reasons), please feel free to let us know. ↩︎

Comments

Note:
This is just a small thing, so I feel like I'm being annoying mentioning it. "Summer" doesn't mean the same thing for those in the Southern hemisphere in particular, but also for those in the tropics. Months are much more universal.

I totally support you posting about this, I bet lots of other people would feel the same way!

Thanks for pointing this out! You're right, and I've edited the post to clarify.

Wanted to flag that the EAG website still says "summer" and "fall".

Good spot.

If we want to avoid seasons but also be vague, quarters (e.g. "Q2 2022") could work?

I'd still be confused by that notation tbh. I only recently joined a company and the last time I worked in industry was 2 years ago, so I am very unaccustomed to hearing it. Plus, the meaning of "Q2" is company-specific, so you would have to define it when using it for the first time.

I think calendar quarters (e.g. Q1 = Jan/Feb/Mar) are fairly widely used and understood?

In any case, the EAG organisers need some notation to indicate that they're hoping to hold an event during a rough period (e.g. summer) but don't have a specific date (or even month) yet. If seasons are no good, we need some alternative.

Thanks for flagging this! I'll update the website with "likely June, July, or August" and similar language for now. 

aog

Will there be an EAG Virtual? Huge fan of those and might not be able to make any in person. Might be a good contingency plan with Omicron too!

I'd be keen to see an EAG Virtual as well and would prefer this over hybrid.

We've exchanged preliminary thoughts on EAGxVirtual with Lizka and agreed that it would be better to run a completely virtual conference next year instead of a hybrid one. That would let us take full advantage of the virtual format instead of mirroring the in-person conferences. The EA Anywhere team would be happy to contribute to organizing this. We're in touch with CEA, but not ready to announce anything yet.

Yeah, I think it makes more sense to hold the virtual event on different dates from the main conferences, as I know that if I'd paid money to fly out to a conference, I would be heavily focused on in-person meetings.

An event for everyone is an event for no one.

Wow, that's a lot of events! Very excited for an east-coast EAG!

I love that there are multiple top-level EA Globals this year! (East Coast, West Coast, and UK)

Thanks! The last time we hosted three (Bay, UK, East Coast) was in 2017. I’m very excited to do it again.

We planned to host two in 2020 but had to cancel the SF event due to COVID, and we only hosted one in-person EAG in 2021 (with the EA Picnic as a smaller in-person event this year).

It's been like that for a while.

Very excited to join EA Global: London again! Last year was amazing! I really appreciate the dates, especially because it's not going to be so cold, as I live in Brazil. Looking forward to joining the rescheduled EAGx Prague. Wishes of a happy and healthy 2022 to everyone!

Taiwan (a.k.a. the People's Republic of Chinese Taipei) is missing from the nationality form on the application page 🤔

Perhaps worth opening a shared Google Calendar for EA events?

(I might open an unofficial one, let me know if you're interested)

Hi Yonatan,

We've already made one, here. I'll suggest adding this to the post.

Thank you very much, adding it!

Suggestion: It is currently named "Events calendar".  Maybe better: "EAG" ?

(All calendars are "events calendars")

Good point, thank you! We're changing it. :) [Edit: Ollie and I responded at the same time. Oops.]

These are, of course, the only events that really matter but you're totally right. We'll change it! Thanks for spotting.

It would be useful if the application website told you which conferences you've already applied to so you can avoid submitting the same application twice or worrying if you've forgotten to submit it.

You should get an email with your submitted content for each application to a conference.

Personally, this "receipt" with my content is super useful when applying to successive conferences, at least until they figure out I'm an imposter.

(Sometimes it's slightly confusing to search your inbox; you might need to search for hello@eaglobal.org.)

EAGxBerlin (16 - 18 September) - applications seem to close 1 September.

https://www.eaglobal.org/events/eagxberlin-2022/
