Update (3/15/20): I haven't heard much interest about this project in general, and I don't think my list of events is being used by many (if any) groups. Therefore, I'm not planning to update the calendar going forward, though I'll leave the current version available.


A year ago, I created an early version of an "EA Calendar" -- a Google Calendar featuring events that are relevant to the wider EA community.

My goal is for the calendar to cover events and holidays that a wide range of EA groups could celebrate (like Petrov Day).

There may be other things that such a calendar should include, like conferences or tax deadlines, but before I do the work of handling all that, I wanted to gauge whether people think such a resource would even be valuable, whether it would be better to maintain elsewhere, and so on.


If you know of an event which you think would fit, and want me to add it to the calendar, please add a comment to this post or send me an email.

If you'd like to add the EA Calendar to your own GCal, here's the link.

If you don't use GCal, you can still use the "agenda" mode at that link to see upcoming events. (There are also lots of online tools for syncing GCal and other calendars.)


Finally, if you have any other suggestions to improve the calendar's usefulness, I'd love to hear them!


Events currently marked on the calendar (as of 30 July 2019):

Note: Holidays that change their dates annually may not have been updated recently; I'll get to them before the next holiday on the calendar rolls around.


Days I'm considering:

  • Birthdays of living people (e.g. Peter Singer), though that seems much odder than celebrating the birthday of a non-living person
  • Holocaust Remembrance Day (not so much because it's about a horrible event as because it's the closest thing I could find to a holiday honoring the Righteous Among the Nations)
Comments (6)

This is a cool idea.

Perhaps William Wilberforce's birthday? The abolition of slavery is probably one of the biggest improvements in world history, and has many parallels for contemporary EA issues.

I would consider replacing Contraception Day (which might be good but is not a canonical EA cause, and is at least prima facie in conflict with the Total View) with an explicitly somber day (similar to Yom Kippur), like Holocaust Memorial Day or the anniversary of the bombing of Hiroshima.

Possibly some space-related day could strike a nice optimistic note, like the anniversary of the first human spaceflight, or the Moon landing.

You could also have the founding of GWWC.

Thanks for these suggestions! The founding of GWWC might work well, as the "birthday" of a living organization rather than a living person.

Ever considered updating this? I think EAs everywhere should consider these days more!

Winter solstice / summer solstice? Popular secular holiday in EA circles (though not strictly EA per se)

What ideas do you think ought to be part of regular cycles of EA thought? And then, how can we find holidays to fit them?

I don't think anyone really uses this calendar, and I haven't updated it in a long time (see the update at the beginning of the post).

If you have ideas about this, you may want to try making + proposing your own calendar to see if you can get users.
