I have heard from people who are uncertain about whether EA community building is the right move for them, given the increased prominence of AI Safety. I think that EA community building is the right choice for a significant number of people, and I want to lay out why I believe this.
I’m excited to see AI Safety specific community building and I hope it continues to grow. This piece is not intended to claim that no-one should be working on AIS community building. That said, CEA’s groups team sits within an EA organisation, not an AI Safety organisation. I hope we can collaborate with AI Safety groups, as:
TLDR: I've assembled a mindmap of all EA (-related) entities I could find. You can access it at tinyurl.com/eamindmap. If I missed or misrepresented anything: leave a note (in the mindmap) and I'll add it or correct it. I've also added other existing lists of orgs to this post.
When I started my position as co-director at EA Netherlands, EA Pathfinder (now Successif) advised me to make an overview of the EA space: it would make it easier to onboard, to support community members, and to follow conversations.
I made this mindmap of EA(-related) entities, and it quickly got out of hand. I asked around whether such overviews already existed. There were a few:
Overview of EA organizations[1]
I wish it were possible to disable agree votes on comments that aren't making any claim or proposal. When I write a comment saying "thank you" or "this has given me a lot to think about" and people agree vote (or disagree vote!), it feels odd: there isn't even anything to agree or disagree with there!
A useful 1D projection for regulatory frameworks is principles vs. rules-based regulation.
Principles are high-level, overarching objectives that the regulatory framework seeks to achieve. They are usually broad and relatively abstract, which means they can be applied to a wide range of specific situations. For example, we can take a look at the UK Financial Conduct Authority (FCA)’s “Principles for Business”, which contains principles such as:
A firm must conduct its business with due skill, care, and diligence
A firm must pay due regard to the interests of its customers and treat them fairly
A firm must manage conflicts of interest fairly, both between itself and its customers and between a customer and another client.
On the other hand, rules-based regulation attempts to set up numerous concrete rules that cover as...
Online discussion. Newcomers welcome!
Please register here to attend.
Are there concepts in Effective Altruism you want to explore more fully? Or do you have particular comments or insights about certain topics? Join our foundational topics group and discuss a different theme each month!
This June, we will discuss the concept of Crucial Considerations. We will explore how this concept has been used, review the benefits and critiques, dive into complicating factors with the idea, and more!
****
EA NYC:
Please find all EA NYC event information - including our Code of Conduct, food policy, covid policy, and information on past and future events - here on our website: https://www.effectivealtruism.nyc/events
You can also find us on Facebook, Meetup, and Eventbrite!
http://facebook.com/groups/eanyc/events
https://www.meetup.com/effective-altruism-nyc/events/
https://www.eventbrite.com/o/effective-altruism-nyc-55938838923
For those new to effective altruism, here are a couple of good introductions. In short, EA is about using evidence to carefully analyze how, given limited resources, we can help others the *most*.
https://www.youtube.com/watch?v=48VAQtGmfWY
Crossposted to LessWrong.
This is the second post in this sequence and covers Conjecture.
Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs), and recruits heavily from the EA movement.
We shared a draft of this document with Conjecture for feedback prior to publication (and include their response below). We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post. We'd like to invite others to share their thoughts in the comments, or anonymously via this form.
For those...
Does this really make you feel safe? This reads to me as a possible reason for optimism, but hardly reassures me that the worst won’t happen or that this author isn’t just failing to imagine what could lead to strong instrumental convergence (including different training regimes becoming popular).
I think running criticism past the people whose work is being criticized often helps make the criticism more productive, but it can be difficult. To make it easier, I'm sharing a step-by-step guide ⬇️ you can use.
Please don’t feel like you have to read this whole guide or be super thorough if you’re thinking of running a draft past people. Don’t let perfect be the enemy of the good.
In this post the criticizer gave the criticizee an opportunity to reply in-line in the published post—in effect, the criticizee was offered the last word. I thought that was super classy, and I’m proud to have stolen that idea on two occasions (1,2).
If anyone’s interested, the relevant part of my email was:
…
You can leave google docs margin comments if you want, and:
- If I’m just straight-up wrong about something, or putting words in your mouth, then I’ll just correct the text before publication.
- If you leave a google docs comment that’s more like a counte
I’m Luke Freeman, and I currently serve as the executive director of Giving What We Can (GWWC). You’re welcome to ask me anything! I’ll start answering questions on Thursday June 15th.
Logistics/practical instructions:
Some context:
I want to donate as much as I can, but how much is too much, or ultimately counterproductive?
For example, is it worth settling for a noticeably worse (but still adequate) phone service provider for the sake of donating an extra $6 (i.e. 3 bed nets) a month?
Or is sacrificing at that scale too extreme in your opinion?
How should we expect AI to unfold over the coming decades? In this article, I explain and defend a compute-based framework for thinking about AI automation. This framework makes the following claims, which I defend throughout the article:
While none of these ideas are new, my goal is to provide a single article...
I'm also a little surprised you think that modeling when we will have systems using similar compute as the human brain is very helpful for modeling when economic growth rates will change.
In this post, I used human brain FLOP mainly as a quick estimate of AGI inference costs. However, different methodologies produce similar results (generally within 2 OOMs). A standard formula for estimating compute costs is 6*N FLOP per forward pass, where N is the number of parameters. Currently, the largest language models are estimated to be between 1...
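As a rough illustration of this kind of back-of-the-envelope estimate, here is a minimal sketch using the 6*N rule of thumb mentioned above. The parameter counts in the loop are placeholders for illustration, not figures from this comment:

```python
def forward_pass_flop(num_params: float, flop_per_param: float = 6.0) -> float:
    """Estimate FLOP for one forward pass as flop_per_param * N.

    Uses the 6*N rule of thumb from the text; note that some analyses
    instead use 2*N for inference alone, reserving 6*N for training.
    """
    return flop_per_param * num_params

# Hypothetical model sizes, for illustration only.
for n_params in [1e9, 1e11, 1e12]:
    flop = forward_pass_flop(n_params)
    print(f"N = {n_params:.0e}: ~{flop:.1e} FLOP per forward pass")
```

Because the estimate is linear in N, uncertainty in the parameter count translates directly into uncertainty in the compute cost, which is consistent with different methodologies landing within a couple of orders of magnitude of each other.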
You say community building, but the specifics you describe seem more like recruiting and outreach. All three of those are good things, but I think conflating them is unhelpful. I think this is especially true because EA is already very aggressive at recruiting and mediocre at post-recruitment support.