Crossposted to LessWrong.
This is the second post in this sequence and covers Conjecture.
Conjecture is a for-profit alignment startup founded in late 2021 by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale applied alignment research. Based in London, Conjecture has received $10 million in funding from venture capitalists (VCs), and recruits heavily from the EA movement.
We shared a draft of this document with Conjecture for feedback prior to publication (and include their response below). We also requested feedback on a draft from a small group of experienced alignment researchers from various organizations, and have invited them to share their views in the comments of this post. We'd like to invite others to share their thoughts in the comments, or anonymously via this form.
For those...
Acknowledgement
I wish to acknowledge the help of Luke Eure, whose thoughts and insights greatly challenged my initial ideas while I was writing this post.
1. Introduction
The career guide from 80,000 Hours provides many insights for people who aspire to have a positive impact with their work. It offers career advice tailored to people who intend to work on global challenges such as pandemic preparedness, climate change, nuclear war, and risks from advanced artificial intelligence, as well as challenges not yet known. It also provides abundant resources on how to gain career capital to make a difference, along with a job board for high-impact roles. A brief look at the salaries offered for positions in these places reveals that the location of the employees, rather than their...
Nice one George. Some thoughts:
I have heard from people who are uncertain about whether EA community building is the right move for them, given the increased prominence of AI Safety. I think EA community building is the right choice for a significant number of people, and I wanted to lay out why I believe this.
I’m excited to see AI Safety–specific community building and I hope it continues to grow. This piece is not intended to claim that no one should be working on AIS community building. That said, CEA’s groups team sits at an EA organisation, not an AI safety organisation. I hope we can collaborate with AI Safety groups, as:
You say community building, but the specifics you describe seem more like recruiting and outreach. All three of those are good things, but I think conflating them is unhelpful. I think this is especially true because EA is already very aggressive at recruiting and mediocre at post-recruitment support.
TLDR: I've assembled a mindmap of all EA (-related) entities I could find. You can access it at tinyurl.com/eamindmap. If I missed or misrepresented anything: leave a note (in the mindmap) and I'll add it or correct it. I've also added other existing lists of orgs to this post.
When I started my position as co-director at EA Netherlands, I was advised by EA Pathfinder (now Successif) to make an overview of the EA space. That would make it easier to onboard, support community members, and follow conversations.
I made this mindmap of EA(-related) entities, and it quickly got out of hand. I asked around whether such overviews already existed. There were a few:
Overview of EA organizations[1]
I wish it were possible to disable agree votes on comments that aren't making any claim or proposal. When I write a comment saying "thank you" or "this has given me a lot to think about" and people agree-vote (or disagree-vote!), it feels odd: there isn't even anything to agree or disagree with!
Online discussion. Newcomers welcome!
Please register here to attend.
Are there concepts in Effective Altruism you want to explore more fully? Or do you have particular comments or insights about certain topics? Join our foundational topics group and discuss a different theme each month!
This June, we will discuss the concept of Crucial Considerations. We will explore how this concept has been used, review the benefits and critiques, dive into complicating factors with the idea, and more!
****
EA NYC:
Please find all EA NYC event information - including our Code of Conduct, food policy, covid policy, and information on past and future events - here on our website: https://www.effectivealtruism.nyc/events
You can also find us on Facebook, Meetup, and Eventbrite!
http://facebook.com/groups/eanyc/events
https://www.meetup.com/effective-altruism-nyc/events/
https://www.eventbrite.com/o/effective-altruism-nyc-55938838923
For those new to effective altruism, here are a couple of good introductions. In short, EA is about using evidence to carefully analyze how, given limited resources, we can help others the *most*.
https://www.youtube.com/watch?v=48VAQtGmfWY
I think running criticism past the people whose work is being criticized often helps make the criticism more productive, but it can be difficult. To make it easier, I'm sharing a step-by-step guide ⬇️ you can use.
Please don’t feel like you have to read this whole guide or be super thorough if you’re thinking of running a draft past people. Don’t let perfect be the enemy of the good.
In this post the criticizer gave the criticizee an opportunity to reply in-line in the published post—in effect, the criticizee was offered the last word. I thought that was super classy, and I’m proud to have stolen that idea on two occasions (1,2).
If anyone’s interested, the relevant part of my email was:
…
You can leave Google Docs margin comments if you want, and:
- If I’m just straight-up wrong about something, or putting words in your mouth, then I’ll just correct the text before publication.
- If you leave a Google Docs comment that’s more like a counte
I’m Luke Freeman, and I currently serve as the executive director of Giving What We Can (GWWC). You’re welcome to ask me anything! I’ll start answering questions on Thursday June 15th.
Logistics/practical instructions:
Some context:
I want to donate as much as I can, but how much is too much, or ultimately counterproductive?
For example, is it worth settling for a noticeably worse (but still adequate) phone service provider for the sake of donating an extra $6 (i.e. 3 bed nets) a month?
Or is sacrificing at that scale too extreme in your opinion?
We've updated the recommendation about working at Conjecture.