I mostly haven't been thinking about what the ideal effective altruism community
would look like, because most of the value of effective altruism might be well
approximated by its impact on steering the world towards better AGI futures. But
even in worlds where AI risk wasn't a problem, the effective altruism movement
seems lackluster in some ways.
I am thinking especially of the effect that it often has on university students
and younger people. My sense is that EA sometimes influences those people to be
closed-minded, or at least doesn't make them as ambitious or as interested in
exploring things outside "conventional EA" as would be ideal. Students who come
across EA often become too attached to specific EA organisations, or to paths to
impact suggested by existing EA institutions.
In an EA community that was more ambitiously impactful, a higher proportion of
folks would be at least strongly considering things like:
* starting startups that could get really big
* traveling to various parts of the world to form a view about how poverty
  affects welfare
* keeping long Google docs with their current best guesses for how to end
  factory farming
* looking at non-"EA" sources to figure out what more effective interventions
  GiveWell might be missing, perhaps because they're somewhat controversial
* doing more effective science/medical research
* writing something on better thinking and decision-making that could be as
  influential as Eliezer's sequences
* expressing curiosity about whether charity is even the best way to improve
  human welfare
* trying to fix science
And a lower proportion of these folks would be applying to jobs on the 80,000
Hours job board, or choosing to spend more time within the EA community rather
than interacting with the most ambitious, intelligent, and interesting people
amongst their general peers.
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's
incubator programme this summer!
The summer 2023 incubator round is focused on biosecurity and scalable global
health, and I'm really excited to see what's the best fit for me and hopefully
launch a new charity. The ideas the research team have written up look really
exciting, and I'm trepidatious about the challenge of being a founder but
psyched to get started. Watch this space! <3
I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+
advising calls I did, and I feel very privileged that I got to talk to so many
people and try to help them along their careers!
I've learned so much during my time at 80k. And the team at 80k has been
wonderful to work with - so thoughtful, committed to working out what is the
right thing to do, kind, and fun - I'll for sure be sad to leave them.
There are a few main reasons why I'm leaving now:
1. New career challenge - I want to try out something that stretches my skills
beyond what I've done before. I think I could be a good fit for being a
founder and running something big and complicated and valuable that wouldn't
exist without me - I'd like to give it a try sooner rather than later.
2. Stepping away a bit from EA community building after the recent EA crises -
   Events over the last few months in EA made me re-evaluate how valuable I
   think the EA community and EA community building are, as well as re-evaluate
   my personal relationship with EA. I haven't gone to the last few EAGs, and I
   switched my work away from advising calls for the last few months while
   processing all this. I have been somewhat sad that there hasn't been more
   discussion and change by now, though I have been glad to see more EA leaders
   share things recently (e.g. this from Ben Todd).
BUT "EVERYONE KNOWS"!
A dynamic I keep seeing is that it feels hard to whistleblow, report concerns,
or make a bid for more EA attention on things that "everyone knows", because it
feels like there's no one to tell who doesn't already know. It's easy to think
that surely this is priced in to everyone's decision-making. Some reasons to
speak up anyway:
* You might be wrong about what "everyone" knows - maybe everyone in your
  social circle does, but not people outside it. I see this a lot in Bay gossip
  vs. London gossip - what "everyone knows" is very different in those two
  places.
* You might be wrong about what "everyone knows" - sometimes people use a vague
  shorthand, like "the FTX stuff", which could mean a million different things.
  Either a double illusion of transparency (you both think you know what the
  other person is talking about, but don't) or the pressure to nod along in
  social situations can mean it seems like you're all talking about the same
  thing when you're actually not.
* Just because people know doesn't mean the issue is as salient to them as it
  should be
* Bystander effect: People might all be looking around assuming someone else
has the concern covered because surely everyone knows and is taking the right
amount of action on it.
In short, if you're acting based on the belief that there’s a thing “everyone
knows”, check that that’s true.
Relatedly: Everybody Knows
[https://thezvi.wordpress.com/2019/07/02/everybody-knows/], by Zvi Mowshowitz
[Caveat: There's an important balance to strike here between the value of public
conversation about concerns and the energy that gets put into those public
community conversations. There are reasons to take action on the above
non-publicly, and not every concern will make it above people’s bar for spending
the time and effort to get more engagement with it. Just wanted to point to some
lenses that might get missed.]
Why doesn't EA focus on equity, human rights, and opposing discrimination (as
cause areas)? I was recently asked:
'How focused do you think EA is on topics of race and gender equity/justice,
human rights, and anti-discrimination? What do you think are factors that shape
the community's focus?'
In response, I ended up writing a lot of words, so I thought it was worth
editing them a bit and putting them in a shortform. I've also added some
'counterpoints' that weren't in the original comment.
To lay my cards on the table: I'm a social progressive and leftist, and I think
it would be cool if more EAs thought about equity, justice, human rights and
discrimination - as cause areas to work in, rather than just within the EA
community. (I'll call this cluster just 'equity' going forward). I also think it
would be cool if left/progressive organisations had a more EA mindset sometimes.
At the same time, as I hope my answers below show, I do think there are some
good reasons that EAs don't prioritize equity, as well as some bad reasons.
So, why don't EAs prioritize gender and racial equity, as cause areas?
1. Other groups are already doing good work on equity (i.e. equity is less
neglected)
The social justice/progressive movement has got feminism and anti-racism pretty
well covered. On the other hand, the central EA causes - global health, AI
safety, existential risk, animal welfare - are comparatively neglected by other
groups. So it kinda makes sense for EAs to say 'we'll let these other movements
keep doing their good work on these issues, and we'll focus on these other
issues that not many people care about'.
Counterpoint: are other groups using the most (cost-)effective methods to
achieve their goals? EAs should, of course, be epistemically modest; but it
seems that (e.g.) someone steeped in both EA and feminism might have some great
suggestions.
I fear the weird hugboxing EAs do towards their critics in order to signal good
faith means that, over time, a lot of critics just end up not being sharpened
in their critiques.
BAD THINGS ARE BAD: A SHORT LIST OF COMMON VIEWS AMONG EAS
1. No, we should not sterilize people against their will.
2. No, we should not murder AI researchers. Murder is generally bad. Martyrs
are generally effective. Executing complicated plans is generally more
difficult than you think, particularly if failure means getting arrested and
massive amounts of bad publicity.
3. Sex and power are very complicated. If you have a power relationship,
consider if you should also have a sexual one. Consider very carefully if
you have a power relationship: many forms of power relationship are
invisible, or at least transparent, to the person with power. Common forms
of power include age, money, social connections, professional connections,
and almost anything that correlates with money (race, gender, etc). Some of
these will be more important than others. If you're concerned about
something, talk to a friend who's on the other side of that from you. If you
don't have any, maybe just don't.
4. And yes, also, don't assault people.
5. Sometimes deregulation is harmful. "More capitalism" is not the solution to
   every problem.
6. Very few people working on wild animal suffering think that we should go and
   deliberately destroy the biosphere today.
7. Racism continues to be an incredibly negative force in the world. Anti-black
racism seems pretty clearly the most harmful form of racism for the minority
of the world that lives outside Asia.
8. Much of the world is inadequate and in need of fixing. That EAs have not
prioritized something does not mean that it is fine: it means we're busy.
9. The enumeration in the list, of certain bad things, being construed to deny
or disparage other things also being bad, would be bad.
Hope that clears everything up. I expect with 90% confidence that over 90% of
EAs would agree with every item on this list.
Inside, I don't know enough to say with confidence.
Impact being heavy-tailed, combined with the lack of feedback loops in
longtermist fields, has a very psychologically harsh effect, and I wonder what
interpersonal norms one could cultivate amongst friends and the community
writ large (loosely held/purely musing etc.):
1. Distinguishing pessimism about ideas from pessimism about people.
2. Ex-ante vs. ex-post critiques.
3. Celebrating when post-mortems have led to more successful projects.
4. Mergers/takeover mechanisms for competition between people/projects.
I think EAs in the FTX era leaned hard on hard capital (e.g. citing No Lean
Season's shutdown), ignoring the social and psychological costs of taking risk,
and I wonder how we can be a community that recognises heavy-tailed
distributions without making it worse for those who are not in the heavy tail.
SOME QUICK THOUGHTS FOLLOWING EAG LONDON 2023
* I'd recommend having a strategy for planning your conference
* Different types of 1:1s are valuable in different ways, and some are more
worth preparing for than others
* It's natural to feel imposter syndrome and a sense of inadequacy when
  surrounded by so many highly competent, accomplished people, but arguably the
  primary purpose of the conference is for those people to help us mere mortals
  become highly competent and accomplished too (assuming accomplishment =
  impact)
* I feel very conflicted about the insider/outsider nature of the EA community,
and I'm going to keep thinking and talking about it more
Last year I attended EAG SF (but not EAG London 2022), and was newer to the EA
community, as well as working for a less prestigious organisation. This context
is probably important for many of these reflections.
I made some improvements to my conference planning strategy this year, that I
think made the experience significantly better:
* I looked through the attendee list and sent out meeting invitations as soon
  as Swapcard was available. That way people had more slots free, increasing
  both the likelihood that they'd accept my meeting and the chances that they'd
  accept the specific time I'd requested. This gave me more control over my
  schedule.
* I left half an hour's break between my 1:1s. This allowed meetings to run
  longer if both participants wanted that, and gave me time to process
  information, write down notes, and recharge.
* I initially didn't schedule many meetings on Sunday. This meant that if
  anybody I talked to on Friday or Saturday suggested I talk to someone else at
  the conference, I'd have the best chance of arranging a meeting with them on
  Sunday. This strategy worked really well, as I had a couple of particularly
  valuable 1:1s on Sunday with people who hadn't originally been on my radar.