The heavy-tailed distribution of impact has a very psychologically harsh effect, given the lack of feedback loops in longtermist fields, and I wonder what interpersonal norms one could cultivate among friends and the community writ large (loosely held/purely musing, etc.):
1. Distinguishing pessimism about ideas from pessimism about people.
2. Ex-ante vs. ex-post critiques.
3. Celebrating when post-mortems have led to more successful projects.
4. Mergers/takeover mechanisms for handling competition between people/projects.
I think EAs in the FTX era were leaning heavily on hard capital (e.g. mentions of the No Lean Season shutdown), ignoring the social and psychological sides of taking risk and the question of how we can be a community that recognises heavy-tailed distributions without making things worse for those who are not in the heavy tail.
Why doesn't EA focus on equity, human rights, and opposing discrimination (as
cause areas)?
KJonEA asks:
'How focused do you think EA is on topics of race and gender equity/justice,
human rights, and anti-discrimination? What do you think are factors that shape
the community's focus?
[https://forum.effectivealtruism.org/posts/zgBB56GcnJyjdSNQb/how-focused-do-you-think-ea-is-on-topics-of-race-and-gender]'
In response, I ended up writing a lot of words, so I thought it was worth
editing them a bit and putting them in a shortform. I've also added some
'counterpoints' that weren't in the original comment.
To lay my cards on the table: I'm a social progressive and leftist, and I think
it would be cool if more EAs thought about equity, justice, human rights and
discrimination - as cause areas to work in, rather than just within the EA
community. (I'll call this cluster just 'equity' going forward). I also think it
would be cool if left/progressive organisations had a more EA mindset sometimes.
At the same time, as I hope my answers below show, I do think there are some
good reasons that EAs don't prioritize equity, as well as some bad reasons.
So, why don't EAs prioritize gender and racial equity as cause areas?
1. Other groups are already doing good work on equity (i.e. equity is less
neglected)
The social justice/progressive movement has got feminism and anti-racism pretty
well covered. On the other hand, the central EA causes - global health, AI
safety, existential risk, animal welfare - are comparatively neglected by other
groups. So it kinda makes sense for EAs to say 'we'll let these other movements
keep doing their good work on these issues, and we'll focus on these other
issues that not many people care about'.
Counter-point: are other groups using the most (cost-)effective methods to achieve their goals? EAs should, of course, be epistemically modest; but it seems that (e.g.) someone who was steeped in both EA and feminism might have some great suggestions.
I'd heart react if this forum introduced reactions
[https://www.lesswrong.com/posts/SzdevMqBusoqbvWgt/open-thread-with-experimental-feature-reactions].[1]
1. ^
There have been times in the past (e.g., here
[https://forum.effectivealtruism.org/posts/jYSEjBsWbjNqioRZJ/the-rethink-priorities-existential-security-team-s-strategy?commentId=LS22fwtoGukMpmyfn])
when I've wished there were a reaction feature, and I agree with the
LessWrong post's thesis that a reaction feature would positively shape forum
culture.
BUT "EVERYONE KNOWS"!
A dynamic I keep seeing is that it feels hard to whistleblow or report concerns
or make a bid for more EA attention on things that "everyone knows", because it
feels like there's no one to tell who doesn't already know. It’s easy to think
that surely this is priced into everyone's decision-making. Some reasons to do
it anyway:
* You might be wrong about what “everyone” knows - maybe everyone in your social circle does, but not outside it. I see this a lot in Bay gossip vs. London gossip - what "everyone knows" is very different in those two places.
* You might be wrong about what "everyone knows" - sometimes people use a vague shorthand, like "the FTX stuff", which could mean a million different things; either a double illusion of transparency [https://www.lesswrong.com/posts/sBBGxdvhKcppQWZZE/double-illusion-of-transparency] (you both think you know what the other person is talking about but don't) or the pressure to nod along in social situations can mean that it seems like you're all talking about the same thing when you're actually not.
* Just because people know doesn't mean the issue has the right level of salience.
* Bystander effect: People might all be looking around assuming someone else
has the concern covered because surely everyone knows and is taking the right
amount of action on it.
In short, if you're acting based on the belief that there’s a thing “everyone
knows”, check that that’s true.
Relatedly: Everybody Knows
[https://thezvi.wordpress.com/2019/07/02/everybody-knows/], by Zvi Mowshowitz
[Caveat: There's an important balance to strike here between the value of public
conversation about concerns and the energy that gets put into those public
community conversations. There are reasons to take action on the above
non-publicly, and not every concern will make it above people’s bar for spending
the time and effort to get more engagement with it. Just wanted to point to some
lenses that might get missed.]
SOME QUICK THOUGHTS FOLLOWING EAG LONDON 2023
TL;DR
* I'd recommend having a strategy for planning your conference
* Different types of 1:1s are valuable in different ways, and some are more
worth preparing for than others
* It's natural to feel imposter syndrome and a sense of inadequacy when
surrounded by so many highly competent, accomplished people, but arguably the
primary purpose of the conference is for those people to help us mere mortals
become highly competent and accomplished too (assuming accomplishment =
impact)
* I feel very conflicted about the insider/outsider nature of the EA community,
and I'm going to keep thinking and talking about it more
Last year I attended EAG SF (but not EAG London 2022), when I was newer to the EA community and working for a less prestigious organisation. This context is probably important for many of these reflections.
CONFERENCE STRATEGY
I made some improvements to my conference planning strategy this year that I think made the experience significantly better:
* I looked through the attendee list and sent out meeting invitations as soon as Swapcard was available. This way people had more slots free, both increasing their likelihood of accepting my meeting and increasing the chances they'd accept the specific time I'd requested. This gave me more control over my schedule.
* I left half an hour's break between my 1:1s. This allowed meetings to go on longer if both participants wanted that, and gave me some time to process information, write down notes, and recharge.
* I initially didn't schedule many meetings on Sunday. This meant that if
anybody I talked to on Friday or Saturday suggested I talk to someone else at
the conference, I'd have the best chance at being able to arrange a meeting
with them on Sunday. This strategy worked really well, as I had a couple of particularly valuable 1:1s on Sunday with people who hadn't originally been on my radar.
I'm going to be leaving 80,000 Hours and joining Charity Entrepreneurship's
incubator programme this summer!
The summer 2023 incubator round is focused on biosecurity and scalable global
health charities
[https://www.charityentrepreneurship.com/post/announcing-our-2023-charity-ideas-apply-now]
and I'm really excited to see what the best fit is for me and hopefully launch a new charity. The ideas that the research team have written up look really promising, and I'm trepidatious about the challenge of being a founder but psyched to get started. Watch this space! <3
I've been at 80,000 Hours for the last 3 years. I'm very proud of the 800+
advising calls I did, and I feel very privileged that I got to talk to so many people and try to help them along in their careers!
I've learned so much during my time at 80k. And the team at 80k has been
wonderful to work with - so thoughtful, committed to working out what is the
right thing to do, kind, and fun - I'll for sure be sad to leave them.
There are a few main reasons why I'm leaving now:
1. New career challenge - I want to try out something that stretches my skills
beyond what I've done before. I think I could be a good fit for being a
founder and running something big and complicated and valuable that wouldn't
exist without me - I'd like to give it a try sooner rather than later.
2. Stepping back a bit from EA community building after the recent crises - Events over the last few months in EA made me re-evaluate how valuable I think the EA community and EA community building are, as well as re-evaluate my personal relationship with EA. I haven't gone to the last few EAGs, and I switched my work away from advising calls for the last few months while processing all this. I've been somewhat sad that there hasn't been more discussion and change by now, though I've been glad to see more EA leaders share things more recently (e.g. this from Ben Todd [https://forum.effectivealtruism.org/po
I wonder if anyone has moved from longtermist cause areas to neartermist cause areas. I was prompted by reading the recent Carlsmith piece and Julia Wise's 'Messy personal stuff that affected my cause prioritization'.
I fear that the weird hugboxing EAs do towards their critics in order to signal good faith means that, over time, a lot of critics never end up having their arguments sharpened.