Why doesn't EA focus on equity, human rights, and opposing discrimination (as
cause areas)?
KJonEA asks:
'How focused do you think EA is on topics of race and gender equity/justice,
human rights, and anti-discrimination? What do you think are factors that shape
the community's focus?
[https://forum.effectivealtruism.org/posts/zgBB56GcnJyjdSNQb/how-focused-do-you-think-ea-is-on-topics-of-race-and-gender]'
In response, I ended up writing a lot of words, so I thought it was worth
editing them a bit and putting them in a shortform. I've also added some
'counterpoints' that weren't in the original comment.
To lay my cards on the table: I'm a social progressive and leftist, and I think
it would be cool if more EAs thought about equity, justice, human rights and
discrimination - as cause areas to work in, rather than just within the EA
community. (I'll call this cluster just 'equity' going forward). I also think it
would be cool if left/progressive organisations had a more EA mindset sometimes.
At the same time, as I hope my answers below show, I do think there are some
good reasons that EAs don't prioritize equity, as well as some bad reasons.
So, why don't EAs prioritize gender and racial equity as cause areas?
1. Other groups are already doing good work on equity (i.e. equity is less
neglected)
The social justice/progressive movement has got feminism and anti-racism pretty
well covered. On the other hand, the central EA causes - global health, AI
safety, existential risk, animal welfare - are comparatively neglected by other
groups. So it kinda makes sense for EAs to say 'we'll let these other movements
keep doing their good work on these issues, and we'll focus on these other
issues that not many people care about'.
Counterpoint: are other groups using the most (cost-)effective methods to
achieve their goals? EAs should, of course, be epistemically modest; but it
seems that (e.g.) someone who was steeped in both EA and feminism might have
some great suggestions.
I think we separate causes and interventions into "neartermist" and
"longtermist" causes too much.
Just as some members of the EA community have complained
[https://forum.effectivealtruism.org/posts/hJDid3goqqRAE6hFN/my-most-likely-reason-to-die-young-is-ai-x-risk]
that AI safety is pigeonholed as a "long-term" risk when it's actually imminent
within our lifetimes[1], I think we've been too quick to dismiss conventionally
"neartermist" EA causes and interventions as not valuable from a longtermist
perspective. This is the opposite failure mode of surprising and suspicious
convergence
[https://forum.effectivealtruism.org/posts/omoZDu8ScNbot6kXS/beware-surprising-and-suspicious-convergence]
- instead of assuming (or rationalizing) that the spaces of interventions that
are promising from neartermist and longtermist perspectives overlap a lot, we
tend to assume they don't overlap at all, even though it would be surprising if
the top longtermist causes were all different from the top neartermist ones. If
the cost-effectiveness of different causes according to neartermism and
longtermism is independent (or at least somewhat positively correlated), I'd
expect at least some causes to be valuable according to both ethical
frameworks.
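A quick simulation makes this expectation concrete (all numbers here are
illustrative assumptions, not estimates from any EA source): if each framework
independently ranks 100 hypothetical causes and picks its top 10, the expected
overlap is TOP_K²/N_CAUSES = 1 cause, and some overlap occurs in most runs.

```python
import random

random.seed(0)

N_CAUSES = 100  # hypothetical number of candidate causes (assumption)
TOP_K = 10      # how many "top causes" each framework selects (assumption)
TRIALS = 2000

overlaps = []
for _ in range(TRIALS):
    # Independent log-normal cost-effectiveness draws under each framework.
    near = [random.lognormvariate(0, 1) for _ in range(N_CAUSES)]
    long_ = [random.lognormvariate(0, 1) for _ in range(N_CAUSES)]
    top_near = set(sorted(range(N_CAUSES), key=lambda i: -near[i])[:TOP_K])
    top_long = set(sorted(range(N_CAUSES), key=lambda i: -long_[i])[:TOP_K])
    overlaps.append(len(top_near & top_long))

avg_overlap = sum(overlaps) / TRIALS            # ~ TOP_K**2 / N_CAUSES = 1
any_overlap = sum(o > 0 for o in overlaps) / TRIALS
print(f"average shared top causes: {avg_overlap:.2f}")
print(f"fraction of runs with any overlap: {any_overlap:.2f}")
```

Even with zero correlation between the two rankings, the expected overlap is
small but nonzero; any positive correlation would only push it up.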
I've noticed this in my own thinking, and I suspect that this is a common
pattern among EA decision makers; for example, Open Phil's "Longtermism" and
"Global Health and Wellbeing" grantmaking portfolios
[https://www.openphilanthropy.org/our-global-health-and-wellbeing-and-longtermism-grantmaking-portfolios/]
don't seem to overlap.
Consider global health and poverty. These are usually considered "neartermist"
causes, but we can also tell a just-so story about how global development
interventions such as cash transfers might also be valuable from the perspective
of longtermism:
* People in extreme poverty who receive cash transfers often spend the money on
investments as well as consumption. For example, a study by GiveDirectly
Put off that 80k hours advises "if you find you aren’t interested in [The
Precipice: Existential Risk], we probably aren’t the best people for you to get
advice from". I'd hoped there was more general advising beyond just those
interested in existential risk.
I wonder if anyone has moved from longtermist cause areas to neartermist cause
areas. I was prompted by reading the recent Carlsmith piece and Julia Wise's
'Messy personal stuff that affected my cause prioritization'.
As evidence increases for the cognitive effects of poor air quality
[https://patrickcollison.com/pollution], there may be opportunities for extra
impact by prioritizing monitoring and improving air quality in important
decision-making buildings: government buildings, headquarters, etc.
Someone pinged me a message on here asking about how to donate to tackle child
sexual abuse. I'm copying my thoughts here.
I haven't done a careful review on this, but here's a few quick comments:
* Overall, I don't know of any charity which does interventions tackling child
sexual abuse, and which I know to have a robust evidence-and-impact mindset.
* Overall, I have the impression that people who have suffered from child
  sexual abuse (hereafter CSA) can suffer greatly, and that treating this
  suffering after the fact is intractable. My confidence on this is medium --
  I've spoken with enough people to be confident that it's true at least some
  of the time, but I'm not clear on the academic evidence.
* This seems to point in the direction of prevention instead.
* There are interventions which aim to support children to avoid being abused.
I haven't seen the evidence on this (and suspect that high quality evidence
doesn't exist). If I were to guess, I would guess that the best interventions
probably do have some impact, but that impact is limited.
* To expand on this: my intuition says that the less able the child is to
  protect themselves, the more damage the CSA does. I.e. we could probably
  help a confident 15-year-old avoid being abused, but that child would likely
  suffer different -- and, I suspect, on average less bad -- consequences than
  a 5-year-old; helping the 5-year-old, however, might be very intractable.
* This suggests that work to support the abuser may be more effective.
* It's likely also more neglected, since donors are typically more attracted
to helping a victim than a perpetrator.
* For at least some paedophiles, although they have sexual urges toward
children, they also have a strong desire to avoid acting on them, so
operating cooperatively with them could be somewhat more tractable.
* Unfortunately, I don't know of any org which does work in this area, and
which has a strong evidence culture. Here are some ex
Sanity check? Help please, thank you.
My mind is blown by what's going on right now and we definitely need to change.
At the same time, I can't help but think: there are only, roughly guessing,
70-90 people at CEA? And there's like 10k-15k in the community? Correct me if
I'm wrong.
*(Added this part to clarify that I am not saying how we should ask CEA to
function at all. This shortform is to ask around so I can sort out my own
rationality)
In my head, there are questions like:
1. How are CEA going to implement all of the changes we need?
2. Even if they did, will they have the time and experience to do it right?
3. At this point, do we really want CEA to do this?
4. Should CEA even do this?
5. Would we prefer a separate org/service that prevents these issues and works
closely with CEA instead? Would that minimize conflicts of interest?
6. Is there a more thorough case for community health as a cause area, to help
the longevity of the community/movement?
7. Am I just super biased, and is that why I think this?
8. Should I post this as a question instead? (I'm afraid I'll be taking away
attention or look like I'm collecting karma)
I feel like I don't want to put so much of what we want to change on CEA. Or
even the leaders/seniors/core, whatever you call it. I feel like it's better if
even more experienced experts/orgs/services/consultants lead the changes we need
instead (I don't know if there's a suitable one though). At the same time, I
feel like I'm just biased. I don't want to continue feeling like I'm gaslighting
myself from both sides so I would highly appreciate it if you gave more concrete
reasons for/against/something else entirely. Thank you.
Assuming that interventions have log-normally distributed impact
[https://80000hours.org/2023/02/how-much-do-solutions-differ-in-effectiveness/],
compromising on interventions for the sake of public perception is not worth it
unless it brings in many times more people - the required multiplier grows
exponentially with how far down the effectiveness distribution you compromise.
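A rough numeric sketch of that trade-off (the sigma value is my own assumption
for illustration, not a figure from the linked article):

```python
import math
from statistics import NormalDist

# Illustrative assumption: log cost-effectiveness has standard deviation 1.5,
# i.e. effectiveness ~ lognormal(0, 1.5).
sigma = 1.5
inv_cdf = NormalDist().inv_cdf

def effectiveness(percentile):
    """Cost-effectiveness at a given percentile of a lognormal(0, sigma)."""
    return math.exp(sigma * inv_cdf(percentile))

best = effectiveness(0.99)     # a top-1% intervention
popular = effectiveness(0.50)  # a median, more palatable intervention

# Multiplier on participants needed for the palatable option to break even:
ratio = best / popular
print(f"{ratio:.0f}x")  # roughly 33x under these assumptions
```

Stepping down from a 99th-percentile to a median intervention needs roughly
33x as many people under these assumptions, and even a smaller compromise
(99th to 90th percentile) still needs a several-fold multiple, because quantile
ratios of a log-normal are exponential in the gap between z-scores.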