The Happier Lives Institute have helped many people (including me) open their
eyes to subjective wellbeing, and perhaps even updated us towards the potential
value of SWB. The recent heavy discussion (60+ comments) on their fundraising
thread disheartened me. Although I agree with much of the criticism levelled
against them, the hammering they took felt rough at best and perhaps even
unfair. I'm not sure exactly why I felt this way, but here are a few ideas.
* (High certainty) HLI have openly published their research and ideas, posted
  almost everything on the forum, and engaged deeply with criticism, which is
  amazing - more than perhaps any other org I have seen. This may (uncertain)
  have hurt them more than it has helped them.
* (High certainty) When other orgs are criticised or asked questions, they
  often don't reply at all, or get surprisingly little criticism for what I and
  many EAs might consider poor epistemics and defensiveness in their posts (out
  of charity I'm not going to link to the handful I can think of). Why does HLI
  get such a hard time while others get a pass? Especially when HLI's funding
  is less than that of many orgs which have not been scrutinised as closely.
* (Low certainty) The degree of scrutiny and analysis applied to some
  development orgs like HLI seems to exceed that applied to AI orgs, funding
  orgs, and community-building orgs. This scrutiny has been intense - more than
  one amazing statistician has picked apart their analysis. This expert-level
  scrutiny is fantastic; I just wish it could be applied to other orgs as well.
  Very few EA orgs (at least among those whose work has been posted on the
  forum) produce full papers with publishable-level, deep statistical analysis,
  as HLI have at least attempted to do. Does there need to be a "scrutiny
  rebalancing" of sorts? I would rather other orgs received more scrutiny than
  that development orgs received less.
Other orgs might see the hammering in threads like the HLI funding thread and
compare it with ot
Surprised that the Animal Charity Evaluators Recommended Charity Fund gives
equal amounts to around a dozen charities:
https://animalcharityevaluators.org/donation-advice/recommended-charity-fund/
Obviously uncertainty is involved, but a core tenet of EA and of charity
evaluators is that certain charities are more effective than others, so
GiveWell's Top Charities Fund giving different amounts to only a few charities
per year makes more sense to me:
https://www.givewell.org/top-charities-fund
One reason I'm excited about work on lead exposure is that it hits a sweet spot
of meaningfully benefiting both humans and nonhumans. Lead has dramatic and
detrimental effects on not just mammals but basically all animals, from birds
to aquatic animals to insects.
Are there other interventions that potentially likewise hit this sweet spot?
Put off that 80,000 Hours advises "if you find you aren’t interested in [The
Precipice: Existential Risk], we probably aren’t the best people for you to get
advice from". I had hoped there was more general advising beyond just those
interested in existential risk.
Why doesn't EA focus on equity, human rights, and opposing discrimination (as
cause areas)?
KJonEA asks:
'How focused do you think EA is on topics of race and gender equity/justice,
human rights, and anti-discrimination? What do you think are factors that shape
the community's focus?'
In response, I ended up writing a lot of words, so I thought it was worth
editing them a bit and putting them in a shortform. I've also added some
'counterpoints' that weren't in the original comment.
To lay my cards on the table: I'm a social progressive and leftist, and I think
it would be cool if more EAs thought about equity, justice, human rights and
discrimination - as cause areas to work in, rather than just within the EA
community. (I'll call this cluster just 'equity' going forward). I also think it
would be cool if left/progressive organisations had a more EA mindset sometimes.
At the same time, as I hope my answers below show, I do think there are some
good reasons that EAs don't prioritize equity, as well as some bad reasons.
So, why don't EAs prioritize gender and racial equity as cause areas?
1. Other groups are already doing good work on equity (i.e. equity is less
neglected)
The social justice/progressive movement has got feminism and anti-racism pretty
well covered. On the other hand, the central EA causes - global health, AI
safety, existential risk, animal welfare - are comparatively neglected by other
groups. So it kinda makes sense for EAs to say 'we'll let these other movements
keep doing their good work on these issues, and we'll focus on these other
issues that not many people care about'.
Counter-point: are other groups using the most (cost-)effective methods to
achieve their goals? EAs should, of course, be epistemically modest; but it
seems that (e.g.) someone steeped in both EA and feminism might have some great
suggestions for how to improve gender equality and women's experiences
effectively.
2. Equity work isn't cost-effective
EAs car
I think we separate causes and interventions into "neartermist" and
"longtermist" buckets too much.
Just as some members of the EA community have complained that AI safety is
pigeonholed as a "long-term" risk when it's actually imminent within our
lifetimes[1], I think we've been too quick to dismiss conventionally
"neartermist" EA causes and interventions as not valuable from a longtermist
perspective. This is the opposite failure mode of surprising and suspicious
convergence - instead of assuming (or rationalizing) that the spaces of
interventions that are promising from neartermist and longtermist perspectives
overlap a lot, we tend to assume they don't overlap at all, because it's more
surprising if the top longtermist causes are all different from the top
neartermist ones. If the cost-effectiveness of different causes according to
neartermism and longtermism is independent of one another (or at least
somewhat positively correlated), I'd expect at least some causes to be valuable
according to both ethical frameworks.
I've noticed this in my own thinking, and I suspect that this is a common
pattern among EA decision makers; for example, Open Phil's "Longtermism" and
"Global Health and Wellbeing" grantmaking portfolios don't seem to overlap.
Consider global health and poverty. These are usually considered "neartermist"
causes, but we can also tell a just-so story about how global development
interventions such as cash transfers might also be valuable from the perspective
of longtermism:
* People in extreme poverty who receive cash transfers often spend the money on
investments as well as consumption. For example, a study by GiveDirectly
found that people who received cash transfers owned 40% more durable goods
(assets) than the control group. Also, anecdotes show that cash transfer
recipients often spend their funds on education for their kids (a type of
human capital investment), starting new businesses, building infrastructure
for their
Someone pinged me a message on here asking about how to donate to tackle child
sexual abuse. I'm copying my thoughts here.
I haven't done a careful review on this, but here's a few quick comments:
* Overall, I don't know of any charity which does interventions tackling child
sexual abuse, and which I know to have a robust evidence-and-impact mindset.
* Overall, I have the impression that people who have suffered from child
  sexual abuse (hereafter CSA) can suffer greatly, and that addressing this
  suffering after the fact is intractable. My confidence on this is medium --
  I've spoken with enough people to be confident that it's true at least some
  of the time, but I'm not clear on the academic evidence.
* This seems to point in the direction of prevention instead.
* There are interventions which aim to support children to avoid being abused.
  I haven't seen the evidence on this (and suspect that high-quality evidence
  doesn't exist). If I had to guess, I would say that the best interventions
  probably do have some impact, but that that impact is limited.
* To expand on this: my intuition says that the less able the child is to
  protect themselves, the more damage the CSA does. I.e. we could probably
  help a confident 15-year-old avoid being abused; however, that child might
  suffer different -- and, I suspect, on average less bad -- consequences than
  a 5-year-old, while helping the 5-year-old might be very intractable.
* This suggests that work to support the abuser may be more effective.
* It's likely also more neglected, since donors are typically more attracted
to helping a victim than a perpetrator.
* For at least some paedophiles, although they have sexual urges toward
  children, they also have a strong desire to avoid acting on them, so working
  cooperatively with them could be somewhat more tractable.
* Unfortunately, I don't know of any org which does work in this area, and
which has a strong evidence culture. Here are some ex
I wonder if anyone has moved from longtermist cause areas to neartermist cause
areas. I was prompted by reading the recent Carlsmith piece and Julia Wise's
post "Messy personal stuff that affected my cause prioritization".