I live for a high disagree-to-upvote ratio
I was interested in a different presentation of the income estimate data, so I plotted it against the approximate actual income brackets:
I would love to see a deeper version of this graph that includes more of the raw data & has better error bars. I feel like this presentation is much more insightful!
The IHME have published a new global indicator for depression treatment gaps—the ‘minimally adequate treatment rate’.
It’s defined using country-level treatment gap data, and then extrapolated to missing countries using Bayesian meta-regression (combined with other GBD data; there’s already a critique paper on this methodology FWIW).
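For readers unfamiliar with the approach, here's a minimal sketch of what a Bayesian meta-regression imputation can look like in principle. This is not IHME's actual model: the library (PyMC), the covariate, the priors, and all the numbers are assumptions invented for illustration.

```python
# A minimal sketch, NOT IHME's actual model: filling in a treatment rate for
# unmeasured countries with a simple Bayesian regression on one covariate.
# The covariate, priors, and all numbers below are invented for illustration.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)

# Hypothetical observed data: logit treatment rates for 40 measured countries,
# plus a standardised covariate (e.g. log health spending per capita).
x_obs = rng.normal(0, 1, 40)
y_obs = -2.0 + 0.8 * x_obs + rng.normal(0, 0.5, 40)
x_missing = rng.normal(0, 1, 10)  # covariate values for unmeasured countries

with pm.Model():
    alpha = pm.Normal("alpha", 0, 2)
    beta = pm.Normal("beta", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)

    pm.Normal("y", alpha + beta * x_obs, sigma, observed=y_obs)
    # Predicted treatment rates (back on the 0-1 scale) for missing countries.
    pm.Deterministic("pred", pm.math.sigmoid(alpha + beta * x_missing))

    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["pred"].mean(dim=("chain", "draw")).values)
```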
I don’t have a great sense of this issue specifically, but the first place I go looking to find the outside view on something like this is often the Global Burden of Disease study, which has been estimating disease prevalence since the 90s.
This chart displays the prevalence per 100,000 people of depressive disorders and self-harm injuries/deaths since 1990, among 10–24 year-olds, split by SDI group. I think there are a lot of good reasons to be skeptical of GBD depression estimates, but data from high-income countries has been measured fairly reliably since the 2010s, so a meaningful increase due to the internet or smartphones should show up in it.
My country, Australia, also keeps good track of suicide rates. They look like this over the internet era:
I would see the variation here as relatively random. Overall, I highly trust the GBD and its component data sources. So for me to update my beliefs, I would first want to understand why these high-level indicators are wrong or otherwise obscuring the real problem.
P.S. One thing I've found to be a useful litmus test whenever you see charts like this: keep an eye on the axes. A lot of 'public intellectual' types tend to post charts that cut the y-axis off above zero (inflating the relative size of changes), start the x-axis after the 1990s (rates of depression and suicide appear to have changed more over 1990–2010 than over 2010–2020, in ways that would drown out any proposed increase if included), or end it before 2020 (COVID-19 bumps tend to put modest increases over the 2010s in perspective).
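To make the y-axis point concrete, here's a tiny matplotlib sketch with made-up numbers: the same series plotted once with the axis starting at zero and once cropped just below the data.

```python
# The same (made-up) series twice: once with the y-axis anchored at zero,
# once cropped just below the minimum, which makes the change fill the panel.
import matplotlib.pyplot as plt

years = list(range(2005, 2021))
rate = [11.8, 11.6, 11.9, 12.0, 11.7, 11.9, 12.1, 12.3,
        12.2, 12.4, 12.6, 12.5, 12.7, 12.9, 12.8, 13.0]  # per 100,000 (invented)

fig, (ax_full, ax_cropped) = plt.subplots(1, 2, figsize=(9, 3), sharex=True)

ax_full.plot(years, rate)
ax_full.set_ylim(0, 15)            # honest baseline: a ~10% change looks modest
ax_full.set_title("y-axis from zero")

ax_cropped.plot(years, rate)
ax_cropped.set_ylim(11.5, 13.1)    # cropped: the same change looks dramatic
ax_cropped.set_title("y-axis cropped")

for ax in (ax_full, ax_cropped):
    ax.set_ylabel("rate per 100,000")

plt.tight_layout()
plt.show()
```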
Do you have a sense of the acceptability rates (i.e. what proportions of the treatment population moderately decreased their meat consumption)? Additionally, how did you account for selection effects (i.e. if a study includes vegetarians, those participants presumably wouldn’t see behaviour change)?
My mental model right now is that some small proportion of Western populations are amenable to meat reductions, with a sharp fall-off after this. Using these techniques on less aware populations might work, but we could assume that most high-income Western populations have already been exposed to these techniques and made up their minds. Averaged over a study, a handful of participants changing their minds in moderate ways would show up as a small effect size, or none at all, depending on the recruited population.
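To illustrate the dilution effect I have in mind, here's some back-of-the-envelope arithmetic; every number is invented.

```python
# Back-of-the-envelope for the dilution story above (all numbers invented):
# if only a small slice of recruited participants is still persuadable, the
# study-wide average effect can look tiny even when those few change a lot.
amenable_share = 0.05          # fraction of participants open to reducing meat
reduction_if_amenable = 0.30   # their average reduction in consumption
reduction_everyone_else = 0.0  # everyone else has already made up their mind

average_effect = (amenable_share * reduction_if_amenable
                  + (1 - amenable_share) * reduction_everyone_else)
print(f"study-wide average reduction: {average_effect:.1%}")  # 1.5%
```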
But I know very little about this area, so I assume the above is wrong. I just wanted to know in what ways, and what’s borne out by the data you have.
2 weeks out from the new GiveWell/GiveDirectly analysis, I was wondering how GHD charities are evaluating the impact of these results.
For Kaya Guides, this has got us thinking much more explicitly about what we're comparing to. GiveWell and GiveDirectly have a lot more resources, so they can do things like go out to communities and measure second-order and spillover effects.
On the one hand, this has got us thinking about other impacts we can incorporate into our analyses. Like GiveDirectly, we probably also have community spillover effects, we probably also avert deaths, and we probably also increase our beneficiaries’ incomes by improving productivity. I suspect this is true for many GHD charities!
On the other, it doesn't seem fair to compare our analysis of individual subjective wellbeing to GiveDirectly's analysis, which incorporates many more things. Unless we believe that GiveDirectly is likely to be systematically better, it's not the case that many GHD charities became 3–4× less cost-effective relative to cash transfers overnight; they may just count 3–4× fewer things! So I wonder if the standard cash-transfers benchmark might have to include more nuance in the near term. Kaya Guides already only makes claims about cost-effectiveness 'at improving subjective wellbeing' to try and cover for this.
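To make that concrete, here's a toy calculation of how counting more outcome categories alone can shift the apparent multiple of cash transfers. All the numbers are invented; they aren't Kaya Guides' or GiveDirectly's actual figures.

```python
# Toy illustration (invented numbers) of the scope-mismatch point: a charity
# measured only on subjective wellbeing can look several times less
# cost-effective than cash transfers purely because the cash-transfer analysis
# counts more outcome categories, not because either program changed.
wellbeing_value_per_dollar = {"charity_A": 3.0, "cash_transfers": 1.0}

# Suppose the new cash-transfer analysis also counts spillovers, mortality and
# income effects, tripling its measured benefit per dollar.
extra_categories_multiplier = 3.0
cash_total = wellbeing_value_per_dollar["cash_transfers"] * extra_categories_multiplier

# Apples-to-oranges: charity A's wellbeing-only figure vs cash's all-in figure.
print("naive multiple of cash:", wellbeing_value_per_dollar["charity_A"] / cash_total)  # 1.0x
# Apples-to-apples: compare on subjective wellbeing alone.
print("wellbeing-only multiple:", wellbeing_value_per_dollar["charity_A"]
      / wellbeing_value_per_dollar["cash_transfers"])                                   # 3.0x
```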
Are other GHD charities starting to think the same way? Do people have other angles on this?
I’ve been going through the evaluation reports and it seems like GWWC might not be as confident in Longview’s Emerging Challenges Fund or the EA Long-Term Future Fund as they are in their choices for GHD and Animal Welfare. The reports for these funds often include some uncertainties, like:
On the other hand, the Founders Pledge GHD fund wasn’t fully recommended due to more specific methodological issues:
Until I read various posts around the forum and personally looked into what LTFF in particular was funding, I was under the impression—partly from GWWC’s messaging—that the LTFF was at least comparable to a GiveWell or even an ACE. This is partly because GWWC usually recommend their GCR funds at the same time as these other funds.
It might be on me for having the wrong assumptions, so I wrote out my chain of thinking, and I’m keen to find out where we disagree:
(I originally posted this to the 2024 recommendations but thought it might be more constructive / less likely to cause any issues over in this thread)