I'd argue that EA is quite bad at something like: "Engaging a broad group of relevant stakeholders for increased impact". So getting loads of non-EA people on your side, and finding ways to work together with multiple, potentially misaligned orgs, governments and individuals.
Don't want to overstate this- some EA orgs do this well. Charity Entrepreneurship include stakeholder engagement in their program, for example. But it seems neglected in the EA space more generally.
Yeah, makes sense. I just don't know why it's not just: "It's conceivable, therefore, that EA community building has net negative impact." If you think that EA is/ EAs are net-negative value, then surely the more important point is that we should disband EA totally / collectively rid ourselves of the foolish notion that we should ever try to optimise anything/ commit seppuku for the greater good, rather than ease up on the community building.
I think I'm not following the first stage of your argument. Why would the FTX fiasco imply that community building specifically (rather than EA generally) might be net-negative?
I definitely don't think it's too much to expect from a self-reflection exercise, and I'm sure they've considered these issues. For no. 1, I wouldn't actually credit growth so much. Most of the rapid increases in life expectancy in poor countries over the last century have come from factors not directly related to economic growth (edit: growth in the countries themselves), including state capacity, access to new technology (vaccines), and support from international orgs/ NGOs. China pre- and post-1978 seems like one clear example here- the most significant health improvements came before economic growth. Can you identify the 'growth miracles' vs. the countries that barely grew over the last 20 years in the graph below?
I'd also say that reliably improving growth (or state capacity) is considerably more difficult than reliably providing a limited slice of healthcare. Even if GiveWell had a more reliable theory of change for charitably-funded growth interventions, they probably aren't going to attract donations- donating to lobbying African governments to remove tariffs doesn't sound like an easy sell, even for an EA-aligned donor. For 2, I think you're making two points- supporting dictators and crowding out domestic spending.
On the dictator front, there is a trade-off, but there are a few factors:
On the 'crowding out' front, I don't have a good sense of the data, but I'd suspect that the issue might be worse in non-dictatorships- countries/ regions that are easier/ more desirable for western NGOs to set up shop, but where local authorities might provide semi-decent care in the absence of NGOs. This article illustrates some of the problems in rural Kenya and Uganda (where I think there's a particularly high NGO-to-local people ratio).
I suspect GiveWell's response to this is that the GiveWell-supported charities target a very specific health problem- they may sometimes try to work with local healthcare providers to make both actors more effective, but, if they don't, the interventions should be so much more effective per marginal dollar than domestic healthcare spending that any crowding effect is more than canceled out. Many crowding problems are more macro than micro (affecting national policy), so the marginal impact of a new effective NGO on, say, a decision whether or not to increase healthcare spending, is probably minimal. When you've got major donors (UN, Gates) spending billions in your country, AMF spending a few extra million is unlikely to have a major effect. But I'm open to arguments here.
This definitely crossed my mind. Assuming he expected it to be published and could guess how bad his responses would look, this would be one of the few rational explanations for his sudden repudiation of ethics.
But it also seems fairly likely that his mind is in a pretty chaotic place and his actions aren't particularly rational.
Surely everyone on this thread realises that there should be a relevant distinction between being some random hack and 'the EA journalist'. We're holding her to higher standards than general journalistic norms.
Thanks for writing this, great to hear that you're feeling better.
I'm usually a fan of self-experimentation, and the upside of finding an antidepressant with few side-effects (and that you can take a lower dose of) is definitely valuable. This seems especially true if you can stop taking it during better mental health periods, then have it in your arsenal for future use. But I still have a few doubts about this process, and I'm a little concerned that some of the premises behind your experiment need a bit more scrutiny. I hope someone with a bit more domain-specific knowledge can correct me if I'm wrong, or improve my arguments if I have a point. I'm also aware that there's no such thing as a 'perfect self-experiment', and I don't think there are obvious ways that you could have improved the experiment. But here are a few things that I'd like to hear your thoughts on:
Firstly, the depressive episode was triggered by a disruptive external factor- the pandemic. This would probably invalidate any observational study conducted in the same period. As this external factor improved, and people could start travelling/ socialising normally, you might expect symptoms to lift naturally from mid-2021 onwards. From what you've mentioned here, you don't seem to have ruled out this hypothesis. I gather that depressive episodes last a median of about 6 months (see pic below), with treatment not making a huge difference to duration within the first year (some obvious caveats about selection effects here). How do you weigh the possibility that you would have recovered without antidepressants?
Secondly, the process of switching between 5/6 antidepressants seems to be a significant confounding factor here. I don't know how good the evidence base for the guidelines you linked is, but it seems likely that the multiple effects of antidepressants (starting, side-effects, stopping, potential relapse) are significant enough to really mess up any attempt to have a 'clean slate' between treatments, and therefore to make comparisons unfair. It seems possible that what you thought was a negative reaction to medicine x was actually contingent on having just tapered off medicine y and/ or experiencing a relapse. Does that seem plausible, or do you think there was a stable enough baseline for comparisons to be valid?
Third, just a bit of concern about the downsides of the experiment. There are some long-term side-effects of antidepressants, and they seem understudied for fairly obvious reasons (most clinical studies only last for 6 months, no long-term RCTs). There seem to be a few studies that point to longer-term risks and 'oppositional effects' being underestimated. Unknown confounding factors and additional health risks from going on and off antidepressants would make me very concerned. Obviously, untreated depression also has a range of health risks, so I don't want to discount the other side of the ledger, but I would definitely not be confident that I was doing something safe. How confident do you feel in your comparison of these risks? And did you feel that you had to convince yourself against a (potentially irrational) fear of over-medication?
Finally, a bit unrelated, there's a meta question that often comes to mind when I read posts about more rational/ self-experimenting approaches to health issues, which is: "How strong should our naturalistic bias/ heuristic be when approaching mental health/ general health issues?" Particularly for my own health, I have a moderate bias against less 'natural' (obviously a very messy term, but I think it's useful) health solutions. I often feel EAs have the opposite bias, preferring pharmacological solutions, perhaps because they can be tested with a nice clean RCT. I'm interested in what level of bias you (and forum readers) think is optimal.
"If we want to draw in more experienced people, it'd be much easier to just spin up another brand, rather than try to rebrand something that already has particular connotations."
This strikes me as probably incorrect. Creating a new brand is really hard, and minor shifts in branding to de-emphasise students would be fairly simple. In my experience, the EA brand and EA ideas are sufficiently appealing to a fairly broad range of older people. The problem is that loads of older people are really interested in EA ideas- think Sam Harris' audience or the median owner of a Peter Singer book- but they find that: a) It's socially weird being around uni students; b) Few of the materials, from 80k to Intro fellowships, seem targeted to them; c) It's way harder to commit to a social movement. I've facilitated for EA intro programs with diverse ages, and the 'next steps' stage at the end of an intro fellowship is way different for 20-year-olds than for 40-year-olds- for a 20-year-old, basically "Just go to your uni EA group and get more involved" is a good level of commitment, whereas a 40-year-old has to make far more difficult choices. But I also feel that if this 40-year-old is willing to commit time to EA, that's a more costly signal than a student doing so, so I often feel bullish about their career impact.
My preferred solutions are fairly marginal, just making it a bit easier and more comfortable for older people to get involved: 1) Groups like 80k put a bit more effort into advice for later career people; 2) Events targeting older high-impact professionals (and more 'normal' older people; EA for parents is a good idea); 3) Highlight a few 'role models' (on the EA intro course, for example, or an 80k podcast guest)- people who've become high-impact EAs in later life.
The claim that we wouldn't see similar evolution of moral reasoning a second time doesn't seem weird to me at all. The claim that we should assume we've been exceptionally (top 10%) lucky might be a bit weird. Despite a few structural factors (more complex, more universal moral reasoning develops with economic complexity), I see loads of contingency and path dependence in the way that human moral reasoning has evolved. If we re-ran the last few millennia 1000 times, I'm pretty convinced that we'd see significant variation in norms and reasoning, including:
The argument that we've been exceptionally lucky is more difficult to examine using a longer timeline. We can imagine much better and much worse scenarios, and I can't think of a strong reason to assume either way. But with a shorter timeline we can make some meaningful claims about things that could have gone better or worse. It does feel like there are many ways that the last few hundred years could have led to much worse moral philosophies becoming more globally prominent- particularly if other empires (Qing, Spanish, Ottoman, Japanese, Soviet, Nazi) had become more dominant.
I'm fairly uncertain about this latter claim, so I'd like to hear from people with more expertise in world history/ history of moral thought to see if they agree with my intuitions about potential counterfactuals.
Agree with this completely.
The fact that this same statistical manoeuvre could be used to downplay nuclear war, vaccines for diseases like polio, climate change or AI risk should also be particularly worrying.
Another angle is that the number of deaths is directly influenced by the amount of funding- the article says that "the scale of this issue differs greatly from pandemics", but it could plausibly be the case that terrorism isn't an inherently less significant/ deadly issue, but rather that counterterrorism funding works extremely well- that's why deaths are so low.