It doesn't seem like mere pedantry if it requires substantial revision of the view to retain the same action recommendations. Symmetric person-affecting total utilitarianism does look to be dominated by these sorts of possibilities of large stocks of necessary beings without some other change. I'm curious what your take on the issues raised in that post is.
Plus, the Soviet bioweapons program was actively at work engineering pathogens for enhanced destructiveness during the 70s and 80s using new biotechnology (and had been using progressively more advanced methods through the 20th century).
I think that kind of spikiness (1000, 200, 100 with big gaps between) isn't the norm. Often one can proceed to weaker and more indirect versions of a top intervention (funding scholarships to expand the talent pipeline for said think-tanks, buying them more Google Ads to publicize their research) with lower marginal utility, smoothing out the returns curve: progressively less appealing and more ancillary versions of the 1000-intervention fill in the gap until they get down into the 200-intervention range.
I agree that carefully-vetted institutional solutions are probably where one would like to end up.
I agree that the EMH-consistent version of this still suggests the collective EA portfolio should be more leveraged, with more active management of correlations across donors now and over time, and that there is a large academic literature in support of these factors (although academic finance in general suffers from data-mining/backtesting problems and from EMH dissipation of real factors once they become known).
Re the text you quoted, I just mean that if EAs damage their portfolios (e.g. by taking on large amounts of leverage and not properly monitoring it, so that leverage ratios explode and wipe out the portfolio), that's fewer EA dollars donated (aside from the reputational effects), and I would want to do more to ensure readers don't go off half-cocked and blow up their portfolios without really knowing what they are doing.
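To make the "leverage ratios explode" point concrete, here is a minimal illustrative sketch (all numbers hypothetical, not financial advice) of how a leveraged position that is never rebalanced becomes more leveraged as the market falls, since the debt stays fixed while the assets shrink:

```python
def leverage_after_drop(initial_leverage: float, drop: float) -> float:
    """Effective leverage ratio after assets fall by fraction `drop`,
    with no rebalancing.

    Start with equity of 1, so assets = initial_leverage and
    debt = initial_leverage - 1. After the drop, assets shrink
    but the debt does not.
    """
    assets = initial_leverage * (1 - drop)
    equity = assets - (initial_leverage - 1)
    if equity <= 0:
        return float("inf")  # equity wiped out entirely
    return assets / equity

# A 3:1 portfolio after a 20% market drop is effectively ~6:1 levered,
print(leverage_after_drop(3.0, 0.20))
# and a 34% drop more than wipes out the equity.
print(leverage_after_drop(3.0, 0.34))
```

The asymmetry is the point: leverage drifts upward exactly when markets fall, so an investor who sets a ratio once and stops monitoring it holds their riskiest position at the worst time.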
I appreciate many important points in this essay about the additional considerations for altruistic investing, including taking more risk for return than is normally advised because of lower philanthropic risk aversion, attending to correlations with other donors (current and future), and the varying diminishing-returns curves of different causes and interventions popular in effective altruism. I think there are very large gains to be had from effective altruists making better use of these considerations.
But at the same time I am quite disconcerted by the strong forward-looking EMH-violating claims of massively outsized returns to the specific investment strategies, despite the limited disclaimers (and the factor literature). Concern for relative performance only goes so far as an explanation for predicting such strong inefficiencies going forward: the analysis would seem to predict, e.g., that very wealthy individuals investing on their own accounts would pile into such strategies if their advantage were discernible. I would be much less willing to share the article because of the inclusion of those elements.
I would also add some of the special anti-risk considerations for altruists and financial writing directed at them, e.g.
On net, it still looks like EA should be taking much more risk than is commonly recommended for individual retirement investments, and I'd like to see active development of this sort of thinking, but want to emphasize the importance of caution and rigor in doing so.
Interactive Brokers allows much higher leverage for accounts with portfolio margin enabled, e.g. greater than 6:1. That requires options trading permissions, in turn requiring some combination of options experience and an online (easy) test.
I would be more worried about people blowing up their life savings with ill-considered extreme leverage strategies and the broader fallout of that.
Agreed (see this post for an argument along these lines), but it would require much higher adoption and so merits the critique relative to alternatives where the donations can be used more effectively.
I have reposted the comment as a top-level post.
My sense of what is happening regarding discussions of EA and systemic change is:
My read is that millenarian religious cults have often existed in nontrivial numbers, but, as you say, the idea of systematic, let alone accelerating, progress (as opposed to past golden ages or stagnation) is new and coincided with actual sustained, noticeable progress. The Wikipedia page for Millenarianism lists ~all religious cults, plus belief in an AI intelligence explosion.
So the argument seems, to first order, to reduce to the question of whether credence in an AI growth boom (to rates much faster than the Industrial Revolution's) is caused by the same factors as religious cults rather than by secular scholarly opinion, and to the historical prevalence and influence of such millenarian sentiments in the population. But if one takes a narrower scope (not exceptionally important transformations of the world as a whole, but more local phenomena like the collapse of empires or how long new dynasties would last), one frequently sees propaganda producing smaller distortions of relative importance (not that it was necessarily believed by outside observers).