I agree that carefully-vetted institutional solutions are probably where one would like to end up.
I agree that the EMH-consistent version of this still suggests the collective EA portfolio should be more leveraged, and should do more to manage correlations across donors now and over time, and that there is a large academic factor literature in support of such strategies (although academic finance in general suffers from data-mining in backtests and from EMH dissipation of real factors once they become known).
Re the text you quoted, I just mean that if EAs damage their portfolios (e.g. by taking on large amounts of leverage and not properly monitoring it, so that leverage ratios explode and wipe out the portfolio), that's fewer EA dollars donated (aside from the reputational effects), and I would want to do more to ensure readers don't go off half-cocked and blow up their portfolios without really knowing what they are doing.
I appreciate many important points in this essay about the additional considerations for altruistic investing, including taking more risk for return than normal because of lower philanthropic risk aversion, attending to correlations with other donors (current and future), and the variations in diminishing returns curve for different causes and interventions popular in effective altruism. I think there are very large gains that could be attained by effective altruists better making use of these considerations.
But at the same time I am quite disconcerted by the strong forward-looking EMH-violating claims about massively outsized returns to the specific investment strategies, despite the limited disclaimers (and the factor literature). Concern for relative performance only goes so far as an explanation for predicting such strong inefficiencies going forward: the analysis would seem to predict, e.g., that very wealthy individuals investing on their own accounts will pile into such strategies if their advantage is discernible. I would be much less willing to share the article because of the inclusion of those elements.
I would also add some of the special anti-risk considerations for altruists and financial writing directed at them, e.g.:
On net, it still looks like EA should be taking much more risk than is commonly recommended for individual retirement investments, and I'd like to see active development of this sort of thinking, but want to emphasize the importance of caution and rigor in doing so.
Interactive Brokers allows much higher leverage for accounts with portfolio margin enabled, e.g. greater than 6:1. That requires options trading permissions, in turn requiring some combination of options experience and an online (easy) test.
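To make the blow-up concern concrete, here is a rough Monte Carlo sketch. All parameter values (5% mean annual return, 16% volatility, annual rebalancing, the 6:1 leverage figure taken from the comment above) are illustrative assumptions, not claims about any broker's actual margin terms or any real strategy:

```python
import random

def simulate(leverage, years=10, mu=0.05, sigma=0.16, n_paths=10_000, seed=0):
    """Monte Carlo of an annually rebalanced leveraged portfolio.

    Equity is treated as wiped out once a single year's levered loss
    reaches 100% (in reality a margin call forces liquidation before
    equity goes negative; we round that outcome to zero). All numbers
    here are illustrative assumptions.
    """
    rng = random.Random(seed)
    ruined = 0
    total_wealth = 0.0
    for _ in range(n_paths):
        wealth = 1.0
        for _ in range(years):
            market_return = rng.gauss(mu, sigma)    # one year's market return
            wealth *= 1 + leverage * market_return  # levered return on equity
            if wealth <= 0:                         # margin wipeout
                wealth = 0.0
                ruined += 1
                break
        total_wealth += wealth
    return ruined / n_paths, total_wealth / n_paths
```

Under these assumed numbers an unlevered investor essentially never loses everything, while at 6:1 a majority of ten-year paths end in a wipeout, even though the surviving paths can be lucrative. The point is only that unmonitored leverage converts ordinary volatility into ruin risk.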
I would be more worried about people blowing up their life savings with ill-considered extreme leverage strategies and the broader fallout of that.
Agreed (see this post for an argument along these lines), but it would require much higher adoption and so merits the critique relative to alternatives where the donations can be used more effectively.
I have reposted the comment as a top-level post.
My sense of what is happening regarding discussions of EA and systemic change is:
My read is that Millenarian religious cults have often existed in nontrivial numbers, but, as you say, the idea of systematic, let alone accelerating, progress (as opposed to past golden ages or stagnation) is new, and coincided with actual sustained noticeable progress. The Wikipedia page for Millenarianism lists ~all religious cults, plus belief in an AI intelligence explosion.
So the argument seems, to first order, to reduce to the question of whether credence in an AI growth boom (to much faster than Industrial Revolution rates) is caused by the same factors as religious cults rather than by secular scholarly opinion, and to the historical share/power of those Millenarian sentiments in the population. But if one takes a narrower scope (not exceptionally important transformations of the world as a whole, but more local phenomena like the collapse of empires or how long new dynasties would last), one frequently sees smaller distortions of relative importance from propaganda (not that it was necessarily believed by outside observers).
She’s unsure whether this speeds up or slows down AI development; her credence is imprecise, represented by the interval [0.4, 0.6]. She’s confident, let’s say, that speeding up AI development is bad.
That's an awfully (in)convenient interval to have! That is the unique position for an interval of that length with no distinguishing views about any parts of the interval, such that integrating over it gives you a probability of 0.5 and expected impact of 0.
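The arithmetic can be checked directly. A minimal sketch, assuming (purely for illustration) a uniform weighting over the interval and an impact linear in the credence, impact(p) = p - 1/2; the imprecise-credence view itself supplies no such weighting:

```python
from fractions import Fraction

def interval_summary(lo, hi, steps=1001):
    """Mean credence and mean impact over an evenly spaced grid on [lo, hi].

    Assumes, only for illustration, a uniform weighting over the interval
    and an impact linear in the credence: impact(p) = p - 1/2.
    """
    lo, hi = Fraction(lo), Fraction(hi)
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    mean_p = sum(grid) / len(grid)
    mean_impact = sum(p - Fraction(1, 2) for p in grid) / len(grid)
    return mean_p, mean_impact
```

Only an interval symmetric about 1/2, like [0.4, 0.6], yields a mean credence of exactly 1/2 and an expected impact of exactly 0; nudging either endpoint, e.g. to [0.3, 0.6], produces a nonzero expected impact. That is the sense in which the example's interval is "conveniently" chosen.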
The standard response to that is that you should weigh all these and do what is in expectation best, according to your best-guess credences. But maybe we just don’t have sufficiently fine-grained credences for this to work,
If the argument from cluelessness depends on giving that kind of special status to imprecise credences, then I just reject them for the general reason that coarsening credences leads to worse decisions and predictions (particularly if one has done basic calibration training and has some numeracy and skill at prediction). There is signal to be lost in coarsening on individual questions. And for compound questions with various premises or contributing factors making use of the signal on each of those means your views will be moved by signal.
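The claim that coarsening loses signal can be made concrete with a proper scoring rule. A minimal sketch (the three-bucket report scale is my illustrative stand-in for verbal categories like "unlikely / even odds / likely", not anything from the sources discussed here):

```python
def expected_brier(p, q):
    """Expected Brier score when the event truly occurs with probability p
    and the forecaster reports q; a proper score, minimized exactly at q = p."""
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

def coarsen(p, buckets):
    """Replace a precise credence with the nearest allowed coarse report."""
    return min(buckets, key=lambda b: abs(b - p))
```

For instance, a forecaster who believes p = 0.37 but must report from {0.25, 0.5, 0.75} reports 0.25 and takes a strictly worse expected Brier score than if they had reported 0.37 itself; the loss holds (weakly) at every true probability, which is the scoring-rule version of "there is signal to be lost in coarsening."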
Chapter 3 of Jeffrey Friedman's book War and Chance: Assessing Uncertainty in International Politics presents data and arguments showing large losses from coarsening credences instead of just giving a number between 0 and 1. I largely share his negative sentiments about imprecise credences.
[VOI considerations around less investigated credences that are more likely to be moved by investigation are fruitful grounds to delay action to acquire or await information that one expects may be actually attained, but are not the same thing as imprecise credences.]
(In contrast, it seems you thought I was referring to AI vs some other putative great longtermist intervention. I agree that plausible longtermist rivals to AI and bio are thin on the ground.)
That was an example of the phenomenon of searching a supposedly vast space and finding that in fact the number of top-level considerations is manageable (at least compared to thousands), based on experience with other people saying that there must be thousands of similarly plausible risks. I would likewise say that the DeepMind employee in your example doesn't face thousands upon thousands of ballpark-similar distinct considerations to assess.
I think that is basically true in practice, but I am also saying that even absent those pragmatic considerations constraining utilitarianism, I would still hold these other non-utilitarian normative views and reject things like leaving no space for existing beings in exchange for a tiny proportional increase in resources for utility monsters.
The first words of my comment were "I don't identify as a utilitarian" (among other reasons because I reject the idea of things like feeding all existing beings to utility monsters for a trivial proportional gain to the latter, even absent all the pragmatic reasons not to; even if I thought such things more plausible, it would require extreme certainty or non-pluralism to get such fanatical behavior).
I don't think a 100% utilitarian dictator with local charge of a society on Earth removes pragmatic considerations, e.g. what if they are actually a computer simulation designed to provide data about and respond to other civilizations, or the principle of their action provides evidence about what other locally dominant dictators on other planets will do including for other ideologies, or if they contact alien life?
But you could elaborate on the scenario to stipulate such things not existing in the hypothetical, and get a situation where your character would commit atrocities, and measures to prevent the situation hadn't been taken when the risk was foreseeable.
That's reason for everyone else to prevent and deter such a person or ideology from gaining the power to commit such atrocities while we can, such as in our current situation. That would go even more strongly for negative utilitarianism, since it doesn't treat any life or part of life as being intrinsically good, regardless of the being in question valuing it, and is therefore even more misaligned with the rest of the world (in valuation of the lives of everyone else, and in the lives of their descendants). And such responses give reason even for utilitarian extremists to take actions that reduce such conflicts.
Insofar as purely psychological self-binding is hard, there are still externally available actions, such as visibly refraining from pursuit of unaccountable power to harm others, and taking actions to make it more difficult to do so, such as transferring power to those with less radical ideologies, or ensuring transparency and accountability to them.