I think this post contains many errors/issues (especially for a post with >300 karma). Many have been pointed out by others, but I think at least several still remain unmentioned. I only have time/motivation to point out one (chosen for being relatively easy to show concisely):
Using the 3x levered TTT with duration of 18 years, a 3 percentage point rise in rates would imply a mouth-watering cumulative return of 162%.
Levered ETFs exhibit path dependency, or "volatility drag", because they reset their leverage daily, which means you can't calculate the return without knowing the path interest rates take on the way to that 3-percentage-point rise. TTT's website acknowledges this with a very prominent disclaimer:
Important Considerations
This short ProShares ETF seeks a return that is -3x the return of its underlying benchmark (target) for a single day, as measured from one NAV calculation to the next.
Due to the compounding of daily returns, holding periods of greater than one day can result in returns that are significantly different than the target return, and ProShares' returns over periods other than one day will likely differ in amount and possibly direction from the target return for the same period. These effects may be more pronounced in funds with larger or inverse multiples and in funds with volatile benchmarks.
You can also compare 1 and 2 and note that from Jan 1, 2019 to Jan 1, 2023, the 20-year treasury rate rose by about 1 percentage point, but TTT is down ~20% instead of up (ETA: and has paid negligible dividends).
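To make the path dependency concrete, here is a minimal sketch in Python with made-up price paths (not actual TTT or treasury data): two benchmark paths that start and end at the same level produce very different returns for a fund that resets 3x leverage every day (the same effect applies with a -3x multiplier like TTT's).

```python
# Minimal illustration of path dependency in a daily-reset levered fund.
# The price paths below are made up for illustration; they are not actual
# TTT or treasury data.

def cumulative_return(daily_returns):
    """Plain buy-and-hold cumulative return of the benchmark."""
    value = 1.0
    for r in daily_returns:
        value *= 1.0 + r
    return value - 1.0

def daily_reset_levered_return(daily_returns, leverage=3.0):
    """Cumulative return of a fund that applies `leverage` to each day's
    benchmark return, resetting the leverage every day."""
    value = 1.0
    for r in daily_returns:
        value *= 1.0 + leverage * r
    return value - 1.0

# Path A: the benchmark rises a steady 1% per day for 20 days.
path_a = [0.01] * 20

# Path B: alternating big up and big down days, sized so the benchmark ends
# at exactly the same level as in Path A.
down = 1.0 - (1.01 ** 2) / 1.10
path_b = [0.10, -down] * 10

print(cumulative_return(path_a), cumulative_return(path_b))
# Both ~+22%: the benchmark ends in the same place either way.
print(daily_reset_levered_return(path_a), daily_reset_levered_return(path_b))
# ~+81% on the smooth path vs. only ~+18% on the volatile one. That gap is
# the volatility drag, and it's why the endpoint alone doesn't pin down the
# levered fund's return.
```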
A related point: The US stock market has averaged 10% annual returns over a century. If your style of reasoning worked, we should instead buy a 3x levered S&P 500 ETF, get 30% return per year, compounding to 1278% return over a decade, handily beating out 162%.
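To put a rough number on how much the daily reset eats into that naive 30%/year, here's a back-of-the-envelope sketch with assumed inputs (roughly 10% arithmetic return and 18% annualized volatility for the benchmark; illustrative numbers, not a claim about any actual fund), using the standard second-order approximation for the drag of a k-times daily-rebalanced fund:

```python
# Back-of-the-envelope volatility drag, using the approximation
# log(1 + k*r) ~= k*r - (k*r)**2 / 2 for small daily returns r.
# Assumed round numbers, not data for any particular fund.

m, sigma, k = 0.10, 0.18, 3.0   # benchmark arithmetic return, volatility, leverage

naive_10yr = (1 + k * m) ** 10 - 1            # the "30%/yr, ~1278% per decade" number
geom_growth = k * m - (k * sigma) ** 2 / 2    # ~15.4%/yr once drag is included
dragged_10yr = (1 + geom_growth) ** 10 - 1    # ~320%, far below the naive figure

print(f"naive: {naive_10yr:.0%}, drag-adjusted growth: {geom_growth:.1%}/yr, "
      f"drag-adjusted decade: {dragged_10yr:.0%}")
```

Even before financing costs and fees, the drag alone roughly halves the annualized growth under these assumptions.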
Pure selfishness can't work, since if everyone is selfish, why would anyone believe anyone else's PR? I guess there has to be some amount of real altruism mixed in; it's just that, when push comes to shove, people who will make decisions truly aligned with altruism (e.g., trying hard to find flaws in their own supposedly altruistic plans, giving up power after gaining it for supposedly temporary purposes, forgoing hidden bets that have positive selfish EV but negative altruistic EV) may be few and far between.
Ignaz Semmelweis
This is just a reasonable decision (from a selfish perspective) that went badly, right? I mean, if you have empirical evidence that hand-washing greatly reduced mortality, it seems pretty reasonable to expect that you could convince the medical establishment of this fact, and as a result gain a great deal of status/influence (which could eventually be turned into power/money).
The other two examples seem like real altruism to me, at least at first glance.
The best you can do is “egoism, plus virtue signalling, plus plain insanity in the hard cases”.
Question is, is there a better explanation than this?
Do you know any good articles or posts exploring the phenomenon of "the road to hell is paved with good intentions"? In the absence of a thorough investigation, I'm tempted to think that "good intentions" is merely a PR front that human brains put up (not necessarily consciously), and that humans deeply aligned with altruism don't really exist, or are even rarer than they appear. See my old post A Master-Slave Model of Human Preferences for a simplistic model that should give you a sense of what I mean... On second thought, that post might be overly bleak as a model of real humans, and the truth might be closer to Shard Theory, where altruism is a shard that only or mainly gets activated in PR contexts. In any case, if this is true, there seems to be a crucial problem of how to reliably do good using a bunch of agents who are not reliably interested in doing good, which I don't see many people trying to solve or even talk about.
(Part of "not reliably interested in doing good" is that you strongly want to do things that look good to other people, but aren't very motivated to find hidden flaws in your plans/ideas that only show up in the long run, or will never be legible to people whose opinions you care about.)
But maybe I'm on the wrong track, and the main root cause of "the road to hell is paved with good intentions" is something else. Interested in your thoughts or pointers.
Over time, I've come to see the top questions as:
In one of your charts you jokingly ask, "What even is philosophy?" but I'm genuinely confused why this line of thinking doesn't lead a lot more people to view metaphilosophy as a top priority, either in the technical sense of solving the problems of what philosophy is and what constitutes philosophical progress, or in the sociopolitical sense of how best to structure society for making philosophical progress. (I can't seem to find anyone else who often talks about this, even among the many philosophers in EA.)
Would be interested in your (eventual) take on the following parallels between FTX and OpenAI:
just felt like SBF immediately became a highly visible EA figure for no good reason beyond $$$.
Not exactly. From Sam Bankman-Fried Has a Savior Complex—And Maybe You Should Too:
It was his fellow Thetans who introduced SBF to EA and then to MacAskill, who was, at that point, still virtually unknown. MacAskill was visiting MIT in search of volunteers willing to sign on to his earn-to-give program. At a café table in Cambridge, Massachusetts, MacAskill laid out his idea as if it were a business plan: a strategic investment with a return measured in human lives. The opportunity was big, MacAskill argued, because, in the developing world, life was still unconscionably cheap. Just do the math: At $2,000 per life, a million dollars could save 500 people, a billion could save half a million, and, by extension, a trillion could theoretically save half a billion humans from a miserable death.
MacAskill couldn’t have hoped for a better recruit. Not only was SBF raised in the Bay Area as a utilitarian, but he’d already been inspired by Peter Singer to take moral action. During his freshman year, SBF went vegan and organized a campaign against factory farming. As a junior, he was wondering what to do with his life. And MacAskill—Singer’s philosophical heir—had the answer: The best way for him to maximize good in the world would be to maximize his wealth.
SBF listened, nodding, as MacAskill made his pitch. The earn-to-give logic was airtight. It was, SBF realized, applied utilitarianism. Knowing what he had to do, SBF simply said, “Yep. That makes sense.” But, right there, between a bright yellow sunshade and the crumb-strewn red-brick floor, SBF’s purpose in life was set: He was going to get filthy rich, for charity’s sake. All the rest was merely execution risk.
To give some additional context, China emitted 11,680 MT of CO2 in 2020, out of 35,962 MT globally. In 2022 it plans to mine 300 MT more coal than the previous year (which also added 220 MT of coal production), causing an additional 600 MT of CO2 from this alone (might be a bit higher or lower depending on what kind of coal is produced). Previously, China tried to reduce its coal consumption, but that caused energy shortages and rolling blackouts, forcing the government to reverse direction.
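The "bit higher or lower" caveat is just the emission factor: burning a tonne of coal releases roughly its carbon content times 44/12, the molecular-weight ratio of CO2 to carbon. A quick sketch with assumed, illustrative carbon fractions (not official figures for Chinese coal):

```python
# Rough CO2 from burning coal: MT_CO2 = MT_coal * carbon_fraction * 44/12.
# The carbon fractions below are assumed illustrative values, not official figures.

CO2_PER_TONNE_CARBON = 44.0 / 12.0

def coal_to_co2(mt_coal, carbon_fraction):
    """Million tonnes of CO2 from burning mt_coal million tonnes of coal."""
    return mt_coal * carbon_fraction * CO2_PER_TONNE_CARBON

for label, cf in [("~50% carbon coal", 0.50), ("~60% carbon coal", 0.60)]:
    print(label, round(coal_to_co2(300, cf)), "MT CO2")
# ~550 MT at 50% carbon and ~660 MT at 60%, consistent with the ~600 MT figure above.
```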
Given this, it's really unclear how efforts like persuading Canadian voters to take climate change more seriously can make enough difference to be considered "effective" altruism. (Not sure if that line in your conclusions is targeted at EAs, or was originally written for a different audience.) Perhaps EAs should look into other approaches (such as geoengineering) that are potentially more neglected and/or tractable?
To take a step back, I'm not sure it makes sense to talk about the "technological feasibility" of lock-in, as opposed to, say, its expected cost: if the only feasible method of lock-in causes you to lose 99% of the potential value of the universe, that seems like a more important piece of information than "it's technologically feasible".
(On second thought, maybe I'm being unfair in this criticism: the feasibility of lock-in is already pretty clear to me, at least if one is willing to assume extreme costs, so I'm more interested in the question of "but can it be done at more acceptable costs?" Perhaps that isn't true of others.)
That aside, I guess I'm trying to understand what you're envisioning when you say "An extreme version of this would be to prevent all reasoning that could plausibly lead to value-drift, halting progress in philosophy." What kind of mechanism do you have in mind for doing this? Also, you distinguish between stopping philosophical progress vs stopping technological progress, but since technological progress often requires solving philosophical questions (e.g., related to how to safely use the new technology), do you really see much distinction between the two?
Consider a civilization that has "locked in" the value of hedonistic utilitarianism. Subsequently, some AI in this civilization discovers what appears to be a convincing argument for a new design of hedonium, one that purports to be 2x more efficient at generating hedons per unit of resources consumed. Except that this argument actually exploits a flaw in the reasoning processes of the AI (a flaw widespread in this civilization), such that the new design is actually optimized for something different from what was intended when the "lock in" happened. The closest this post comes to addressing this scenario seems to be "An extreme version of this would be to prevent all reasoning that could plausibly lead to value-drift, halting progress in philosophy." But even if a civilization were willing to take this extreme step, I'm not sure how you'd design a filter that could reliably detect and block all "reasoning" that might exploit some flaw in your reasoning process.
Maybe in order to prevent this, the civilization tried to lock in "maximize the quantity of this specific design of hedonium" as their goal instead of hedonistic utilitarianism in the abstract. But 1) maybe the original design of hedonium is already flawed or highly suboptimal, and 2) what if (as an example) some AI discovers an argument that it should engage in acausal trade in order to maximize the quantity of hedonium in the multiverse, except that this argument is actually wrong?
This is related to the problem of metaphilosophy, and my hope that we can one day understand "correct reasoning" well enough to design AIs that we can be confident are free from flaws like these, but I don't know how to argue that this is actually feasible.
The point of my comment was that even if you're 100% sure about the eventual interest rate move (which of course nobody can be), you still have major risk from path dependency (as shown by the concrete example). You haven't even given a back-of-the-envelope calculation for the risk-adjusted return, and the "first-order approximation" you did give (which both uses leverage and ignores all risk) may be arbitrarily misleading, even for the purpose of "gives an idea of how large the possibilities are". (Because if you apply enough leverage and ignore risk, there's no limit to how large the possibilities of any given trade appear.)
I thought about not writing that sentence, but figured that other readers can benefit from knowing my overall evaluation of the post (especially given that many others have upvoted it and/or written comments indicating overall approval). Would be interested to know if you still think I should not have said it, or should have said it in a different way.