Thank you for the detailed response. Some responses to your points:
Our values might get locked in this century through technology or totalitarian politics, in which case we need to rush to reach something tolerable as quickly as possible;
I'm having a hard time thinking of how technology could lock in our values. One possibility is that AGI would be programmed to value what we currently value with no ability to have moral growth. However, it's not clear to me why anyone would do this. People, as best as I can tell, value moral growth and thus wou...
I'm wondering what you mean when you say, "I think there are many 'AI catastrophes' that would be quite compatible with alien civs." Do you think that there are relatively probably existential catastrophes from rogue AI that would allow for alien colonization of Earth? I'm having a hard time thinking of any and would like to know your thoughts on the matter.
I'm not really considering AI ending all life in the universe. If I understand correctly, it is unlikely that we or future AI will be able to influence the universe outside of our Hubble sphere. However, there may be aliens that exist now, or will exist in the future, within our Hubble sphere, and I think it would more likely than not be a good thing if they were able to make use of our galaxy and the ones surrounding it.
As a simplified example, suppose there is on average one technologically advanced civilization for every group of 100 galaxies. And each civilizatio...
It takes a certain degree of investment knowledge and time to form an opinion about the historical performance of different factors and expected future performance.
People who are knowledgeable about investing, e.g. Ben Todd and Bayesian Investor, have already formed opinions about the future expected performance of different factors. Is there something wrong with non-advanced investors just following their advice? Perhaps this wouldn't be optimal, but I'm having a hard time seeing how it could be worse than not adding any tilts.
It also requires...
I'm wondering why Todd suggests that "adding tilts to the portfolio for value, momentum and low volatility (either through security selection or asset selection or adding a long-short component) and away from assets owned for noneconomic reasons" should only be done if you know what you're doing. Bayesian Investor's recommendations seem to do this without requiring you to be very knowledgeable about investing.
What sort of other advice is out there that's somewhat conflicting but equally plausible? The only one I can think of is that you should basically just stick your money in whatever diversified index funds have the lowest fees. But even if this advice is just as plausible as your advice, your advice still seems worth taking. This is because if you're wrong and I follow your strategy anyway, pretty much the only cost I bear is a small decrease in my returns due to increased management fees. But if you're right and I don't follow your strategy, I'd miss out on a much larger amount of returns.
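To make that asymmetry concrete, here's a rough expected-value sketch. All of the numbers are hypothetical placeholders I'm making up purely to illustrate the payoff structure, not estimates of how the tilted strategy would actually perform:

```python
# Hypothetical numbers to illustrate the asymmetric payoff of adopting a
# tilted strategy versus plain low-fee index funds.
p_tilts_work = 0.5            # assumed chance the tilts genuinely add value
extra_return_if_right = 0.01  # e.g. +1 percentage point/year if they do
extra_fees = 0.002            # e.g. +0.2 percentage points/year in fees

# Expected annual return difference from adopting the tilted strategy:
expected_gain = p_tilts_work * extra_return_if_right - extra_fees
print(f"{expected_gain:.4f}")  # 0.5 * 0.01 - 0.002 = 0.003, i.e. +0.3 pp/year
```

The point is just that the downside (the fee drag) is bounded and small, while the upside scales with how much value the tilts add, so the bet can look worthwhile even at fairly modest probabilities of the strategy working.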
Is there any empirical reason to think that knowledge about 'rationality' is particularly helpful for investing?
Yes. Rationalists are likely to know about, and adjust for, overconfidence bias, and to avoid the base-rate fallacy. Presumably Bayesian Investor already knows that most people who thought they could beat the market were wrong, and thus took this into account when forming their belief that the strategy can beat the market.
And it's not necessarily the case that Bayesian Investor's strategy is worth doing for everyone and that p...
I agree that Bayesian Investor's strategy has a high chance of not beating the market, carries somewhat higher risk, and would probably require occasionally rebalancing your portfolio, but it still seems very much worth using, or at least worth having someone look into.
The funds Bayesian Investor suggests you invest in are ETFs, which I think decreases the need for doing much rebalancing. And rebalancing takes little time. All you need to do is buy and sell from a handful of ETFs; I doubt this would take much more than an hour or so. T...
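The rebalancing step really is that mechanical. As a minimal sketch (the tickers and target weights below are hypothetical placeholders, not Bayesian Investor's actual portfolio): given current holdings and target weights, you just compute how much of each ETF to buy or sell:

```python
# Minimal rebalancing sketch: compute the dollar amount to buy (+) or
# sell (-) of each ETF to restore the target weights.
# Tickers and weights are hypothetical, for illustration only.
holdings = {"ETF_A": 6000.0, "ETF_B": 3000.0, "ETF_C": 1000.0}
targets  = {"ETF_A": 0.50,   "ETF_B": 0.30,   "ETF_C": 0.20}

total = sum(holdings.values())  # current portfolio value
trades = {etf: targets[etf] * total - holdings[etf] for etf in holdings}
for etf, amount in trades.items():
    print(f"{etf}: {amount:+.2f}")
# ETF_A: -1000.00, ETF_B: +0.00, ETF_C: +1000.00
```

With only a handful of funds, that's a few minutes of arithmetic and a few trades, which supports the point that the ongoing time cost is small.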
I think that in the future, people will stop acquiring food by making animals suffer in factory farms anyway. This is because people will presumably be able to live in virtual realities and efficiently create virtual food without causing any suffering. Thoughts?