Bayesian Investor, a former Overcoming Bias co-blogger, made this claim here. However, Eliezer Yudkowsky, in Inadequate Equilibria, seems to think beating the market in this way is impossible. I don't know who to believe. More information on Eliezer's criticisms and Bayesian Investor's replies to them can be found in the comments on the linked page.

Whether or not Bayesian Investor's strategy works is extremely important to know, because if it does work, effective altruists could substantially increase their wealth and thus their effectiveness at altruism. To give an extremely rough sense of the stakes: if 1,000 effective altruists each with $100,000 in investments earned an extra 3% in returns, they would collectively make an extra $3,000,000 per year. And this ignores the possibility of reinvesting the money, which could further increase earnings. We could all follow Bayesian Investor's advice just in case it turns out to be right, but the funds he recommends investing in have significant management fees, and paying them for no benefit would be quite costly in the long run.

I would very much like to know if Bayesian Investor's investment advice is worth following, and if it is, how we should go about spreading word of it to other effective altruists.

Edit: I've been researching the validity of Bayesian Investor's advice, and the advice seems to be reasonable.


I think that the post's points on markets not being fully efficient are reasonable, but I also think that any reliable strategy will see its value get eaten pretty quickly, unless it relies on something that only a few people know about.

It seems like the author agrees:

All of the approaches I’ve mentioned are likely to outperform by less than history suggests, due to the increased amount of investment attempting to exploit them. That increased investment over the past decade or two doesn’t look large compared to the magnitude of the inefficiencies, but if trading volume on these etf’s becomes comparable to that of leading etf’s, I’d expect the benefits of these etf’s to go to zero.

The 3% number may already be accounting for this trend, but even if it does, this approach comes with (I assume) a bit more risk than a standard index-fund strategy, plus the need to occasionally rebalance one's portfolio, the fear that comes with sometimes underperforming the market, etc.

And of course, the outside view lets us notice that nearly every person who ever thought they could reliably beat the market was wrong, including many people who were at least as well-informed as the author. That reduces my expectation for the benefits of this strategy. I'd put higher credence on it working than I would for most investor-recommended strategies, since I have a pretty high opinion of the author's past work, but I wouldn't go so far as to advocate that any particular EA follow the strategy to the letter (especially since each person has their own financial strategy/goals).

My approach, as someone who doesn't want to spend a lot of time thinking about small percentage gains/messing around with rebalancing and such, is to just use Betterment (Wealthfront is equally good), an index-fund startup that handles rebalancing and tax benefits for you. I've had solid, slightly-above-market returns from them for years, and the interface is really good.

I agree that Bayesian Investor's strategy has a high chance of not beating the market, has somewhat higher risk, and would probably result in you occasionally rebalancing your portfolio, but it still seems like it's very much worth using or at least worth having someone look into.

The funds Bayesian Investor suggests you invest in are ETFs, which I think decreases the need for frequent rebalancing. And rebalancing takes little time: all you need to do is buy and sell from a handful of ETFs, which I doubt would take much more than an hour or so. There are also transaction costs from buying and selling, but these are low too, probably under $100 per year.
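To make concrete how little work a rebalance is, here's a minimal sketch; the tickers, target weights, and dollar amounts are made-up placeholders, not Bayesian Investor's actual recommendations:

```python
# Hypothetical target weights and current holdings; none of these are
# actual recommendations from Bayesian Investor's post.
targets = {"ETF_A": 0.40, "ETF_B": 0.35, "ETF_C": 0.25}         # target portfolio weights
holdings = {"ETF_A": 48_000, "ETF_B": 30_000, "ETF_C": 22_000}  # current value in $

total = sum(holdings.values())
for ticker, weight in targets.items():
    drift = holdings[ticker] - weight * total   # dollars above/below target
    action = "sell" if drift > 0 else "buy"
    print(f"{action} ${abs(drift):,.0f} of {ticker}")
```

Computing and placing these few trades once in a while is plausibly the whole job.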

I understand many people knowledgeable about investing have thought they could beat the market and were wrong, but how many people were knowledgeable about both investing and rationality and were still wrong? Given how few rationalists there are, I doubt there have been many.

If we assign a 1/3 chance that the strategy beats the market by 3% and a 2/3 chance that it merely matches the market, then with $100,000 in investments the strategy would in expectation increase our earnings by $1,000 per year. I extremely roughly estimate the annual cost of rebalancing to be $100 per year, including lost earnings due to opportunity cost, which leaves a net expected profit of $900 per year. This sounds like a good deal to me. And with $1,000,000 invested it would net $9,900 per year.
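The arithmetic behind those figures, as a quick sanity check (the probability, outperformance, and cost are the assumptions stated above):

```python
# Expected-value check of the numbers above.
portfolio = 100_000
p_works = 1 / 3        # assigned chance the strategy beats the market
excess = 0.03          # outperformance if it works
annual_cost = 100      # rough rebalancing + transaction cost

expected_gain = p_works * excess * portfolio   # $1,000 per year
net = expected_gain - annual_cost              # $900 per year
print(f"expected net benefit: ${net:,.0f}/year")
```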

There's also the potentially increased risk, but for the amounts of money we're dealing with, I think we should be pretty much risk neutral. This is because individuals who aren't extremely wealthy generally can only change the amount of funding a non-tiny organization gets by a small amount, and when the change in funding is small, diminishing marginal returns to funding have little effect.

Am I missing something?


I don't know much about investing, but a couple of quick comments might be helpful:

I understand many people knowledgeable about investing have thought they could beat the market and were wrong, but how many people were knowledgeable about both investing and rationality and were still wrong? Given how few rationalists there are, I doubt there have been many.

Is there any empirical reason to think that knowledge about 'rationality' is particularly helpful for investing?

If we assign a 1/3 chance that the strategy beats the market by 3% and a 2/3 chance that it merely matches the market

1/3 chance seems possibly orders of magnitude too high to me.

Is there any empirical reason to think that knowledge about 'rationality' is particularly helpful for investing?

Yes. Rationalists are likely to know about, and adjust for, overconfidence bias, and to avoid the base-rate fallacy. Presumably Bayesian Investor already knows that most people who thought they could beat the market were wrong, and thus took this into account when forming their belief that the strategy can beat the market.

And it's not necessarily the case that Bayesian Investor's strategy is worth doing for everyone and that people just don't do it because they're stupid. The strategy carries more risk than average strategies, and this alone is possibly a good enough reason for most people to avoid it. Effective altruists, however, should probably be less risk averse and thus the strategy is more likely to be useful for them.

Also, we've been saying most people who think they can beat the market are wrong, but on reflection I'm not sure that's true. My understanding is that using leverage can, in expectation, result in beating the market, and I suspect this is well known among those knowledgeable about investing. People just avoid doing it because it's very risky.

Hi, I'm Bayesian Investor.

I doubt that following my advice would be riskier than the S&P 500 - the low volatility funds reduce the risk in important ways (mainly by moving less in bear markets) that roughly offset the features which increase risk.

It's rational for most people to ignore my advice, because there's lots of other (somewhat conflicting) advice out there that sounds equally plausible to most people.

I've got lots of evidence about my abilities (I started investing as a hobby in 1980, and it's been my main source of income for 20 years). But I don't have an easy way to provide much evidence of my abilities in a single blog post.

What sort of other advice is out there that's somewhat conflicting but equally plausible? The only advice I can think of is that you should basically just stick your money in whatever diversified index funds have the lowest fees. But even if this advice is just as plausible as your advice, your advice still seems worth taking. This is because if you're wrong and I follow your strategy anyway, pretty much the only cost I bear is a small reduction in returns due to increased management fees. But if you're right and I don't follow your strategy, I miss out on a much larger amount of returns.

Here are a few examples of strategies that look (or looked) equally plausible, from the usually thoughtful blog of my fellow LessWronger Colby Davis.

This blog post recommends:
- emerging markets, which overlaps a fair amount with my advice
- put-writing, which sounds reasonable to me, but he managed to pick a bad time to advocate it
- preferred stock, which looks appropriate today for more risk-averse investors, but which looked overpriced when I wrote my post.

This post describes one of his failures. Buying XIV was almost a great idea. It was a lot like shorting VXX, and shorting VXX is in fact a good idea for experts who are cautious enough not to short too much (alas, the right amount of caution is harder to know than most people expect). I expect the rewards in this area to go only to those who accept hard-to-evaluate risks.

This post has some strategies that require more frequent trading. I suspect they're good, but I haven't given them enough thought to be confident.

Bayesian Investor's recommendations are actually pretty similar to the more advanced portfolio recommended in Ben Todd's post Common investing mistakes in the effective altruism community. The article recommends "[adding] tilts to the portfolio for value, momentum and low volatility (either through security selection or asset selection or adding a long-short component) and away from assets owned for noneconomic reasons" as an advanced move that should only be done if "you know what you’re doing."

Likewise, Bayesian Investor's recommended portfolio heavily involves low-volatility and fundamentally weighted (value-tilted) ETFs.

These articles reach fairly similar conclusions because academic research indicates that these strategies have historically outperformed market capitalization–weighted indexes (commonly known as "the market"). Various theories exist about why these strategies outperformed historically and whether they can be expected to outperform in the future.

Your observation that investing is important for EA because it can significantly increase funding for the EA community is why I'm working on Antigravity Investments, a social enterprise with the goal of improving investment returns in the EA community. Right now, we're picking the lowest hanging fruit by recommending that EA organizations move low-interest cash reserves into high-interest and low-risk savings options (see my EA forum article), which is essentially a guaranteed 2.5% improvement in returns every year at current interest rates. If we shift $15 million in cash, that's another $1 million in direct funding for high impact charities over three years.
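As a rough check on that arithmetic (using only the rate and amounts stated above):

```python
# Rough check of the "$1 million over three years" figure above.
cash_moved = 15_000_000   # cash shifted into higher-interest options
rate_improvement = 0.025  # improvement in annual returns at current rates
years = 3
extra_funding = cash_moved * rate_improvement * years
print(f"${extra_funding:,.0f} over {years} years")  # $1,125,000, i.e. roughly $1 million
```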

Interestingly enough, our most recommended option is both safer and higher-interest than storing large amounts of cash in a checking account.

While there may be obvious things that EAs should be doing, unfortunately it is very difficult to effect behavior change. My current approach to behavior change with regard to investing is to have an organization with specific expertise in investing help other EAs and EA organizations implement sensible recommendations. This approach seems to be more effective than writing articles online, since it removes the prerequisites of having adequate expertise and time to learn about and implement sensible investing practices.

I wrote the high-yielding cash equivalents article because the recommendation seems particularly easy and obvious to implement. So far, although my article was well received, I haven't heard from any EA organization that implemented the recommendation after reading the article, though organizations I've directly reached out to in the past have implemented it. I'm currently in the (very slow) process of doing more direct outreach to EA organizations to determine for them (and for us) whether our recommendation is worth implementing.

To answer your question about whether the advice is worth following, my personal opinion is that some of Bayesian Investor's recommendations are worth diversifying (tilting) into, at a level that reflects each investor's confidence that the anomaly will persist into the future. The low volatility factor in particular has achieved very high out-of-sample risk-adjusted and absolute returns, which is promising, but of course a prolonged period of underperformance could be on the horizon, hence the importance of diversifying.
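To illustrate what "tilting at a level that reflects confidence" could look like mechanically, here is a minimal sketch; the factor sleeve, the 20% maximum tilt, and the 0.3 credence are all illustrative assumptions, not a recommendation:

```python
# Blend a baseline market portfolio with a factor sleeve, scaled by credence
# that the anomalies persist. All numbers here are illustrative assumptions.
market = {"total_market": 1.00}                 # baseline portfolio weights
sleeve = {"low_volatility": 0.5, "value": 0.5}  # hypothetical factor sleeve

confidence = 0.3   # credence that the factor premia will persist
max_tilt = 0.20    # cap: never move more than 20% of the portfolio
tilt_share = confidence * max_tilt              # here, 6% of the portfolio

weights = {k: w * (1 - tilt_share) for k, w in market.items()}
for k, w in sleeve.items():
    weights[k] = weights.get(k, 0.0) + w * tilt_share
print({k: round(w, 2) for k, w in weights.items()})
# {'total_market': 0.94, 'low_volatility': 0.03, 'value': 0.03}
```

Higher confidence scales the tilt up toward the cap; zero confidence leaves you with the plain market portfolio.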

I'm wondering why Todd suggests that "adding tilts to the portfolio for value, momentum and low volatility (either through security selection or asset selection or adding a long-short component) and away from assets owned for noneconomic reasons" should only be done if you know what you're doing. Bayesian Investor's recommendations seem to do this without requiring you to be very knowledgeable about investing.

It takes a certain degree of investment knowledge and time to form an opinion about the historical performance of different factors and expected future performance. It also requires knowledge and time to determine how to appropriately incorporate factors into a portfolio and how to adjust exposure over time. For example, what should be done if a factor underperforms the market for a noticeable period of time? An investor needs to decide whether to reduce or eliminate exposure to a factor or not. Holding an investment that will continue to underperform is bad, but selling an investment that is experiencing cyclical underperformance is a bad timing decision which will worsen performance each time such an error is made.

As a concrete example, the momentum factor has had notable crashes throughout history that could cause concern and uncertainty among investors who were not expecting that behavior. Decisions to add factors to portfolios need to take into account maintaining an appropriate level of diversification, tax concerns (selling a factor fund could incur capital gains taxes, and factor mutual funds pass the capital gains they incur while tracking factors on to investors, whereas factor ETFs almost certainly won't), and the impact of fees, among other considerations.

It takes a certain degree of investment knowledge and time to form an opinion about the historical performance of different factors and expected future performance.

People who are knowledgeable about investing, e.g. Ben Todd and Bayesian Investor, have already formed opinions about the future expected performance of different factors. Is there something wrong with non-advanced investors just following their advice? Perhaps this wouldn't be optimal, but I'm having a hard time seeing how it could be worse than not adding any tilts.

It also requires knowledge and time to determine how to appropriately incorporate factors into a portfolio and how to adjust exposure over time.

If a non-advanced investor using the recommended tilts merely maintains their current level of exposure when they should have adjusted it, it seems unlikely to me that they would end up underperforming a no-tilt strategy by much; even if the tilts no longer provide excess returns, I don't see why they would end up doing *worse* than the market. And perhaps eventually some knowledgeable investor would write a blog post saying you should stop tilting toward those factors.

The potential downside (and upside) of diversifying by adding some tilts and consistently sticking with them is limited, so I don’t see a major problem with “non-advanced investors” following the advice. Investors should be aware of things like rebalancing and capital gains tax; perhaps “intermediate investor” is a better term.

Interesting. Do you have any thoughts on why it's so hard to effect behavior change?

There are various frameworks, like the transtheoretical model (TTM), that try to explain why individual behavior change is difficult. There are many prerequisites to change, like making people aware there may be an issue in the first place, convincing them that the possible problem is an actual problem, persuading them that the issue is urgent enough that they should work on it in the near future, and helping them develop an effective plan of action. There are reasons why people do not proceed at every step, like smokers believing that smoking is not harmful to them, or a perceived lack of urgency or time to implement changes in the near future. This problem may be magnified within organizations, because multiple people within an organization often need to agree that a change is necessary and should be implemented before anything gets done, and anyone who disagrees in the chain of command could prevent the intended change from happening.
