This is a special post for quick takes by MichaelDickens. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Quick thoughts on investing for transformative AI (TAI)

Some EAs/AI safety folks invest in securities that they expect to go up if TAI happens. I rarely see discussion of the future scenarios where it makes sense to invest for TAI, so I want to do that.

My thoughts aren't very good, but I've been sitting on a draft for three years hoping I develop some better thoughts and that hasn't happened, so I'm just going to publish what I have. (If I wait another 3 years, we might have AGI already!)

When does investing for TAI work?

Scenarios where investing doesn't work:

  1. Takeoff happens faster than markets can react, or takeoff happens slowly but is never correctly priced in.
  2. Investment returns can't be spent fast enough to prevent extinction.
  3. TAI creates post-scarcity utopia where money is irrelevant.
  4. It turns out TAI was already correctly priced in.

Scenarios where investing works:

  5. Slow takeoff, the market correctly anticipates TAI after we do but before it actually happens, and there's a long enough time gap that we can productively spend the earnings on AI safety.
  6. TAI is generally good, but money still has value and there are still a lot of problems in the world that can be fixed with money.

(Money seems much more valuable in scenario #5 than #6.)

What is the probability that we end up in a world where investing for TAI turns out to work? I don't think it's all that high (maybe 25%, although I haven't thought seriously about this).

You also need to be correct about your investing thesis, which is hard. Markets are famously hard to beat.

Possible investment strategies

  1. Hardware makers (e.g. NVIDIA)? Anecdotally this seems to be the most popular thesis. It's the most straightforward idea, but I'm suspicious that a lot of EA support for investing in AI looks basically indistinguishable from typical hype-chasing retail investor behavior. NVIDIA already has a P/E of 56, and there is a 3x levered long NVIDIA ETP. That is not the sort of thing you see when an industry is overlooked. That's not to say NVIDIA is definitely a bad investment; it could be even more valuable than the market already thinks. I'm just wary.
  2. AI companies? This doesn't seem to be a popular strategy; the argument against it is that it's a crowded space with a lot of competition, which will drive margins down. (Whereas NVIDIA has a ~monopoly on AI chips.) Plus I'm concerned that giving more money to AI companies will accelerate AI development.
  3. Energy companies? It's looking like AI will consume quite a lot of energy, but it's not clear that AI will make a noticeable dent in global energy consumption. This is probably the sort of thing you could make reasonable projections for.
  4. Out-of-the-money call options on a broad index (e.g. S&P 500 or NASDAQ)? This strategy avoids making a bet about which particular companies will do well, just that something will do much better than the market anticipates. But I'd also expect that unusually high market returns won't start showing up until TAI is close (even in a slow-takeoff world), so you have less time to use the extra returns to prevent AI-driven extinction.
  5. Commodities? The idea is that anything complicated will become much easier to produce thanks to AI, but commodities won't be much easier to get, so their prices will go up a lot. This is an interesting idea that I heard recently, I have no idea if it's correct.
  6. Momentum funds (e.g. VFMO or QMOM)? The general theory of momentum investing is that the market under-reacts to slow news. The pro of this strategy is that it should work no matter which stocks/industries benefit from AI. The con is that it's slower—you don't buy into a stock until it's already started going up. (I own both VFMO and QMOM (mostly QMOM), a bit because of AI but mainly because I think momentum is a good idea in general.)
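As a rough sketch of the mechanics behind strategy 4, here's the payoff of an out-of-the-money index call held to expiry. All numbers are hypothetical (made-up strike and premium, not real option prices), just to show why this bet only pays if the market moves much more than it anticipates:

```python
# Payoff sketch for an out-of-the-money call option on an index.
# All numbers are hypothetical illustrations, not real prices or advice.

def call_payoff(index_at_expiry: float, strike: float, premium: float) -> float:
    """Profit from buying one call and holding to expiry:
    you lose the premium unless the index ends above the strike."""
    return max(index_at_expiry - strike, 0.0) - premium

# Suppose the index sits at 5,000 and we buy a call 40% out of the money
# (strike 7,000) for a hypothetical premium of 50.
strike, premium = 7000.0, 50.0

for scenario, level in [("no TAI repricing", 5500.0),
                        ("market starts pricing in TAI", 9000.0)]:
    print(scenario, call_payoff(level, strike, premium))
```

In the mundane scenario the option expires worthless and you lose the premium; only a large repricing (here, the index nearly doubling) produces the outsized return the strategy is betting on.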

There is some discussion of strategy 4 on LW at the moment: https://www.lesswrong.com/posts/JotRZdWyAGnhjRAHt/tail-sp-500-call-options

I sold all my NVIDIA stock, since their moat looks weak to me:

https://forum.effectivealtruism.org/posts/rBx9RmJdBJgHkjL4j/will-openai-s-o3-reduce-nvidia-s-moat

I think your reasoning is generally correct. Another argument: if you believe things look sufficiently grim under short timelines, maybe you should invest under the assumption that a recession, or something else, will pop the AI bubble and give us longer timelines.

Re: possible investment strategies, there is a dialogue on LessWrong from November 2023 that I think still holds up. Quoting from the takeaways:


  • Invest like 50% of my portfolio into pretty broad index funds with really no particular specialization

  • Take like 20% of my portfolio and throw it into some more tech/AI focused index fund. Maybe look around for something that covers some of the companies listed here on the brokerage interface that is presented to me (probably do a bit more research here)
  • Invest like 3-5% of my portfolio into each of Nvidia, TSMC, Microsoft, Google, ASML and Amazon
  • Take like 2-5% of my portfolio and use it to buy some options (probably some long-term call options on some of the stocks above), making really sure I buy ones that have limited downside, and see whether I can successfully not blow up that part of my portfolio for like 2 years before I do any more here

And then I probably wouldn't bother much with rebalancing and basically forget about it unless I feel like paying much extra attention.

About energy companies: I think the investment idea is less about AI's effect on general global energy consumption, and more about the companies that are helping to build out and power these large data centres.

Microsoft has been investing in nuclear energy, xAI's Colossus cluster was positioned right next to a natural gas plant, and Sam Altman invested in and is now chair of the board of the nuclear startup Oklo. My understanding is that power substation equipment is also a bottleneck, with equipment like transformers now having lead times of years.

Why does distributing malaria nets work? Why hasn't everyone bought a bednet already?

  • If it's because they can't afford bednets, why don't more GiveDirectly recipients buy them?
  • Is it because nobody in the local area sells bednets? If so, why doesn't anyone sell them?
  • Is it because people don't think bednets are worth it? If so, why do they use the bednets when given them for free?

Merely subsidizing nets, as opposed to distributing them for free, used to be a much more popular idea. My understanding is that that model was nuked by this paper showing that demand for nets falls discontinuously at any positive price (a 60-percentage-point reduction in demand when going from a 100% subsidy to a 90% subsidy). So unless people's valuations of their children's lives are implausibly low, people are making mistakes in their choice of whether or not to purchase a bednet.

New Incentives, another GiveWell top charity, can move people to vaccinate their children with very small cash transfers (I think $10). The fact that $10 can mean the difference between whether or not people protect their children from life-threatening diseases is crazy if you think about it.

This is not a rare finding. This paper found very low household willingness to pay for cleaning up contaminated wells, which cause childhood diarrhea and thus deaths. Its estimates imply that households in rural Kenya are willing to pay at most $770 to prevent their child's death, which just doesn't seem plausible; ergo, another setting where people are making mistakes. Another example: demand for motorcycle helmets is stupidly low and implies that Nairobi residents value a statistical life at $220, less than 10% of annual income. Unless people would actually rather die than give up 10% of their income for a year, this is clearly another case where people's decisions do not reflect their true values.
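The "implied value of a statistical life" in studies like these comes from a simple ratio: if someone declines to pay a cost c for something that reduces their probability of death by Δp, their revealed VSL is at most c / Δp. A minimal sketch with hypothetical numbers (the actual costs and risk reductions in the cited papers differ):

```python
# Revealed upper bound on the value of a statistical life (VSL).
# Numbers below are hypothetical, purely for illustration.

def implied_vsl_upper_bound(cost: float, risk_reduction: float) -> float:
    """If someone declines to pay `cost` for a `risk_reduction` drop in
    death probability, their VSL is at most cost / risk_reduction."""
    return cost / risk_reduction

# E.g. declining a $22 safety product that would cut one's probability of
# death by 0.1 implies valuing one's life at no more than $220.
print(implied_vsl_upper_bound(22.0, 0.1))
```

The implausibility argument is just that these upper bounds come out far below any reasonable estimate of what people actually value their lives at, so the purchasing decision, not the valuation, is likely the mistake.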

This is not that surprising if you think about it. People in rich countries and poor countries alike are really bad at investing in preventative health. Each year I dillydally on getting the flu vaccine, even though I know the benefits are way higher than the costs, because I don't want to make the trip to CVS (an hour out of my day, max). My friend doesn't wear a helmet when cycling, even at night or in the rain, because he finds it inconvenient. Most of our better health in the rich world doesn't come from us actively making better health decisions, but from our environment enabling us to not need to make health decisions at all.

I think this is the best explanation I've seen; it sounds likely to be correct.

I'm pretty sure the personal benefits of getting the flu vaccine for a male in his 20s or 30s are not much higher than the costs. Agree on the bike helmet thing though.

Alexander Berger answered pretty much this exact question on an old 80k episode.

Felt a little scared realizing that that episode is over 3 years old. It's such a great one and I return to it often!

huw
I don’t know enough about AMF to answer your question directly, but I can shed some light on market failures by way of analogy to my employer, Kaya Guides, which provides free psychotherapy in India:

  1. Our beneficiaries usually can’t afford psychotherapy outright
  2. They sometimes live rurally, and can’t travel to places that do psychotherapy in person
  3. There are not enough psychotherapists in India for everyone to receive it
  4. The government, equally, don’t have the capacity or interest to develop the mental health sector enough (against competing health priorities) to make free treatment available
  5. Our beneficiaries usually don’t know what psychotherapy is, or that they have a problem at all, nor that it can be treated
  6. We are incentivised to make psychotherapy as cheap as possible to reach the worst-served portion of the market, while for-profits are incentivised to compete in more lucrative parts of the market

I can see how many, if not all, of these would be analogous to AMF. The market doesn’t and can’t solve every problem!

That sounds pretty reasonable as an explanation for why psychotherapy isn't as widespread as it should be. It looks to me like most of these reasons wouldn't apply to AMF. Training new psychotherapists takes years and tens of thousands of dollars (at developing-world wages), whereas getting more malaria nets just requires buying more $5 malaria nets, and distributing malaria nets is much easier than distributing psychotherapists. So reasons 1–3 and #6 don't carry over (or at least not to nearly the same extent). #4 doesn't seem relevant to my original question, so I think #5 is the only one that carries over—recipients might not know that they should be concerned about malaria.

Effective bednets have a relatively short shelf life due to both loss of insecticide and physical damage.

People in target regions can and do buy bednets, though for much of the target market the cost might still represent a day's income, so they won't necessarily be inclined to replace them at optimal intervals. (On the other hand, it's a tiny fraction of a typical GiveDirectly handout, which is probably why "people buy bednets with it" isn't a major feature of their research even in regions with significant malaria.) Consumers see alternative products (not necessarily as effective) purporting to achieve mosquito control in the same shops; they won't necessarily prioritise purchasing replacement nets when it represents a large spend for them and their existing bednet doesn't obviously seem to be failing; and people who are relatively informed about malaria prevention are also informed that governments and NGOs tend to dispense bednets for free. Programmes dispensing free nets tend to provide advice on using them properly too.

Bednets on sale in some local markets are often untreated, so buying replacements locally isn't necessarily even a good decision.

How strong is the evidence for bednets being effective?

A priori, there is an unsurprising mistake the researchers could have made in reaching this conclusion, and they would have an incentive to make such a mistake.

A priori, bednets being very effective is a bit surprising.

What is the strongest study that supports this conclusion?

The evidence is quite strong. You can most likely get more detail than you ever wanted from the GiveWell review.

Thanks.

It seems like there are 4 studies with extended follow up -- Binka et al https://doi.org/10.1016/S0035-9203(02)90321-4 , Diallo et al https://pmc.ncbi.nlm.nih.gov/articles/PMC2585912/ , Lindblade et al https://doi.org/10.1001/jama.291.21.2571 , Louis et al https://doi.org/10.1111/j.1365-3156.2012.02990.x -- but not of the type that would be directly informative.

As Binka et al say “The original trials ran for only 1-2 years each. At the end of these periods, the efficacy of the intervention was considered proven and the control groups were provided with nets or curtains, thus these trials could not be used to demonstrate the effects of long-term transmission control.”.

"Are Ideas Getting Harder to Find?" (Bloom et al.) seems to me to suggest that ideas are actually surprisingly easy to find.

The paper looks at the difficulty of finding new ideas in a variety of fields. It finds that in all cases, effort on finding new ideas is growing exponentially over time, while new ideas are growing exponentially but at a lower rate. (For a summary, see Table 7 on page 31.) This is framed as a surprising and bad thing.

But it actually seems surprisingly good to me. My intuition is that the number of ideas should grow logarithmically with effort, or possibly even sub-logarithmically. If effort is growing exponentially, we'd expect to see linear or sub-linear growth in ideas. But instead we see exponential growth in ideas.
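To make that intuition concrete (a toy sketch, not the paper's actual model): if ideas grew logarithmically with effort, then exponentially growing effort would produce only linear growth in ideas over time, whereas observing exponential idea growth under exponential effort implies faster-than-logarithmic returns.

```python
import math

# Toy comparison of idea-production models under exponentially growing
# research effort: effort(t) = e^(g*t). All numbers are illustrative.
g = 0.1
times = list(range(0, 101, 10))
effort = [math.exp(g * t) for t in times]

# Model A: ideas = log(effort). Then ideas(t) = g*t, i.e. linear in time,
# which matches the "logarithmic returns" intuition.
ideas_log = [math.log(e) for e in effort]

# Model B: ideas = effort^0.5. Ideas still grow exponentially in time
# (at rate g/2, slower than effort) -- closer to what Bloom et al.
# actually report: exponential growth at a lower rate than effort.
ideas_power = [e ** 0.5 for e in effort]

print(ideas_log[:3])    # roughly 0.0, 1.0, 2.0 -- linear in t
print(ideas_power[:3])  # exponential in t, just with a smaller exponent
```

On this framing, the paper's finding looks like Model B rather than Model A, i.e. better-than-logarithmic returns to research effort.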

I don't have a great understanding of the math used in this paper, so I might be misinterpreting something.

Bloom et al. do report exponential growth of various metrics, but I don't think these metrics are well-characterized by 'ideas'. They are things like price-performance of transistors or crop yields per area.

If we instead attempt to measure progress by something like 'number of ideas', there is some evidence in favor of your guess that "ideas should grow logarithmically with effort". E.g., in a review of the 'science of science', Fortunato et al. (2018) say (emphases mine):

Early studies discovered an exponential growth in the volume of scientific literature, a trend that continues with an average doubling period of 15 years. Yet, it would be naïve to equate the growth of the scientific literature with the growth of scientific ideas. [...] Large-scale text analysis, using phrases extracted from titles and abstracts to measure the cognitive extent of the scientific literature, have found that the conceptual territory of science expands linearly with time. In other words, whereas the number of publications grows exponentially, the space of ideas expands only linearly.

Bloom et al. also report a linear increase in life expectancy in sc. 6. I vaguely remember that there are many more examples where exponential growth becomes linear once evaluated on some other 'natural' metrics, but I don't remember where I saw them. Possibly in the literature on logarithmic returns to science. Let me know if it'd be useful if I try to dig up some references.

ETA: See e.g. here, number of known chemical elements. Possibly there are more example in that SSC post.

Looking at the Decade in Review, I feel like voters systematically over-rate cool but ultimately unimportant posts, and systematically under-rate complicated technical posts that have a reasonable probability of changing people's actual prioritization decisions.

Example: "Effective Altruism is a Question (not an ideology)", the #2 voted post, is a very cool concept and I really like it, but ultimately I don't see how it would change anyone's important life decisions, so I think it's overrated in the decade review.

"Differences in the Intensity of Valenced Experience across Species", the #35 voted post (with 1/3 as many votes as #2), has a significant probability of changing how people prioritize helping different species, which is very important, so I think it's underrated.

(I do think the winning post, "Growth and the case against randomista development", is fairly rated because if true, it suggests that all global-poverty-focused EAs should be behaving very differently.)

This pattern of voting probably happens because people tend to upvote things they like, and a post that's mildly helpful for lots of people is easier to like than a post that's very helpful for a smaller number of people.

(For the record, I enjoy reading the cool conceptual posts much more than the complicated technical posts.)
