Bio

Programme Director at ML4Good. I also do some work on China policy, AI governance, and animal advocacy in Asia.

Also interested in effective giving (mainly animal charities), economic development (and how AI will affect it), AI x Animals, wild animal welfare, cause prioritisation, and various meta-EA topics.

Comments

I wrote a (draft amnesty) post on a very small subset of this - what global health charities you should donate to given different worldviews.

I agree with the "you don't have to debate on their terms" point here, but I think for 99% of your readers/listeners, it cuts far more strongly in a different direction than the one you're implying.

The debate has generally been set in terms of "Anthropic vs. DoW", and, while I know zero people in our community who have taken the government's side on this, I've seen many EAs and adjacent people become increasingly uncritical supporters of Anthropic, just because it's standing against the obviously bad actor in this situation.

I think it's important to remember:

  1. If you thought Anthropic was untrustworthy before this, you shouldn't update too much the other way - especially when they backtracked on their RSP over the same period.
  2. If you thought that Anthropic's decision to join the race towards AGI was perilous, you shouldn't really update your view on this based on the Pentagon being absurd and unpredictable.
  3. Regardless of the intention and character of the government actors, it's potentially still a worrying sign that the most powerful state in the world has tried to shut down a frontier AI lab and failed spectacularly.
  4. The growth and popularity of Anthropic and Claude Code have since caused the AI 2027 team to shorten their AGI timelines. 

Thanks for this post, it's super valuable to get a better sense of this ecosystem.

On the apparent lack of Chinese companies, I think this is a methodological thing; a few possible blind spots:

  1. Most obviously, English-language, web-based search is probably going to miss some Chinese AI-aquaculture innovators that would otherwise meet the paper’s inclusion criteria. Using Chinese-language platforms might be necessary.
  2. AI for aquaculture in China is often embedded within broader, more integrated "smart aquaculture" systems rather than marketed as standalone AI products. For example, I'm not sure if Limap 励图高科 came up in your search, but it's a huge aquaculture and fisheries innovation company, deploying integrated aquaculture platforms across China and covering over 50 species. Some of their products and systems are explicitly AI-enabled and welfare-relevant (e.g. computer-vision-based fish health and disease detection, visible in Chinese-language demos and videos), but they might have been excluded/missed because they're so broad.
  3. China's innovation system is more state-led than elsewhere, and a lot of innovation happens through universities, Agricultural Science and Technology Parks, local government programmes, "demonstration bases" (示范基地), etc. For example, China's Ministry of Agriculture and Rural Affairs recently released a 主推技术 ("main promoted technology") notice announcing that AI-enabled "smart aquaculture factory" technologies (including behaviour recognition, automated feeding, inspection robots, and large-model-based decision systems) are being supported through national programmes, implying a state-led deployment process. So you might be missing Chinese AI innovation in aquaculture that's not strictly commercial.

I'd lean towards the World Happiness Report results here. IPSOS uses a fully online sample, which means you end up losing the "bottom half" of the population; the World Happiness Report surveys by phone and in person.

Hi Klara, thanks for the response.

I don't think I am entering the abortion debate by assigning moral value to unborn lives any more than I'm entering any other debate that considers unborn or potential lives (e.g. the ethics of moderate drinking while pregnant, the ethics of having children in space, or the repugnant conclusion). 

I think I'm comfortable with having mostly sidestepped the maternal health issues, given that I was focusing on interventions that are robustly good for the mother. If I were to do a stronger and more robust cost-effectiveness analysis, or tackle more controversial interventions where the interests of the mother and child clearly diverged, I would consider maternal health outcomes separately. I hope my piece makes it clear that we should prioritise uncontroversial and neglected interventions that treat or prevent painful conditions that women suffer from.

Although I do recognise that the ethics of pregnancy, lived experience of the mother, and autonomy trade-offs are important considerations, I'm afraid that attempting to tackle these here would have made this an impossibly long post!

When I say “the economics are looking good,” I mean that the conditions for capital allocation towards AGI-relevant work are strong. Enormous investment inflows, a bunch of well-capitalised competitors, and mass adoption of AI products mean that, if someone has a good idea to build AGI within or around these labs, the money is there. This seems like a trivial point: if there were significantly less capital, then labs couldn’t afford extensive R&D, hardware, or large-scale training runs.

WRT scaling vs. fundamental research: obviously "fundamental research" is a bit fuzzy, but it's pretty clear that labs are doing a bit of everything. DeepMind is the most transparent about this: they're doing Gemini-related model research, fundamental science, AI theory and safety, etc., and have published thousands of papers. But I'm sure a significant proportion of OpenAI and Anthropic's work can also be classed as fundamental research.

I think there are two categories of answer here: 1) Finance as an input towards AGI, and 2) Finance as an indicator of AGI.

1) is the idea that the bubble popping would make it more difficult to develop AGI, because of financial constraints. Regardless of whether you think current LLM-based AI has fundamental flaws, the fact that insane amounts of capital are going into 5+ competing companies providing commonly-used AI products should be strong evidence that "the economics are looking good" (edit: that this avenue of R&D is perceived by multiple independent actors as providing promising returns), and that if AGI is technically possible using something like current tech, then all the incentives and resources are in place to find the appropriate architectures. If the bubble were suddenly to burst completely, even if we believed strongly that LLM-based AGI is imminent, there might be no more free money, so we'd now have an economic bottleneck to training new models.

In this scenario, we'd have to update our timelines/estimates significantly (especially if you think straightforward scaling is our likely pathway to AGI).

2) is the idea that the level of investment in AI provides signals about the near-term possibility of AGI, so the bubble bursting would be evidence that AGI is less likely in the near term.
Whether or not this would change my mind depends on the situation. Financial markets are fickle enough that the bubble could pop for a bunch of reasons unrelated to current model trends: rare-earth export controls having an impact, something TSMC-related in Taiwan, slightly lower uptake figures, the decision of one struggling player (e.g. Meta) to leave the LLM space, or one highly hyped but ultimately disappointing application, for example. If I were unsure of the reason, would I assume that the market knows something I don't? I might update slightly, but I'm not sure to what extent I'd trust the market, rather than direct information about model capabilities and diffusion, to provide valuable information about AGI.

But of course, if we do update on market shifts, it has to be at least somewhat symmetrical: if a market collapse would push your timelines out, insane market growth should pull them in for the same reason.

I definitely identify with where you're coming from here, but these insights might also imply a potential partner post on "How to avoid EA senescence (if you want to)".

Based on your examples, this might look like:

  • Specialise, even if it's not your job - dive very deep into at least one relevant EA area. If you can find something interesting and neglected, can you become top 1% knowledgeable (within EA) in an obscure sub-field?
  • Develop (and share) a niche perspective on where to donate based on your specific worldview. If you're very convinced about insect sentience, or you lean negative utilitarian, you will very quickly realise that EA Funds are not the highest EV option for you!
  • Prioritise boosting/maintaining your "EA energy"
  • Host more parties