Programme Director at ML4Good. I also do some work on China policy, AI governance, and animal advocacy in Asia.
Also interested in effective giving (mainly animal charities), economic development (and how AI will affect it), AI x Animals, wild animal welfare, cause prioritisation, and various meta-EA topics.
I agree with the "you don't have to debate on their terms" point here, but I think for 99% of your readers/listeners, it cuts far more strongly in a different direction than the one you're implying.
The debate has generally been framed as "Anthropic vs. DoW", and, while I know zero people in our community who have taken the government's side on this, I've seen many EAs and adjacent people become increasingly uncritical supporters of Anthropic, simply because it's standing against the obviously bad actor in this situation.
I think it's important to remember:
Thanks for this post, it's super valuable to get a better sense of this ecosystem.
On the apparent lack of Chinese companies, I think this is a methodological thing; a few possible blind spots:
Hi Klara, thanks for the response.
I don't think assigning moral value to unborn lives means I'm entering the abortion debate, any more than it does in other debates that consider unborn or potential lives (e.g. the ethics of moderate drinking while pregnant, the ethics of having children in space, or the repugnant conclusion).
I think I'm comfortable with having mostly sidestepped the maternal health issues, given that I was focusing on interventions that are robustly good for the mother. If I were to do a stronger and more robust cost-effectiveness analysis, or tackle more controversial interventions where the interests of the mother and child clearly diverged, I would consider maternal health outcomes separately. I hope my piece makes it clear that we should prioritise uncontroversial and neglected interventions that treat or prevent painful conditions that women suffer from.
Although I do recognise that the ethics of pregnancy, lived experience of the mother, and autonomy trade-offs are important considerations, I'm afraid that attempting to tackle these here would have made this an impossibly long post!
When I say “the economics are looking good,” I mean that the conditions for capital allocation towards AGI-relevant work are strong. Enormous investment inflows, a bunch of well-capitalised competitors, and mass adoption of AI products mean that, if someone has a good idea for building AGI within or around these labs, the money is there. This is almost a trivial point: if there were significantly less capital, labs couldn't afford extensive R&D, hardware, or large-scale training runs.
WRT scaling vs. fundamental research: obviously "fundamental research" is a bit fuzzy, but it's pretty clear that labs are doing a bit of everything. DeepMind is the most transparent about this; they're doing Gemini-related model research, fundamental science, AI theory and safety, etc., and have published thousands of papers. But I'm sure a significant proportion of OpenAI's and Anthropic's work can also be classed as fundamental research.
I think there are two categories of answer here: 1) Finance as an input towards AGI, and 2) Finance as an indicator of AGI.
1) is the idea that the bubble popping would make it more difficult to develop AGI because of financial constraints. Regardless of whether you think current LLM-based AI has fundamental flaws, the fact that insane amounts of capital are going into 5+ competing companies providing commonly-used AI products is strong evidence that the economics are looking good (edit: that this avenue of R&D is perceived by multiple independent actors as offering promising returns), and that if AGI is technically possible using something like current tech, then all the incentives and resources are in place to find the appropriate architectures. If the bubble were suddenly to burst completely, then even if we believed strongly that LLM-based AGI is imminent, there might be no more free money, so we'd now have an economic bottleneck on training new models.
In this scenario, we'd have to update our timelines/estimates significantly (especially if you think straightforward scaling is our likeliest pathway to AGI).
2) is the idea that the level of investment in AI provides a signal about the near-term possibility of AGI, so the bubble bursting would be evidence that AGI is further away.
Whether or not this would change my mind depends on the situation. Financial markets are fickle enough that the bubble could pop for a bunch of reasons unrelated to current model trends: rare-earth export controls biting, something TSMC-related in Taiwan, slightly lower uptake figures, the decision of one struggling player (e.g. Meta) to leave the LLM space, or one highly-hyped but ultimately disappointing application, for example. If I were unsure of the reason, would I assume that the market knows something I don't? I might update slightly, but I'm not sure to what extent I'd trust the market to tell me more about AGI than direct information about model capabilities and diffusion does.
But of course, if we do update on market shifts, the updating has to be at least somewhat symmetrical: if a market collapse would lengthen your timelines, then insane market growth should shorten them for the same reason.
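To make that symmetry concrete, here's a toy Bayesian calculation (all the numbers are made up purely for illustration). Suppose your prior on near-term AGI is 30%, and you think a bubble pop is 10% likely if AGI is near but 30% likely if it isn't. Then:

$$P(\text{AGI}\mid\text{pop}) = \frac{0.3 \cdot 0.1}{0.3 \cdot 0.1 + 0.7 \cdot 0.3} \approx 0.13, \qquad P(\text{AGI}\mid\text{no pop}) = \frac{0.3 \cdot 0.9}{0.3 \cdot 0.9 + 0.7 \cdot 0.7} \approx 0.36$$

This is just conservation of expected evidence: if seeing the pop would push your credence down, then not seeing it (i.e. continued growth) has to push it up.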
I definitely identify with where you're coming from here, and these insights also suggest a potential partner post: "How to avoid EA senescence (if you want to)".
Based on your examples, this might look like:
I wrote a (draft amnesty) post on a very small subset of this: which global health charities you should donate to given different worldviews.