I'm an independent researcher working on various projects in cause prioritization and global conflict research.
Previously I've been a Research Fellow at the Forethought Foundation, where I worked on What We Owe The Future with Will MacAskill; an Applied Researcher at Founders Pledge; and a Program Analyst for UNDP.
I first got interested in civilizational collapse and global catastrophic risks by working on a Maya archaeological excavation in Guatemala.
I didn't know this, and it's awesome.
What did your work on the Maya teach you about civilizational collapse?
I'm curious who you've seen recommending starting with Mearsheimer? That seems like an unbalanced starting point to me.
I'd personally recommend a textbook, like an older edition of World Politics.
Thanks for writing this. I think a lot of it is pointing at something important. I broadly agree that (1) much of the current AI governance and safety chat too swiftly assumes an us-v-them framing, and that (2) talking about countries as actors obscures a huge amount of complexity and internal debate.
On (2), I think this tendency leads to analysis that assumes more coordination among governments, companies, and individuals in other countries than is warranted. When people talk about "the US" taking some action, readers of this Forum are much more likely to be aware of the nuance this ignores (e.g. that some policy may have emerged from much debate and compromise among different government agencies, political parties, or ideological factions). We're less likely to consider such nuances when people talk about "China" doing something.
That said, I think your claim that governments don't influence AI development [via semiconductor progress] is too strong. For example, this sentence:

"It seems plain that nations are not currently meaningful players in AI development and deployment, absent conspiracy-level secrecy."

seems likely wrong to me. The phrasing ("it seems plain") also suggests to me that you should be somewhat less confident in your views on these issues overall.
Some examples of governments being meaningful players:
There are also historical examples of government action shaping outcomes in the semiconductor (and therefore AI space).
For example, early demand for semiconductors was driven by the US government's military and space programs. And TSMC got started when the Taiwanese administration invited Morris Chang to start a semiconductor company in Taiwan (and provided half his start-up funding) (source: I read this in Chip War, and that take is summarized on the Wikipedia page).
You also write that "Google and Microsoft really care about each other's chip access in a way that they only do to a weaker degree about Alibaba's." That may be true; I don't really know. But I'm pretty confident that the US government does care a lot about whether Google or Alibaba have access to more chips. Hence the export controls, subsidies, and regulations discussed above.
I disagree fwiw. The benefits of transparency seem real but ultimately relatively small to me, whereas there could be strong personal reasons for some people to decline to publicise their participation.
More country-specific content could be really interesting. I'd be interested in broad interviews covering:
This is a tangent, but I think it's important to consider predictors' entire track records, and on the whole I don't think Mearsheimer's is very impressive. Here's a long article on that.
I think this is a ridiculous idea, but the linked article (and headline of this post) is super clickbait-y. This idea is mentioned in two sentences in the court documents (p. 20 of docket 1886, here). All we know is that Gabriel, Sam's brother, sent a memo to someone at the FTX Foundation mentioning the idea. We have no idea if Sam even heard about this or if anyone at the Foundation "wanted" to follow through with it. I'm sure all sorts of wild possibilities got discussed around that time. Based on the evidence, it's a huge leap to say there were desires or plans to act on it.
Thanks for this! I agree interventions in this direction would be worth looking into more, though I'd also say that tractability remains a major concern. I'm also just really uncertain about the long-term effects.
I think the Quincy Institute is interesting but want to note that it's also very controversial. Seems like they can be inflammatory and dogmatic about restraint policies. From an outside perspective I found it hard to evaluate the sign of their impact, much less its magnitude. I don't think I'd recommend 80K put them on the job board right now.
Assume "Philanthropy to the Right-of-Boom" is a roaring success (say, a 95th-percentile good outcome for that report). In a few years, how does the world look different? (Pick any number of years you'd like!)