I live for a high disagree-to-upvote ratio
Mmm, it’s not merely that finance is drying up: according to OECD data, net financial flows to the Global South were actually negative in 2023 (i.e. they paid more in repayments than they received in new finance).
Nvidia’s moat comes from a few things. As you pointed out, they have CUDA, which is a proprietary set of APIs for running parallelised math operations. But they also have the best-performing chips on the market by a long way. That isn’t merely a function of strong optimisation on the software side (possibly replicable by o3, though I’d need to see more evidence to be convinced an LLM would be good at optimisation) or on the hardware side (much, MUCH trickier for an LLM, given that a lot of the hardware has to operate at nanometre scale, which is hard to simulate); it’s also that having the most money, a strong track record, and an existing relationship with TSMC gets them preferential access to next-gen fabs.
It is also true that the recent boom has increased investment into running CUDA code on other GPUs; the SCALE project is one such example. This implies that (a) the bottleneck is not replicating CUDA’s functionality (which SCALE does), but replicating its performance (where there may still be gains to make), and/or (b) the actual moat really does lie in the hardware. Again, probably a mix of both.
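For a sense of what’s actually being replicated, here’s a minimal sketch of the kind of kernel the CUDA stack exists to run, written with Numba’s CUDA bindings in Python rather than the C++ dialect SCALE actually consumes (all names and sizes are illustrative, and real workloads are matmuls and attention rather than vector adds):

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)              # global thread index
    if i < out.size:              # guard against the last partial block
        out[i] = a[i] + b[i]

n = 1 << 20
a = cuda.to_device(np.random.rand(n).astype(np.float32))
b = cuda.to_device(np.random.rand(n).astype(np.float32))
out = cuda.device_array(n, dtype=np.float32)

threads = 256
blocks = (n + threads - 1) // threads
vector_add[blocks, threads](a, b, out)   # launched across ~1M GPU threads
```

Getting something like this to merely run on another vendor’s GPU is the (comparatively) easy part; matching Nvidia’s performance on it is where the moat shows up.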
However, this hasn’t stopped other companies from making progress here. I think it’s indicative that DeepSeek V3 was allegedly trained for less than $10m (rough arithmetic below the list). If this is true, it suggests to me that:
- Frontier labs might currently be using their hardware very inefficiently. If those efficiencies were captured, demand for Nvidia hardware would fall (both because you’d need fewer GPUs, and because you wouldn’t need the best of the best to do well)
- If it turns out to be cheap to train good LLMs, captured value might shift back to frontier labs, or even to downstream applications. This would reduce Nvidia’s pricing power.
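To gesture at where the sub-$10m figure comes from, here’s the back-of-envelope version using the numbers DeepSeek themselves reported (taken as assumptions here, not independently verified):

```python
# Rough check of the "<$10m" claim from DeepSeek's own reported figures
# (assumptions, not verified): ~2.788M H800 GPU-hours at ~$2/GPU-hour rental.
gpu_hours = 2.788e6
price_per_gpu_hour = 2.0   # USD

cost = gpu_hours * price_per_gpu_hour
print(f"~${cost / 1e6:.1f}M")   # ~$5.6M, comfortably under $10m

# This only counts the final training run (no research, ablations, data work,
# or salaries), so it's a floor rather than the full bill, but the headline
# figure is at least internally consistent.
```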
Also, it looks like the competition is catching up anyway. It seems very reasonable to do inference on Apple or Google chips (Apple Intelligence runs on M2-series chips, which also get top TSMC node access; Google run a lot of inference on their own TPUs). I was particularly impressed that you can run a 600B+ parameter model on 8 Mac Minis, not even running Apple’s best chips. Even if it’s only inference, that’s a huge chunk of the market that might fall to competitors soon.
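The Mac Mini demo is less surprising once you do the memory arithmetic. A rough sketch, assuming a ~671B-parameter model quantised to 4 bits plus some runtime overhead (my assumptions, not the demo’s published config):

```python
# Can a ~600B+ parameter model be sharded across 8 Mac Minis? (back-of-envelope)
params = 671e9          # assumed parameter count
bytes_per_param = 0.5   # 4-bit quantised weights
overhead = 1.2          # rough allowance for KV cache / activations / runtime

total_gb = params * bytes_per_param * overhead / 1e9
per_machine_gb = total_gb / 8

print(f"Total:        ~{total_gb:.0f} GB")        # ~403 GB
print(f"Per Mac Mini: ~{per_machine_gb:.0f} GB")  # ~50 GB

# A 64 GB Mac Mini can expose most of its unified memory to the GPU, so
# splitting the weights eight ways is plausible on paper; the hard part is
# moving activations between machines fast enough.
```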
So I’m not exactly counting on Nvidia to hold, but if it doesn’t, I think it will be for reasons other than automation. Even if you are very AI-pilled, we still live in a world where market dynamics are much stronger than labour automation effects. For now :)
I don’t know enough about AMF to answer your question directly, but I can shed some light on market failures by way of analogy to my employer, Kaya Guides, which provides free psychotherapy in India:
I can see how many, if not all, of these would be analogous to AMF. The market doesn’t and can’t solve every problem!
Heya, I’m not an AI guy anymore so I find these posts kinda tricky to wrap my head around. So I’m earnestly interested in understanding: If AGI is that close, surely the outcomes are completely overdetermined already? Or if they’re not, surely you only get to push the outcomes by at most 0.1% on the margins (which is meaningless if the outcome is extinction/not extinction)? Why do you feel like you have agency in this future?
I’ve written about this before here, but I think this book actually gives bad strategic advice:
I have a friend who likes to criticize this book by noting that, although Operation Desert Storm was 'good' strategy, it[1]:
- Took out 96% of civilian electricity production
- Took out most of the civilian dams & sewage treatment
- Took out civilian telecommunications, ports, bridges, railroads, highways, and oil refineries
- Killed 2,278 civilians and wounded 5,965, including 408 sheltering in an air raid shelter
The Gulf War more generally directly killed ~100k Iraqis, of which ~25k were civilians[2]. The subsequent uprisings killed another ~50k, mostly civilians. And then, because basically all infrastructure was gone and the US imposed trade sanctions, hundreds of thousands more died from starvation and inadequate healthcare, of which ~47k were children[3].
Okay, but aside from the gotcha with the obvious moral wrongs, this friend argues that Desert Storm was terrible strategy: obliterating an entire country's infrastructure might have looked cool on TV, but we're still seeing the destabilizing effects it had on the region. It took economic value the U.S. could've slurped up offline for decades, and also cost them $8,000,000,000,000 in future wars[4].
It is unlikely that these externalities were counterfactually necessary. In fact, Desert Storm is probably the most salient example of 'winning the battle but losing the war' in all of human history.
My friend wraps it up by arguing that this exemplifies the book's blind spots:
- It ignores essentially all externalities of 'good' strategies on an object level
- It ignores those externalities for their moral harms
- It ignores those externalities when they produce blowback that affects your own goals
- In particular, it advocates for strategies that produce more harms than counterfactually necessary by ignoring the above
We can extend the book's definition of good strategy by adding precision to the goals: generally, pick the strategy with the fewest externalities, in order to minimise unnecessary moral harms and blowback. (And I would also advocate for not having any 'necessary' moral harms, but that seems out of scope for this post.)
I think that’s a false dichotomy. It should be possible to have uncomfortable/weird ideas here while treating them with nuance and respect. (Are you instead trying to argue that having a higher bar for these kinds of posts is a bad idea?)
Equally, the original post doesn’t try to understand the perspective that abortion might be net good for the world. So I think the crux might actually be more about who you think should shoulder the burden of attempting-to-understand.