huw

Co-Founder & CTO @ Kaya Guides
1451 karma · Joined · Working (6–15 years) · Sydney NSW, Australia
huw.cool

Bio


I live for a high disagree-to-upvote ratio

Comments (213)

Answer by huw

Nvidia’s moat comes from a few things. As you pointed out, they have CUDA, a proprietary set of APIs for running parallelised math operations. But they also have the best-performing chips on the market by a long way. This isn’t merely a function of strong optimisation on the software side (possibly replicable by o3, but I’d need to see more evidence to be convinced that an LLM would be good at optimisation) or on the hardware side (much, MUCH trickier for an LLM, given that a lot of the hardware has to operate at the nanometre scale, which is hard to simulate). It’s also because having the most money, a strong track record, and a long-standing relationship with TSMC gets them preferential access to next-gen fabs.

It’s also true that the recent boom has increased investment in running CUDA code on other GPUs; the SCALE project is one such example. This implies (a) that the bottleneck isn’t replicating CUDA’s functionality (which SCALE already does) so much as replicating its performance (where there may still be gains to be made), and/or (b) that the actual moat really does lie in the hardware. Probably a mix of both.
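
To make the CUDA point concrete, here’s a minimal sketch of what CUDA code looks like: a toy element-wise vector-add kernel. It’s illustrative only; the moat is less the API surface shown here than the years of kernel tuning behind libraries like cuBLAS and cuDNN, which projects like SCALE still have to match on performance.

```cuda
// Minimal sketch of a CUDA kernel: element-wise vector addition.
// Toy example only; production workloads lean on heavily tuned vendor
// libraries (cuBLAS, cuDNN), which are the harder part to replicate.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);  // unified memory keeps the example short
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```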

However, this hasn’t stopped other companies from making progress here. I think it’s indicative that DeepSeek V3 was allegedly trained for less than $10m. If this is true, it suggests to me that:

  1. Frontier labs might currently be using their hardware very inefficiently, and if those efficiencies were captured, demand for Nvidia hardware would fall (both because you’d need fewer of their GPUs, and because you wouldn’t need the best of the best to do well)

  2. If it turns out to be cheap to train good LLMs, captured value might shift back to frontier labs, or even to downstream applications. This would reduce Nvidia’s pricing power.

Also, it looks like the competition is catching up anyway. It already seems very reasonable to do inference on Apple or Google chips (Apple Intelligence runs on M2-series chips, which also get top TSMC node access; Google run a lot of inference on their own TPUs). I was particularly impressed that you can run a 600B+ parameter model on 8 Mac Minis, which aren’t even running Apple’s best chips. Even if it’s only inference, that’s a huge chunk of the market that might fall to competitors soon.

So I’m not exactly counting on Nvidia’s moat to hold, but if it breaks, I think it will be for reasons other than automation. Even if you’re very AI-pilled, we still live in a world where market dynamics are much stronger than labour-automation effects. For now :)

huw

I don’t know enough about AMF to answer your question directly, but I can shed some light on market failures by way of analogy to my employer, Kaya Guides, which provides free psychotherapy in India:

  1. Our beneficiaries usually can’t afford psychotherapy outright
  2. They sometimes live rurally, and can’t travel to places that do psychotherapy in person
  3. There are not enough psychotherapists in India for everyone to receive it
  4. The government, likewise, don’t have the capacity or interest to develop the mental health sector enough (against competing health priorities) to make free treatment available
  5. Our beneficiaries usually don’t know what psychotherapy is, that they have a problem at all, or that it can be treated
  6. We are incentivised to make psychotherapy as cheap as possible to reach the worst-served portion of the market, while for-profits are incentivised to compete in more lucrative parts of the market

I can see how many, if not all, of these would be analogous to AMF. The market doesn’t and can’t solve every problem!

Heya, I’m not an AI guy anymore so I find these posts kinda tricky to wrap my head around. So I’m earnestly interested in understanding: If AGI is that close, surely the outcomes are completely overdetermined already? Or if they’re not, surely you only get to push the outcomes by at most 0.1% on the margins (which is meaningless if the outcome is extinction/not extinction)? Why do you feel like you have agency in this future?

I’ve written about this before here, but I think this book actually gives bad strategic advice:

I have a friend who likes to criticize this book by noting that, although Operation Desert Storm was 'good' strategy, it[1]:

  • Took out 96% of civilian electricity production
  • Took out most of the civilian dams & sewage treatment
  • Took out civilian telecommunications, ports, bridges, railroads, highways, and oil refineries
  • Killed 2,278 civilians and wounded 5,965, including 408 sheltering in an air raid shelter

The Gulf War more generally directly killed ~100k Iraqis, of whom ~25k were civilians[2]. The subsequent uprisings killed another ~50k, mostly civilians. And then, because basically all infrastructure was gone and the US imposed trade sanctions, hundreds of thousands more died from starvation and inadequate healthcare, of whom ~47k were children[3].

Okay, but aside from the gotcha about the obvious moral wrongs, this friend argues that Desert Storm was terrible strategy: obliterating an entire country’s infrastructure might have looked cool on TV, but we’re still seeing its destabilising effects on the region. It essentially took offline, for decades, economic value the U.S. could’ve slurped up, and also cost them $8,000,000,000,000 in future wars[4].

It is unlikely that these externalities were counterfactually necessary. In fact, Desert Storm is probably the most salient example of ‘winning the battle but losing the war’ in all of human history.

My friend wraps it up by arguing that this exemplifies the book's blind spots:

  • It ignores essentially all externalities of ‘good’ strategies on an object level
  • It ignores the moral harms of those externalities
  • It ignores those externalities even when they produce blowback that affects your own goals
  • In particular, by ignoring the above, it advocates for strategies that produce more harm than counterfactually necessary

We can extend the book’s definition of good strategy by adding precision to the goals: in general, pick the strategy with the fewest externalities, in order to minimise unnecessary moral harms and blowback. (I would also advocate for not having any ‘necessary’ moral harms, but that seems out of scope for this post.)


I see here that they already work closely with Open Philanthropy and were involved in drafting the Executive Order on AI, so this does not seem like a neglected avenue.

huw

Thank you—that’s very helpful to have all spelled out like that! Once I get my finances in order you may see me pledge ;)

huw

I asked this in-person, but I figure it’d be nice for a broader audience to hear: How should I navigate pledging if I have taken a very low salary to do direct work? In my case, I have taken a salary that roughly covers my expenses without leaving much margin for error. Of course, ‘my expenses’ buries the lede a little bit, because I believe I could make more sacrifices to take 10% off the top, but I think doing so might make me much more anxious or hurt my productivity.

My organisation doesn’t really have much more budget to pay me; that money would be better spent elsewhere. And the market rate for my skills is much higher, even in the non-profit sector, and even in India, where we operate (still probably +50% at a minimum).

If I pledged 10%, would I have to take a higher salary or donate it out of my existing salary? Or is there another way to account for this?

Answer by huw

I agree with you that EA often implicitly endorses conclusions, and that this can be pernicious and sometimes confusing to newcomers. Here’s a really interesting debate on whether biodiversity loss should be an EA cause area, for example.

A lot of forms of global utilitarianism do seem to converge on the ‘big 3’ cause areas of Global Health & Development, Animal Welfare, and Global Catastrophic Risks. If you generally value things like ‘saving lives’ or ‘reducing suffering’, you’ll usually end up at one of these (and most people seem to decide between them based on risk tolerance, assumptions about non-human moral value, or tractability, rather than differences in what they value). From this perspective, it could be reasonable to dismiss cause areas that don’t fit into this value framework.

But this highlights where I think part of the problem lies: value systems outside this framework can also be good targets for effective altruism. If you value biodiversity for its own sake, it’s not unreasonable to ask ‘how can we save the greatest number of valuable species from going extinct?’. Or you might be a utilitarian, but only interested in a highly specific outcome, and ask ‘how can I prevent the most deaths from suicide?’. Or ‘how can I prevent the most suffering in my country?’, which you might not even ask for value-system reasons, but because you have tax credits to maximise!

I wish EA were more open to this, especially as a movement that recognises the value of moral uncertainty. IMHO, some people in that biodiversity loss thread are a bit too dismissive, and I think we’ve probably lost some valuable partners because of it! But I understand the appeal of wanting easy answers and not spending too much time overthinking your value system (I feel the same!).

huw

Similarly, I wonder if one of the major things this group could do together is joint funding, possibly by forming a funding circle. When I was earning to give, I just donated broadly to GiveWell Top Charities because I found cause selection overwhelming, but a community of similar funders doing some hobbyist-type research into causes and charities might’ve engaged me more.
