Follow me on hauke.substack.com
I'm an independent researcher working on EA topics (Global Priorities Research, Longtermism, Global Catastrophic Risks, and Economics).
I'm looking for collaborators, hires, job offers, or grant funding, and I'm happy to give advice and offer research collaborations.
Relatedly: "without the gains of stocks that are possible AI winners, the S&P 500 would now be down 2 per cent this year, rather than up 8 per cent." https://archive.ph/KFMJU
This suggests that the gains from AI might be distributed fairly evenly among the different Big Tech companies, and that economies of scope matter more than relatively small technical leads.
Private R&D cannot be protected perfectly: patents expire, industry know-how diffuses to other firms, and not all rents from investments can be captured. A recently leaked Google memo argued that open-source foundation models are very good and don't need much compute to run. OpenAI's CEO Sam Altman has often highlighted that their models are not based on any one fundamental technical breakthrough, but on thousands of little hacks from tinkering; but perhaps this is wrong and a strategic statement to boost the valuation of the company.
Daniel's Heavy Tail Hypothesis (HTH) vs. a recent comment from Brian, who thinks the classic piece 'Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness' is still essentially valid.
Brian seems to argue that there are at most 3-4 orders of magnitude (OOMs) of difference in cost-effectiveness between interventions, whereas Daniel seems to imply there could be 8-10 OOMs of difference.
And Ben Todd just tweeted about this as well.
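To make the disagreement concrete, here's a minimal sketch with made-up numbers (the specific cost-effectiveness figures are hypothetical illustrations, not anyone's actual estimates) of what a 3-4 OOM vs. an 8-10 OOM spread implies:

```python
# Illustrative only: hypothetical cost-effectiveness figures, not real estimates.
# Cost-effectiveness measured in arbitrary units of good done per dollar.
import math

worst = 1e0         # a weak intervention: 1 unit of good per dollar
best_brian = 1e4    # Brian's view: the best is at most ~3-4 OOMs better
best_daniel = 1e9   # Daniel's view: the best could be ~8-10 OOMs better

def oom_gap(best, worst):
    """Order-of-magnitude gap between two cost-effectiveness estimates."""
    return math.log10(best / worst)

print(oom_gap(best_brian, worst))   # 4.0
print(oom_gap(best_daniel, worst))  # 9.0
```

Under the heavy-tail view, identifying the very best intervention dominates nearly everything else you could do; under Brian's view, the gap is small enough that other considerations (evidence quality, tractability) can plausibly outweigh it.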
What is your production function?