[Idea to reduce investment in large training runs]
OpenAI is losing lots of money every year. They need continuous injections of investor cash to keep doing large training runs.
Investors will only invest in OpenAI if they expect to make a profit. They only expect to make a profit if OpenAI is able to charge more for their models than the cost of compute.
Two possible ways OpenAI can charge more than the cost of compute:
* Uniquely good models. This one's obvious.
* Switching costs. Even if OpenAI's models are just OK, if your AI application is already programmed to use OpenAI's API, you might not want to bother rewriting it.
Conclusion: If you want to reduce investment in large training runs, one way to do it is to reduce switching costs for LLM users. Specifically, you could write a bunch of really slick open-source libraries (one for every major programming language) that abstract away the details of OpenAI's API and make it trivial to drop in a competing product from Anthropic, Meta, etc. Ideally the libraries would even abstract away various LLM-specific quirks related to prompts, confabulation, etc.
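To make this concrete, here's a minimal sketch of the kind of interface such a library might expose. Every name in it (`Reply`, `PROVIDERS`, `complete`, the stub backends) is invented for illustration, not taken from any real library:

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Reply:
    text: str

# Registry mapping short provider names to completion functions.
PROVIDERS: Dict[str, Callable[[str], Reply]] = {}

def register(name: str):
    """Decorator that registers a provider backend under a short name."""
    def wrap(fn: Callable[[str], Reply]) -> Callable[[str], Reply]:
        PROVIDERS[name] = fn
        return fn
    return wrap

@register("openai")
def _openai_backend(prompt: str) -> Reply:
    # A real backend would call OpenAI's API here and normalize the response.
    return Reply(text=f"(stub openai reply to: {prompt!r})")

@register("anthropic")
def _anthropic_backend(prompt: str) -> Reply:
    # A real backend would call Anthropic's API here and normalize the response.
    return Reply(text=f"(stub anthropic reply to: {prompt!r})")

def complete(prompt: str, provider: str = "openai") -> Reply:
    """Application code calls this; switching vendors is a one-string change."""
    return PROVIDERS[provider](prompt)
```

The point of the registry is that adding or swapping a vendor never touches application code, which is exactly the switching cost this plan attacks.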
This pushes LLM companies closer to a world where they're competing purely on price, which reduces profits and makes them less attractive to investors.
The plan could backfire by accelerating commercial adoption of AI a little bit. My guess is that this effect wouldn't be terribly large.
There is already such a library: litellm. Adoption seems a bit lower than you might expect: it has ~13K stars on GitHub, whereas Django (a venerable Python web framework that lets you abstract away your choice of database, among other things) has ~80K. So concrete actions might take the form of:
* Publicize litellm. Give talks about it, tweet about it, mention it on StackOverflow, etc. Since it uses the OpenAI format, existing OpenAI users should in theory be able to drop it in with minimal code changes (see the sketch after this list).
* Make improvements to litellm so it is even more agnostic to LLM-specific quirks (prompt formats, confabulation, etc.).
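To show how small that drop-in is, here is a minimal sketch using litellm's OpenAI-format `completion()` call. The model-name strings are illustrative (check litellm's docs for current names), and API keys are assumed to be set as environment variables:

```python
# pip install litellm
# Expects OPENAI_API_KEY / ANTHROPIC_API_KEY in the environment.
from litellm import completion

messages = [{"role": "user", "content": "Summarize the Django ORM in one sentence."}]

# Same request/response shape as OpenAI's chat completions API.
response = completion(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)

# Switching vendors is a change to one string.
response = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)
print(response.choices[0].message.content)
```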