Hey everyone! My name is Jacob Haimes, and I host the Into AI Safety podcast. At this point I have released 15 episodes of varying lengths, and if you're interested, you can check out my initial motivation in a post I made to LessWrong.
The important part, though, is that this week's episode of the podcast is an interview with Dr. Peter Park. Along with Harry Luk and one other cofounder, he started StakeOut.AI, a non-profit with the goal of making AI go well, for humans.
Unfortunately, due to funding pressures, the organization recently had to dissolve, but its founders continue to contribute positively to society in their respective roles.
Nonetheless, the interview gives great coverage of some of the first struggles and accomplishments that have happened since "AI" hit the mainstream.
Note that the interview will be broken up into three episodes, and this one is only the first in the series. Additionally, this interview was made possible through the 2024 Winter AI Safety Camp.
As I have mentioned previously, any feedback, advice, comments, etc. are greatly appreciated.
Hi CAISID, thanks for letting me know that you think the podcast is interesting!
(For context for others: the original comment refers to an aside I give at 27:04 about "foundation models".)
I definitely did have a difficult time finding concrete answers regarding the training costs of current cutting-edge foundation models. The only primary-source numbers I could find from the big companies, i.e. figures from people within the company or from press releases, were very loose estimates that included salaries, which basically makes them meaningless (at least for the information I want out of them).
There has been some work done on estimating training costs (e.g., Epoch or the AI Index Report), but it seemed that I would need to spend a significant amount of time collecting data, and even doing some forecasting, to actually get approximations for the current state of the art.
Would love to hear your thoughts on this, either here or via whatever messaging format you prefer.