
Check out the Into AI Safety podcast on Spotify, Apple Podcasts, Amazon Music, YouTube Podcasts, and many other podcast listening platforms!

As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT.

The sub-series runs approximately 3 hours and 40 minutes in total, during which Dr. Park and I discuss StakeOut.AI, a nonprofit he cofounded along with Harry Luk and one other cofounder, whose name has been removed due to requirements of her current position.

The nonprofit had a simple but important mission: make the adoption of AI technology go well for humanity. Unfortunately, StakeOut.AI had to dissolve in late February of 2024 because no grantmaker would fund it. Although it is certainly disappointing that the organization is no longer functioning, all three cofounders continue to contribute positively towards improving our world in their current roles.

If you would like to dig deeper into Dr. Park's work, check out his website and Google Scholar, or follow him on Twitter!


Since the interview is so long, I totally get wanting to jump right to the parts you are most interested in. To assist with this, I have included chapter timestamps in the show notes, which should allow you to quickly find the content you're looking for. In addition, I give a brief overview of each episode here. You can find even more sources on the Into AI Safety website.

Episode 1  |  StakeOut.AI Milestones

  • Milestones
    • StakeOut.AI's AI governance scorecard [1] (go to Pg. 3)
    • Hollywood informational webinar
    • Amplifying public voice through open letters [2] [3] and regulation suggestions [4] [5]
    • Assisting w/ passing the EU AI Act
    • Calling out E/ACC in the New York Times [6]
  • Last minute lobbying to water down the EU AI Act
  • Divide-and-Conquer Dynamics in AI-Driven Disempowerment [7]
  • AI "art"

Episode 2  |  The Next AI Battlegrounds

  • Battlegrounds
    • Copyright
    • Moral critique of AI collaborationists
    • Contract negotiations for AI ban clauses
    • Establishing new laws and policies
    • Whistleblower protections [8] [9]
  • OpenAI Drama
    • Zvi Mowshowitz's substack series [10] [11] [12] [13] [14]
      (if you're only gonna read one, read [13])
  • Corporate influence campaigns
    • Tesla/Autopilot/FSD [15] [16]
    • Recycling

Episode 3  |  Freeform

  • The power grab of rapid development
  • Provable safety
  • The great Open Source debate
    • NOTE: Stop calling Visible Weights models (or, as I prefer to call them, Run-a-Weights models) Open Source! [17] [18]
  • AIxBio and scientific rigor in AI
    • The post you're probably thinking about [19]
    • New framework for thinking about risk [20]
    • A key takeaway: blame academic publishing
  • Deception about AI deception [21]
  • "I'm sold, next steps?" -you

Acknowledgements

This work was made possible by AI Safety Camp.

Special thanks to the individuals who helped along the way:
Dr. Peter Park; Chase Precopia; Brian Penny; Leah Selman; Remmelt Ellen; Pete Wright
