I'm delighted to welcome Assoc. Prof. Roman Yampolskiy to Episode 2 of Cozy Catchups!
In this episode, we'll delve into his most provocative and insightful research. He's best known for his claim that AI alignment is IMPOSSIBLE.
After I've asked my questions, it'll be your turn to ask Roman anything that's been on your mind.
If you're keen on doing a bit of homework before the episode, I recommend checking out Roman's Time article (https://time.com/6258483/uncontrollable-ai-agi-risks/), his explanation of his work on IAI News (https://iai.tv/.../the-hard-problem-of-ai-safety-auid-1773), or his survey paper on impossibility results (https://dl.acm.org/doi/10.1145/3603371).
Event Access:
No registration required! Simply click the link provided on the day of the event and you're in.
Mark your calendars so that you don't miss out!
