H

HenningB

-3 karmaJoined Aug 2021

Bio

MSc in Artificial Intelligence & Life Coach I'm working as a machine learning engineer in Amsterdam and started personalised coaching 6+ years ago. My personal and professional development benefited considerably from coaching (e.g., switching from electrical engineering to machine learning). Consequently, I'm a strong advocate for motivating people to experiment with coaching. I have typically time for 1-2 clients. Avid interest in AI Alignment, (practical) philosophy and meditation.

Feel free to reach out to discuss perspectives on AI, AI projects or coaching.

Comments
5

Thanks for the effort you have invested to research and write up the constructive feedback. How much time did you roughly spend on this? 

Thanks for sharing your perspective in this well-written form. I agree that naive trust and idolising people can be hurtful and dangerous. Since even the people we consider virtuous and admire the most are just humans and thus imperfect, it is important to keep that in mind. 
 

On the other hand, I believe that a nuanced view of possible idols or role models can be very useful for inspiration, guidance and growth. Despite having shortcomings, those we admire can provide a lot we that can learn from and that we might want to cultivate ourselves. 

As Seneca points out: “Without a ruler to do it against, you can’t make crooked straight.”
 I think he argues well for the importance of having (nuanced) reference points to compare the line/quality of our character with.


So rather than disposing of idols and role models altogether, I propose to be more nuanced and to pick and choose the admirable character traits and qualities that are helpful to you. 

Besides the typical (over-dramatised) killer robot scenarios, I would add the perspective of looking for infrastructure breakdowns or societal chaos. Imagine a book like Blackout with a disruptive, national blackout but that was caused by powerful intelligent (control) systems. 
Or a movie like Don't Look Up where AI is used to or actively spreads misinformation that severely impact public opinion and taking effective action. 
In movies and books it might commonly be portrayed (in part) as human failure; but it could have been the result of correlated automation failure or goal mis-specification or a power-seeking AI system. 

The consequences that individual humans and society at large suffer from critical infrastructure breaking down can be quite realistic and visceral. 

Thanks for sharing your pragmatic overview here! I like the idea a lot. 

Despite well-known shortcomings of narrowly optimising for metrics/benchmarks, I believe that curated benchmark datasets can be very helpful for progress on AI safety. To expand, following value propositions also seem promising: 

  • Get more specific: try to encapsulate certain qualities of AI systems that we care about in a benchmark making that quality more specific and tractable
  •  Make it more accessible: probably lower entry point to the field and can facilitate communication among the community