I'm currently doing a Ph.D. in ML at the International Max Planck Research School in Tübingen. My focus is on Bayesian ML, and I'm exploring its role in AI alignment, though I'm also looking into non-Bayesian approaches. I want to become an AI safety researcher/engineer. If you think I should work for you, please reach out.
For more see https://www.mariushobbhahn.com/aboutme/
I subscribe to Crocker's Rules.