mariushobbhahn

1387 · Joined Dec 2020

Bio

I'm currently doing a Ph.D. in ML at the International Max Planck Research School in Tübingen. My focus is on Bayesian ML, and I'm exploring its role in AI alignment, though I'm also exploring non-Bayesian approaches. I want to become an AI safety researcher/engineer. If you think I should work for you, please reach out.

For more see https://www.mariushobbhahn.com/aboutme/

Comments (60)

I want to be replaced

I think that while this is hard, the person I want to be would want to be replaced in both cases you describe.
a) Even if you stay single, you should want to be replaced because it would be better for all three people involved. Furthermore, you probably won't stay single forever and will likely find a new (potentially better-fitting) partner.
b) If you had very credible evidence that someone much better than you was not hired, you should want to be replaced, IMO. But it's implausible that you could make this judgment better than the university or employer, since you have far less information than they do. So this case is probably not very applicable in real life.

AI safety starter pack

Great. Thanks for sharing. I hope it increases accountability and motivation!

What success looks like

No, the order is random and does not imply a ranking.

What success looks like

I don't think "dealing with it when we get there" is a good approach to AI safety. I agree that bad outcomes could be averted in unstable futures, but I'd prefer to reduce the risk as much as possible nonetheless.

What success looks like

I'm not sure why this should be reassuring. It doesn't sound clearly good to me. In fact, it sounds pretty controversial. 

What success looks like

I think this is a very important question that should probably get its own post. 

I'm currently very uncertain about it, but I imagine the most realistic scenario is a mix of many different approaches that never feels fully stable. I guess it might be similar to nuclear weapons today but on steroids, i.e. different actors control the technology, there are some norms and rules that most actors abide by, there are some organizations that care about non-proliferation, etc. But overall, a small perturbation could still blow up the system.

A really stable scenario probably requires either some very tough governance, e.g. preventing all but one actor from getting to AGI, or high-trust cooperation between actors, e.g. by working on the same AGI jointly. 

Overall, I currently don't see a realistic scenario that feels more stable than nuclear weapons seem today, which is not very reassuring.

What success looks like

Yes, that is true. We decided not to address all possible problems with every approach because it would have made the post much longer. It's a fair point of criticism, though.

What success looks like

We thought about including such a scenario but decided against it. We think it might give the EA community a bad rep, even if some people have already publicly talked about it.

What is the right ratio between mentorship and direct work for senior EAs?

Agree. I guess most EA orgs have thought about this, some superficially and some extensively. If someone feels like they have a good grasp of these and other management/prioritization questions, writing a "Basic EA org handbook" could be pretty high-impact.

Something like "please don't repeat these rookie mistakes" would already save thousands of EA hours.
