mariushobbhahn

I'm currently doing a Ph.D. in ML at the International Max Planck Research School in Tübingen. My focus is on Bayesian ML, and I'm exploring its role in AI alignment, though I'm also looking into non-Bayesian approaches. I want to become an AI safety researcher/engineer. If you think I should work for you, please reach out.

For more see https://www.mariushobbhahn.com/aboutme/

Comments

EA needs to understand its “failures” better

Thanks for the pointer. I hadn't seen it at the time. Will link to it in the post.

The biggest risk of free-spending EA is not optics or motivated cognition, but grift

I think I'm sympathetic to the criticism, but I still feel like EA has sufficiently high hurdles to stop grifters.
a) It's not like you get a lot of money just by saying the right words. You might be able to secure early funds or funds for a local group, but at some point you will have to show results to get more money.
b) EA funding mechanisms are fast but not loose. I think the meme that you can get money for anything now is massively overblown. A lot of people who are EA-aligned didn't get funding from the FTX Foundation, OpenPhil, or the LTFF. The funders' internal bars still seem hard to clear, and I expect this to hold for a while. 
c) I'm not sure how grifters would accumulate power and steer the movement off the rails. Either they start as grifters but actually get good results and then rise to power (at which point they might not be grifters anymore), or they don't get any results and don't rise to power. Overall, I don't see a strong mechanism by which grifters rise to power without either ceasing to be grifters or blowing their cover. Maybe you could expand on that. I think the company analogy you are making is less plausible in an EA context because (I believe) people update more strongly on negative evidence. It's not just some random manager position that you're putting at risk; there are lives at stake. But maybe I'm too naive here. 

How many EAs failed in high risk, high reward projects?

Thanks for sharing. 
I think writing up some of these experiences might be really valuable, both for your own closure and for others to learn from. I can understand, though, that this is a very tough ask in your current position. 

Calling for Student Submissions: AI Safety Distillation Contest

That sounds very reasonable. Thanks for the swift reply.

Calling for Student Submissions: AI Safety Distillation Contest

Hi, are PhD students also allowed to submit? I would like to submit a distillation and would be fine with not receiving any money if I win a prize. If this complicates things too much, I'd understand if you don't want that. 

EA Forum's interest in cause-areas over time and other statistics

Thanks for the write-up. If you still have the time, could you increase the font sizes of the labels and replace the figures? If not, don't worry, but they're a bit hard to read. It should only take five minutes or so. 

AI safety starter pack

There is no official place yet. Some people might be working on a project board. See comments in my other post: https://forum.effectivealtruism.org/posts/srzs5smvt5FvhfFS5/there-should-be-an-ai-safety-project-board

Until then, I suggest you join the Slack I linked in the post and ask whether anyone is currently looking for collaborators. Additionally, if you are at any of the EAGs or other conferences, I recommend asking around. 

Until we have something more official, projects will likely only be accessible through these informal channels. 

Where would we set up the next EA hubs?

I think this is true for EA orgs but 
a) Some people want to contribute within the academic system
b) Even EA orgs can run into weird academic legal constraints. I think FHI is currently facing some problems along these lines (low confidence; better to ask them). 

EA should learn from the Neoliberal movement

Fair. I'll just remove the first sentence; it's too confusing. 
