All info and application here: https://existence.org/jobs/ml-engineer-seldonian

BERI is seeking a full-time machine learning software engineer as part of our collaboration with the Autonomous Learning Laboratory at UMass Amherst.

The engineer will create a software library that makes it easier for researchers and practitioners to apply and create Seldonian algorithms. The library will facilitate and advance academic research on safe machine learning, and will provide a practical tool for corporations to responsibly apply machine learning to high-risk, high-reward applications. The end goal is to get more safety constraints built into deployed ML systems.
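For context on what "Seldonian" means here: in the Seldonian framework (Thomas et al., *Science* 2019), an algorithm trains a candidate solution on one data split and then runs a high-confidence safety test on a held-out split, returning "no solution found" unless the test certifies the behavioral constraint. The sketch below illustrates only the safety-test step; it is not the planned library's API, and the function names and the choice of a Hoeffding bound are illustrative assumptions.

```python
import math

def hoeffding_upper_bound(samples, delta=0.05, value_range=2.0):
    """(1 - delta)-confidence upper bound on the mean of i.i.d. samples
    bounded within an interval of width `value_range` (Hoeffding's inequality)."""
    n = len(samples)
    mean = sum(samples) / n
    return mean + value_range * math.sqrt(math.log(1 / delta) / (2 * n))

def seldonian_safety_test(g_samples, delta=0.05, value_range=2.0):
    """Pass (return True) only if we are (1 - delta)-confident that the
    expected constraint value E[g(theta)] <= 0 for the candidate solution;
    otherwise the algorithm would report 'no solution found'."""
    return hoeffding_upper_bound(g_samples, delta, value_range) <= 0
```

On held-out constraint samples that are clearly negative (constraint satisfied with margin), the test passes; on samples with positive mean it refuses, which is the behavior that distinguishes Seldonian algorithms from ordinary constrained optimization.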

We’ve secured funding for this position for at least one year, which we believe will be enough time to implement the core functionality. Extensions will depend on our ability to secure more funding during the first year of work.

2 comments

Hey, this sounds like something that I (or someone I might know) could do.

But I'm unable to evaluate how useful this would be for AI Safety. I can make an uninformed guess at best.

Is there some open conversation about this somewhere, with other AI safety people, that I could look at and maybe even ask questions?

Great question, Yonatan! Unfortunately I'm not aware of such a discussion. The job posting links to the project website and the academic paper, if reading those would help you. Also, this work is funded by a project-specific grant from the Long-Term Future Fund, so to the extent that you trust LTFF's judgment on these sorts of things, that might be evidence for it being good for AI Safety.