Credo AI is an AI governance company focused on the assessment and governance of AI systems. In addition to our Governance Platform, we develop Lens, an open-source assessment framework. Find a longer post about the company here.

Roles

We are expanding our data science team and hiring applied AI practitioners! If you believe you have the skills and passion for contributing to the nascent world of AI governance, we want to hear from you!

To help you figure out if that’s you, I’ll describe some of the near-term challenges we are facing:

  • How can general principles of Responsible AI be operationalized?
  • How can we programmatically assess AI systems for principles like fairness, transparency, etc.? (A minimal sketch follows this list.)
  • How can we make those assessments understandable and actionable for a broad range of stakeholders?
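
To make the second question a bit more concrete, here is a minimal sketch of one such programmatic check: the demographic parity difference between groups. It uses plain NumPy and made-up data; it is not Lens code, and the function name and numbers are just for illustration.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive-prediction rates across groups."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return max(rates.values()) - min(rates.values())

# Made-up predictions and group labels, purely for illustration
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(f"Demographic parity gap: {demographic_parity_difference(preds, groups):.2f}")  # 0.50
```

Checks like this are only a starting point; the harder part is making the results understandable and actionable, which is the third question above.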

And some characteristics we look for:

  • Have an “owner” mindset. This word gets tossed around a lot, but at a startup our size it truly is a requirement. The ground is fertile, and we need people who have the vision and follow-through to develop wonderful things.
  • Have existing passion for and knowledge of this space. You don’t have to have previously worked in “AI safety” or “responsible AI”, but this post shouldn’t be the first time you are thinking about these issues!
  • Technical skills and experience in data science, AI development, or similar fields. This is a senior position, so some work experience is needed.

Hiring process and details

Our hiring process starts with you reaching out. We are looking for anyone who read the above section and thought “that’s me!” If that’s you, send me a message at ian@credo.ai. Please include “Effective Altruism Forum” in the subject line so I know where you heard about us.

Thanks for the perspective, Yonatan! I've rearranged the post to better conform to your suggestions.

Because we have multiple roles, here I am trying to balance brevity with answering the question "am I specifically a good fit for this?" My last post went into more detail about the company and one type of role in particular. I couldn't fit all the roles in the title, but I tried to make them as understandable as possible in a few words while linking to more information.

Can you expand? I wouldn't say we are looking for a particular technique. We are looking for people who believe they can support building a scalable Responsible AI product.

See above for a more general response about existential risk.

On a "concrete intervention": the current state of AI assessment is relatively poor. Many models are deployed with only the barest adequacy assessment. Building a comprehensive assessment suite and making it easy to deploy on every ML system moving into production is hugely important. Will it guard against issues related to existential risk? Honestly, I don't know. But if someone comes up with good assessments that can probe such an ambiguous risk, we will incorporate them into the product!
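
To sketch what "easy to deploy" could look like, here is a hypothetical pre-deployment gate that blocks a model whose measured fairness gap exceeds a chosen threshold. The threshold, names, and flow are assumptions for illustration, not Credo AI's actual product behavior.

```python
# Hypothetical pre-deployment gate (illustrative only; names and threshold are assumptions)
FAIRNESS_GAP_THRESHOLD = 0.10  # assumed organizational policy value

def passes_fairness_gate(gap: float, threshold: float = FAIRNESS_GAP_THRESHOLD) -> bool:
    """Return True if the measured fairness gap is within the allowed threshold."""
    return gap <= threshold

measured_gap = 0.50  # e.g., the output of an assessment like the one sketched earlier
if not passes_fairness_gate(measured_gap):
    raise SystemExit("Deployment blocked: fairness assessment failed.")
```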

Credo AI is not specifically targeted at reducing existential risk from AI. We are working with companies and policymakers who are converging on a set of responsible AI principles that need to be better thought out and implemented.

-

Speaking for myself now: I became interested in AI safety and governance because of the existential risk angle. As we have talked to companies and policymakers, it has become clear that most groups do not think about AI safety in that way. They are concerned with ethical issues like fairness, either for moral reasons or, more likely, financial ones (no one wants an article written about their unfair AI system!).

So what to do? I believe supporting companies in incorporating "ethical" principles like fairness into their development process is a first step toward incorporating other, more ambiguous values into their AI systems. In essence, fairness is the first non-performance ethical value most governments and companies are realizing they want their AI systems to adhere to. It isn't generic "value alignment", but it is a big step beyond just minimizing a traditional loss function.

Moving beyond fairness, there are many components of the AI development process, infrastructure, and government understanding that need to change. Building a tool that can be incorporated into the heart of the development process gives us an avenue to support companies along a host of responsible dimensions: some our customers will ask for (supporting fair AI systems), and some they won't (reducing the existential risk of their systems). All of this will be important for existential risk, particularly in a slow-takeoff scenario.

All that said, if the existential risk of AI systems is your specific focus (and you don't believe in a slow-takeoff scenario where the interventions Credo AI will support could be helpful), then Credo AI may not be the right place for you.