All of IanEisenberg's Comments + Replies

Credo AI is an AI governance company focused on the assessment and governance of AI systems. In addition to our Governance Platform, we develop Lens, an open-source assessment framework. You can find a longer post about the company here.

Roles

We are expanding our data science team and hiring applied AI practitioners! If you believe you have the skills and passion for contributing to the nascent world of AI governance, we want to hear from you!

To help you figure out if that’s you, I’ll describe some of the near-term challenges we are facing:

  • How can general p…

Thanks for the perspective, Yonatan! I've rearranged the post to better conform to your suggestions.

Because we have multiple roles, I am trying here to balance brevity with answering the question "am I specifically a good fit for this?". My last post went into more detail about the company and one type of role in particular. I couldn't fit all the roles in the title, but I tried to make them as understandable as possible in a few words while linking to more information.

Can you expand? I wouldn't say we are looking for a particular technique. We are looking for people who believe they can support building a scalable Responsible AI product.

See above for a more general response about existential risk.

To a "concrete intervention" - the current state of AI assessment is relatively poor. Many many models are deployed with the barest of adequacy assessment. Building a comprehensive assessment suite and making it easy to deploy on all productionizing ML systems is hugely important. Will it guard against issues related to existential risk? I don't know honestly. But if someone comes up with good assessments that will probe such an ambiguous risk, we will incorporate it into the product!

Credo AI is not specifically targeted at reducing existential risk from AI. We are working with companies and policymakers who are converging on a set of responsible AI principles that need to be better thought out and implemented.

-

Speaking for myself now - I became interested in AI safety and governance because of the existential risk angle. As we have talked to companies and policymakers, it has become clear that most groups do not think about AI safety in that way. They are concerned with ethical issues like fairness - either for moral reasons, or, more…