
Credo AI is hiring technical folks versed in responsible AI; if you are interested, please apply! If you aren’t a data scientist or other technical professional, but are inspired by our mission, please reach out. We are always looking for talented, passionate folks.

What is Credo AI?

Credo AI is a venture-backed Responsible AI (RAI) company focused on the assessment and governance of AI systems. Our goal is to move RAI development from an “ethical” choice to an obvious one. We aim to do this both by making it easier for organizations to integrate RAI practices into their AI development and by collaborating with policy makers to set up appropriate ecosystem incentives. The ultimate goal is to reduce the risk of deploying AI systems, allowing us to capture AI’s benefits while mitigating its costs.

We make RAI easier with our governance, risk & compliance (GRC) product and an open-source AI assessment framework called Lens. Our data science team focuses on the latter, with the goal of creating the most approachable tool for comprehensive RAI assessment of any AI system. We take a “what you can’t observe, you can’t control” approach to this space, and believe that assessment lays the foundation for all other aspects of a RAI ecosystem (e.g., auditing, mitigation, regulation). Here's a notebook showing some of Lens's capabilities in code.
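To give a flavor of what programmatic RAI assessment involves, here is a minimal sketch of the kind of fairness check a framework like Lens automates. It uses only NumPy and scikit-learn (not Lens's actual API; the linked notebook shows real usage), and the toy data and model are purely illustrative.

```python
# Illustrative sketch only: a basic fairness check of the kind an
# assessment framework automates, written with plain scikit-learn/NumPy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two features plus a binary group attribute (e.g., a protected class).
n = 2000
X = rng.normal(size=(n, 2))
group = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([X, group]), y)
pred = model.predict(np.column_stack([X, group]))

# Demographic parity difference: gap in positive-prediction rates between groups.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"Selection rate (group 0): {rate_0:.2f}")
print(f"Selection rate (group 1): {rate_1:.2f}")
print(f"Demographic parity difference: {abs(rate_0 - rate_1):.2f}")
```

An assessment framework wraps many checks like this one behind a consistent interface and reporting layer, so they can be run routinely rather than ad hoc.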

A particular focus of our governance product is involving diverse stakeholders in the governance of AI systems. Technical teams obviously have important perspectives, but so do compliance, governance, and product teams, as well as social scientists and others. We aim to provide a forum for their effective collaboration in our GRC software, and to provide technical outputs via Lens that are useful to everyone.

Our collaboration with policy organizations is just beginning, but we are already contributing our perspective to the broader policy conversation. For instance, see our comments to NIST on Artificial Intelligence Risks. Our CEO and technical policy advisors have been part of the World Economic Forum, The Center for AI & Digital Policy, the Mozilla Foundation and the Biden Administration. 

 

Who is Credo AI?

We are a small, ambitious team committed to RAI. We are a global, remote company with expertise in building amazing products, technical policy, social science, and, of course, AI. We are a humble group, and are focused on learning from the policy community, academia, and, most critically, our customers. Find a bit more about us and our founder here.

The data science team is currently 2 people (Ian Eisenberg and Amin Rasekh). The needs in this space are immense, so early hires will have the opportunity (and indeed the responsibility!) to own significant components of our assessment framework.

 

Relationship to Effective Altruism

The EA community has argued for a while that AI governance is an important cause area. A great starting point can be found here, and many other posts are found here. The majority of this work is being pursued by particular governments, academia, or a few non-profits.

However, making the principles of AI governance a reality requires a broader ecosystem approach, consisting of governments enacting regulations, customers and businesses demanding AI accountability from AI service providers, academic institutions exploring evidence-backed governance approaches, independent auditors focused on evaluating AI systems, and more. There are many interacting parts that must come together to change the development of the AI systems affecting our lives - most of which are developed in the corporate sector.

Credo AI specifically engages with the corporate sector, playing a role that is sometimes described as Model Ops. We are the bridge between theory, policy, and implementation that can connect with corporate decision making. We think of ourselves as creating a “choice architecture” that promotes responsible practices. For better or worse, the bar for RAI development is very low right now, which means there is a lot we can do to improve the status quo, whether that’s by making well-researched approaches to “fair AI” easy to incorporate into model development, making existing regulations more understandable, or being the first to practically operationalize bleeding-edge RAI approaches.

There is plenty of low-hanging fruit for us at these early stages, but our ambitions are great. In the medium term, we would like to build the most comprehensive assessment framework for AI systems and help all AI-focused companies improve their RAI processes. On a longer timescale, we would love to inform an empirical theory of AI policy. Others have pointed out how difficult it will be for AI policy to keep up with the speed of technical innovation. Building a better science of effective AI governance requires knowing which policies corporations are employing and how effective they are. We are far (far!) away from having this kind of detail, but it’s the kind of long-term ambition we have.

 

Who should apply?

If you believe you have the skills and passion for contributing to the nascent world of AI governance, we want to hear from you!

To help you figure out if that’s you, I’ll describe some of the near-term challenges we are facing:

  • How can general principles of Responsible AI be operationalized?
  • How can we programmatically assess AI systems for principles like fairness, transparency, etc.?
  • How can we make those assessments understandable and actionable for a broad range of stakeholders? (See the sketch after this list.)
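
On the last question, here is a hypothetical sketch of turning raw assessment numbers into a short summary that non-technical stakeholders can act on. The metric names, thresholds, and report format are my own illustration, not Credo AI’s actual output.

```python
# Hypothetical sketch: translating raw assessment metrics into a short,
# plain-language report that non-technical stakeholders can act on.
# The metric names and thresholds are illustrative, not a Credo AI spec.
import json

results = {
    "demographic_parity_difference": 0.18,
    "accuracy": 0.91,
}
thresholds = {
    "demographic_parity_difference": 0.10,  # flag if the gap exceeds 10 points
    "accuracy": 0.85,                       # flag if accuracy falls below this
}

report = []
for metric, value in results.items():
    limit = thresholds[metric]
    # "Difference" metrics should stay below their threshold; others should stay above.
    ok = value <= limit if "difference" in metric else value >= limit
    report.append({
        "metric": metric,
        "value": value,
        "threshold": limit,
        "status": "pass" if ok else "needs review",
    })

print(json.dumps(report, indent=2))
```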
     

The data science team’s broader goal is to build an assessment framework that connects AI teams with RAI tools developed in academia, the open-source world, and at Credo AI. We want this framework to make employing best practices easy, so that “responsible AI development” becomes an obvious choice for any developer. Creating this assessment framework lays the groundwork for Credo AI’s broader mission: ensuring that AI is developed responsibly.
 

To be a bit more concrete, we are looking for people who:

  • Have an existing passion for and knowledge of this space. You don’t have to have previously worked in “AI safety” or “responsible AI”, but this post shouldn’t be the first time you are thinking about these issues!
  • Know how to program in Python (for the data science team); familiarity with the process of AI development is a definite plus.
  • If you aren’t interested in the data science team, but believe you can contribute, please reach out anyway!
  • Have an “owner” mindset. This term gets tossed around a lot, but at a startup our size it truly is a requirement. The ground is fertile, and we need people with the vision and follow-through to develop wonderful things.

Hiring process and details

Our hiring process starts with you reaching out. We are looking for anyone who read the above section and thought “that’s me!” If that’s you, send a message to me at ian@credo.ai. Please include “Effective Altruism Forum” in the subject line so I know where you heard about us.

Specific jobs and requirements are posted here.

Q&A

We welcome any questions about what working at Credo AI is like, the details of our product, the hiring process, what we're looking for, or whether you should apply. You can reach out to jobs@credo.ai, or directly to me at ian@credo.ai.

Who am I?

My name is Ian Eisenberg. I’m a cognitive neuroscientist who moved into machine learning after finishing my PhD. While working in ML, I quickly realized that I was more interested in the socio-technical challenges of responsible AI development than in AI capabilities; I was first inspired by the challenges of building aligned AI systems. I am an organizer of Effective Altruism San Francisco, and I spend some of my volunteer time with the pro-bono data science organization DataKind.


 


Comments (6)

mic:

Is work at Credo AI targeted at trying to reduce existential risk from advanced AI (whether from misalignment, accident, misuse, or structural risks)?

Credo AI is not specifically targeted at reducing existential risk from AI. We are working with companies and policy makers who are converging on a set of responsible AI principles that need to be better thought out and implemented.

-

Speaking for myself now - I became interested in AI safety and governance because of the existential risk angle. As we have talked to companies and policy makers, it has become clear that most groups do not think about AI safety in that way. They are concerned with ethical issues like fairness - either for moral reasons or, more likely, financial ones (no one wants an article written about their unfair AI system!).

So what to do? I believe supporting companies in incorporating “ethical” principles like fairness into their development process is a first step toward incorporating other, more ambiguous values into their AI systems. In essence, fairness is the first non-performance ethical value most governments and companies are realizing they want their AI systems to adhere to. It isn't generic “value alignment”, but it is a big step from just minimizing a traditional loss function.
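
For concreteness, here is a minimal sketch (my own illustration, not Credo AI tooling) of what going beyond a traditional loss function can look like: an objective that trades off predictive loss against a group-disparity penalty.

```python
# Minimal sketch (illustration only): an objective that adds a fairness
# penalty to an ordinary loss, making "more than a traditional loss
# function" concrete.
import numpy as np

def objective(y_true, y_prob, group, lam=1.0):
    eps = 1e-9
    # Traditional term: binary cross-entropy.
    bce = -np.mean(y_true * np.log(y_prob + eps)
                   + (1 - y_true) * np.log(1 - y_prob + eps))
    # Fairness term: penalize the gap in average predicted score between groups.
    gap = abs(y_prob[group == 0].mean() - y_prob[group == 1].mean())
    return bce + lam * gap

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)
y_prob = rng.uniform(0.01, 0.99, size=100)
group = rng.integers(0, 2, size=100)
print(f"Loss with fairness penalty: {objective(y_true, y_prob, group):.3f}")
```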

Moving beyond fairness, there are many components of the AI development process, infrastructure, and government understanding that need to change. Building a tool that can be incorporated into the heart of the development process provides an avenue to support companies along a host of responsible dimensions - some of which our customers will ask for (supporting fair AI systems), and some they won't (reducing the existential risk of their systems). All of this will be important for existential risk, particularly in a slow-takeoff scenario.

All that said, if the existential risk of AI systems is your specific focus (and you don't believe in a slow-takeoff scenario where the interventions Credo AI will support could be helpful), then Credo AI may not be the right place for you.

Is there a concrete intervention that Credo AI might deploy in order to prevent existential risk from misaligned AI? If not, in which ways could RAI get us broadly closer to this goal?

See above for a more general response about existential risk.

To a "concrete intervention" - the current state of AI assessment is relatively poor. Many many models are deployed with the barest of adequacy assessment. Building a comprehensive assessment suite and making it easy to deploy on all productionizing ML systems is hugely important. Will it guard against issues related to existential risk? I don't know honestly. But if someone comes up with good assessments that will probe such an ambiguous risk, we will incorporate it into the product!

So you're looking more for ML data scientists, and not causal inference?

Can you expand? I wouldn't say we are looking for a particular technique. We are looking for people who believe they can support building a scalable Responsible AI product.
