Credo AI is an AI governance company focused on operationalizing Responsible AI (RAI). We are continuing to grow and are hiring for several roles. In a previous post seeking technical folks, I talked about the company in some detail - check it out for more information.
- Technology Policy Manager - this role combines policy expertise and product management acumen. You'd be responsible for understanding the important policy changes happening in the RAI ecosystem and translating that understanding into a form actionable by companies.
- Principal User Researcher - this role is about improving our learning speed through user research. We firmly believe that growing this capability will be critical to both our mission and our company's success.
- Sr. Data Scientist - I've written about this role before on the forum. If you are passionate about RAI and/or building tooling for ML practitioners, please contact me!
Hiring process and details
Our hiring process starts with you reaching out. If you have questions about whether you are a good fit for a specific role, or think you are a fit for a role that we haven't defined yet, please let us know!
Reach out to firstname.lastname@example.org. Please include “Effective Altruism Forum” in the subject line so I know how you heard about us. You can also submit an application through the job portal linked for each role. All jobs and requirements are posted here.
More about Credo AI
We help our customers operationalize RAI development with our governance, risk & compliance (GRC) product and an open-source AI assessment framework called Lens. NIST's initial draft of its AI Risk Management Framework is highly aligned with our product and vision, and can give you more context on the regulatory environment we operate in.