
Interpretability is the degree to which the decision processes and inner workings of AI and machine learning systems can be understood by humans or other outside observers.[1]

Present-day machine learning systems are typically not very transparent or interpretable: you can use a model's output, but the model cannot tell you why it produced that output. This opacity makes it hard to determine the cause of biases in ML models.[1]
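To make the opacity concrete, here is a minimal sketch, assuming PyTorch; the tiny untrained network is a hypothetical stand-in for a real trained model. The output is directly usable, but the raw weights offer no explanation, and a simple gradient-saliency probe is one basic interpretability technique for asking which input features influenced the output.

```python
# Minimal sketch, assuming PyTorch; the tiny untrained network below is a
# hypothetical stand-in for a real trained model.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))

x = torch.randn(1, 4, requires_grad=True)  # one input with 4 features
score = model(x).sum()                     # scalar output for this input

print(score.item())      # the output is usable as-is...
print(model[0].weight)   # ...but the raw weights don't explain it

# Gradient saliency, one simple interpretability probe: how strongly
# does the output respond to each input feature?
score.backward()
print(x.grad.abs())      # larger magnitude = more influential feature
```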

Interpretability is a focus of Chris Olah's and Anthropic's work, though most AI alignment organisations, such as Redwood Research[2], work on interpretability to some extent.

