Aligned AI is an Oxford-based startup focused on applied alignment research. Our goal is to implement scalable solutions to the alignment problem, and to distribute these solutions to actors developing powerful, transformative artificial intelligence (related Alignment Forum post here).
In the tradition of AI safety startups, Aligned AI will be doing an AMA this week, from today, Tuesday the 1st of March, till Friday the 4th, inclusive. It will be mainly me, Stuart Armstrong, answering these questions, though Rebecca Gorman and Oliver Daniels-Koch may also answer some of them. GPT-3 will not be invited.
From our post introducing Aligned AI:
We think AI poses an existential risk to humanity, and that reducing the chance of this risk is one of the most impactful things we can do with our lives. Here we focus not on the premises behind that claim, but rather on why we're particularly excited about Aligned AI's approach to reducing AI existential risk.
- We believe AI Safety research is bottlenecked by a core problem: how to extrapolate values from one context to another.
- We believe solving value extrapolation is necessary and almost sufficient for alignment.
- Value extrapolation research is neglected, both in the mainstream AI community and the AI safety community. Note that there is a lot of overlap between value extrapolation and many fields of research (e.g. out-of-distribution detection, robustness, transfer learning, multi-objective reinforcement learning, active reward learning, reward modelling...) which provide useful research resources. However, we've found that we've had to generate most of the key concepts ourselves.
- We believe value extrapolation research is tractable (and we've had success generating the key concepts).
- We believe distributing (not just creating) alignment solutions is critical for aligning powerful AIs.