Aligned AI is an Oxford-based startup focused on applied alignment research. Our goal is to implement scalable solutions to the alignment problem and distribute these solutions to actors developing powerful, transformative artificial intelligence (related Alignment Forum post here).
In the tradition of AI safety startups, Aligned AI will be doing an AMA this week, from today, Tuesday the 1st of March, until Friday the 4th, inclusive. It will be mainly me, Stuart Armstrong, answering these questions, though Rebecca Gorman and Oliver Daniels-Koch may also answer some of them. GPT-3 will not be invited.
From our post introducing Aligned AI:
We think AI poses an existential risk to humanity, and that reducing the chance of this risk is one of the most impactful things we can do with our lives. Here we focus not on the premises behind that claim, but rather on why we're particularly excited about Aligned AI's approach to reducing AI existential risk.
- We believe AI Safety research is bottlenecked by a core problem: how to extrapolate values from one context to another.
- We believe solving value extrapolation is necessary and almost sufficient for alignment.
- Value extrapolation research is neglected, both in the mainstream AI community and in the AI safety community. Note that there is a lot of overlap between value extrapolation and many fields of research (e.g. out-of-distribution detection, robustness, transfer learning, multi-objective reinforcement learning, active reward learning, reward modelling...) which provide useful research resources. However, we've found that we've had to generate most of the key concepts ourselves.
- We believe value extrapolation research is tractable (and we've had success generating the key concepts).
- We believe distributing (not just creating) alignment solutions is critical for aligning powerful AIs.
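To make the core problem above concrete, here is a toy sketch (our own simplified framing for illustration, not Aligned AI's actual method): two candidate reward functions can agree on every training context yet diverge on a novel one, so the training data alone does not determine how values should extrapolate.

```python
# Toy illustration of the value-extrapolation problem.
# Contexts are (is_red, is_apple) feature pairs; in training, the two
# features always co-occur, so two rival reward hypotheses are
# indistinguishable on the data the agent has seen.

train_contexts = [(1, 1), (1, 1), (0, 0)]

def reward_color(ctx):
    # Hypothesis A: what humans value is redness.
    return ctx[0]

def reward_apple(ctx):
    # Hypothesis B: what humans value is apples.
    return ctx[1]

# Both hypotheses fit every training context perfectly...
assert all(reward_color(c) == reward_apple(c) for c in train_contexts)

# ...but a novel context (a red non-apple) splits them apart:
novel = (1, 0)
print(reward_color(novel), reward_apple(novel))  # prints: 1 0
```

An agent that picks the wrong hypothesis behaves perfectly in training and badly out of distribution; deciding between such hypotheses (or acting conservatively given both) is the extrapolation problem.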
Most of the alignment research pursued by other EA groups (e.g. Anthropic, Redwood, ARC, MIRI, the FHI, ...) would be useful to us if successful (and vice versa: our research would be useful for them). Progress in inner alignment, logical uncertainty, and interpretability is always good.
A fast increase in AI capabilities might result in a superintelligence before our work is ready. If the top algorithms become less interpretable than they are today, this might make our work harder.
Whole brain emulations would change things in ways that are hard to predict, and could make our approach either less or more successful.