shauryachandravanshi

It may seem non-obvious at first which of these two structures is better, but that is because we are treating AI character as a set of very broad recommendations about very broad situations. Given substantially more information about these scenarios, a non-trivial number of them would likely have clearer resolutions.

I would guess that AI constitutions, much like actual national constitutions, would be more fine-grained and specific in their moral directives for AIs.

Naturally, past a certain level of specificity, the cases resolved through added clarity would be outweighed by the harm of imposing rigid, inapplicable standards. But this sounds analogous to actual legal systems, which strive to resolve this exact tension iteratively. Legal tests, fictions, and precedent might find their analogues in the ethical “training” of future AIs.

Still, whether trained, prescribed/directed, or highly fine-grained, AI character would have a non-trivial effect on the decisions made in these cases.

Edit: If I understood her correctly, Amanda Askell says this is quite similar to how Claude was actually trained to follow its constitution.