Keira Wiechecki

My proposal is to build a minimal language model that can be used for inferring symbol manipulation algorithms. Though minimal models exist for NLP and modular arithmetic, I was surprised not to find one for typed lambda calculus. I believe such a model would be particularly useful because it could serve as a universal grammar for a language acquisition device that inductively builds interpretable models.
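
To make the target concrete, here is a minimal Curry–Howard illustration in Lean (my own sketch, not part of any existing model): a proposition is a type, and a valid proof is a lambda term inhabiting that type. This is the kind of object the model would learn to produce.

```lean
-- Under Curry–Howard, the proposition A → B → A is a type, and the
-- lambda term `fun a _ => a` is a proof term inhabiting it.
example (A B : Prop) : A → B → A :=
  fun a _ => a
```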

My idea is to use a GAN to train a model that outputs valid proof terms for propositions in simply typed lambda calculus. The generator would attempt to find proof terms that type check when fed to a proof assistant. The discriminator would attempt to produce propositions that the generator fails to prove.
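
Roughly, the training loop would look something like the sketch below. Everything here is a placeholder: `Generator` and `Discriminator` stand in for neural networks, and `type_checks` stands in for a call to a proof assistant (e.g. over its command-line interface).

```python
# Structural sketch only: stubs stand in for the real networks and the
# proof-assistant oracle.
import random


def type_checks(proposition: str, proof_term: str) -> bool:
    """Would ask a proof assistant whether `proof_term` inhabits the type
    `proposition`. Stubbed out with a coin flip here."""
    return random.random() < 0.5


class Generator:
    def propose_proof(self, proposition: str) -> str:
        """Would sample a candidate proof term for the proposition."""
        return "fun a => a"  # stub

    def update(self, proposition: str, proof_term: str, succeeded: bool) -> None:
        """Would take a gradient step rewarding proofs that type check."""
        pass


class Discriminator:
    def propose_proposition(self) -> str:
        """Would sample a proposition intended to stump the generator."""
        return "A -> B -> A"  # stub

    def update(self, proposition: str, generator_succeeded: bool) -> None:
        """Would take a gradient step rewarding propositions the generator
        failed to prove."""
        pass


def train(generator: Generator, discriminator: Discriminator, steps: int) -> None:
    for _ in range(steps):
        prop = discriminator.propose_proposition()
        term = generator.propose_proof(prop)
        ok = type_checks(prop, term)
        generator.update(prop, term, succeeded=ok)          # generator wants ok == True
        discriminator.update(prop, generator_succeeded=ok)  # discriminator wants ok == False


if __name__ == "__main__":
    train(Generator(), Discriminator(), steps=100)
```

The proof assistant acts as a ground-truth referee, so neither network has to learn what "valid" means; they only compete over which propositions get proved.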

The architecture of the discriminator shouldn't really matter. The generator would ideally be an attention-only model, with whatever number of heads and layers is most informative for interpreting the learned algorithm.
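
As a rough sketch of what "attention-only" means here (my illustration, assuming the toy-transformer style used in interpretability work): stacked self-attention layers writing into a residual stream, with no MLP blocks. The hyperparameters below are arbitrary placeholders to be tuned.

```python
import torch
import torch.nn as nn


class AttentionOnlyLM(nn.Module):
    """Attention-only autoregressive model over proof-term symbols."""

    def __init__(self, vocab_size=128, d_model=64, n_heads=4, n_layers=2, max_len=256):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        self.layers = nn.ModuleList(
            [nn.MultiheadAttention(d_model, n_heads, batch_first=True)
             for _ in range(n_layers)]
        )
        self.unembed = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):  # tokens: (batch, seq)
        seq_len = tokens.size(1)
        x = self.tok(tokens) + self.pos(torch.arange(seq_len, device=tokens.device))
        # Causal mask: position i may only attend to positions <= i.
        mask = torch.triu(
            torch.ones(seq_len, seq_len, dtype=torch.bool, device=tokens.device),
            diagonal=1,
        )
        for attn in self.layers:
            out, _ = attn(x, x, x, attn_mask=mask)
            x = x + out  # residual stream; no MLP block
        return self.unembed(x)  # next-symbol logits


# Example: logits for a batch of one 10-token sequence.
model = AttentionOnlyLM()
logits = model(torch.randint(0, 128, (1, 10)))  # shape (1, 10, 128)
```

Keeping the generator this small is what would make it feasible to read the learned symbol-manipulation algorithm directly out of the attention heads.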