Post drafted and edited by the author. Claude and Grammarly were used for a light copyedit.
As AI tools become increasingly useful for communicating research, opinions, or simply sharing ideas, it is becoming important to proactively disclose their use when we communicate with others. Transparency demands AI-use disclosure. Organizations that communicate externally should embed strong AI-disclosure norms, and so should the Forum. If you use AI tools for interpersonal communication, disclose that in conversation.
The ability of LLMs to draft, re-draft, code, and analyse has been wonderful to see. As a researcher at AIM (views my own), I am certainly using AI tools in my work and exploring the domains in which they may make me more (and less) effective.
I personally enjoy writing, so I doubt I will ever use LLMs for extensive drafting. However, producing "content" (read: anything from a tweet to a book) is becoming ever cheaper. Lower barriers are leading to a steady increase in production. AI use speeds up research. Alarmingly, it also makes it easy to produce research that looks legit but is, to all intents and purposes, slop.
Interpersonal communication can also become stilted, creating an underlying unease that your conversations with humans are being intermediated by LLMs. Someone with a very fun writing style now writes like a Roomba; the cold emails you receive are long and well-written but rife with inaccuracies.
I am not arguing for a Luddite retrenchment. AI tools are certainly helpful, and we should keep exploring how they can help us become better and more transparent communicators.
Making it a norm among transparent communicators to disclose the extent and nature of AI use is a matter of both principle and consequence. From a principled perspective, I think we owe it to colleagues and strangers alike to tell them when we are speaking with our own voice and when we are not.
Beyond this, LLMs constantly make mistakes and hallucinate. Further, as highly complex prediction machines, they are very good at making something look legit when it isn't.
Disclosing the extent of AI use in a research or communication output — upfront and prominently — and making it an expectation that others do the same can help readers calibrate their scepticism. In technical work, it would encourage careful reading of the details or support replication efforts.
Despite usually steering clear of the Forum, I chose to write this piece because I think some of the transparency practices EAs have are great models for my work and represent a commendable characteristic of the community. In the same way that it has become the norm to disclose the time spent on research and the depth of research or thinking behind a post, we should integrate AI disclosures into our communications.
Further reading and guidance:

I would love a discreet, mandatory way to disclose the level of AI use on the Forum. I'm not sure how it would look in practice, but I am in favour of normalizing AI use in writing while also being honest about how much AI went into the text.
I agree with that. It could even be a built-in checkbox when posting?