This was first published on my Substack. No changes were made; it's already good as is.
If you thought the figure in the image above was an AI rendition of yours truly, you’d be absolutely correct. That’s exactly where I am right now - in one of Dante’s circles, the one called “Publishing my Research”.
I thought that since the field of AI governance is relatively new, meaningful contributions would find a place where they’d be accessible, maybe downloadable, and definitely free. That is the point, right?
Wrong. Let me tell you why.
Catch-22
Take arXiv, for example, a treasure trove for anyone who wants open-access, downloadable papers. I use it all the time in my own research, and many governance programs use it as a source for their reading materials. To publish there, you have to follow the rules set out in their guidelines. I'm good with rules, love 'em.
The easiest method is to mention your affiliation with a university, for example, and that's your golden ticket. Maybe your thesis supervisor would help you (glitter on the ticket). I'm not affiliated with an institution (yet), so the next best thing is a common practice found on their website: asking an author who has already published on arXiv to endorse you.
Guess how many authors I contacted. Go on, take a guess. Not one, not two, but three. I used the request email, introduced myself, and politely requested endorsement. The response? Silence. Complete radio silence from all three. I understand. No hard feelings (OK, maybe a little).
The rule of thumb is: you need to publish to build credibility and make a name for yourself. But to publish, you need to already have credibility and a name. If that is not the definition of a Catch-22, I'll eat the keyboard I'm typing this on.
The Case of the Invisible Research
This summer, I participated in two AI governance programs. I produced a research paper and an advocacy piece. And yet, you can't read any of it.
The first was ENAIS, which requires participants to produce a project applying what they've learned. It's actually the condition for earning the certificate. I enjoyed every second of those weeks - the readings and the cohort discussions with our facilitator (shout out to Kyle Gracey). The joy of writing research again, after a hiatus, was like manna. But they don't publish these projects on their website. So where would people read mine? How would anyone hire me if they can't see this body of work that demonstrates my talent? (Self-promotion never hurt anyone).
The second was AIS Collab (check out their website for the next iteration; you won’t regret it), where I won the Excellence award for my advocacy piece. Again, good times with an amazing cohort and facilitator (shout out to Valerie Bollen). Once again, though, it's not published on their website. I emailed them to suggest that, to make projects more visible, they should be published. I'm happy to report they found the idea promising and said they'd reach out about publishing my winning piece once they launch that feature.
But I don't want just winners published. I want everyone's work published. That's how it works: research, publish, repeat. If you baked a cake and it turned out a masterpiece (yes, you read that correctly), wouldn't you want people to eat it and enjoy it and shower you with praise? (C'mon, we all want that, can't be just me.)
Lost in Translation
“Then what did you do, Kariema?” you might ask. I adapted by creating a shorter, funnier version of the 24-page research paper so I could publish it here on Substack and on the EA Forum. Granted, it shows how diverse my writing style is and how I can cater to varied audiences. But in reformatting the original, a lot of substance was lost. A section tracing censorship through time, one I'm really proud of, was condensed into a single paragraph.
I also posted the full piece on LinkedIn. But how long did that last in people's feeds? Not long, I assure you. Gone from feeds and minds by the time you can say "↑ New posts". My consolation is that I added it to my 'Featured' section, so it's not forgotten.
None of these workarounds does what's actually needed: create a dedicated archive for AI governance research. I want the work we do out there for developers, AI companies, policymakers, and fellow researchers to read and build upon for as long as it stays relevant.
Does it Have to Be This Way?
No, absolutely not. Why? Because I’ve seen it done … in Linguistics.
The Rutgers Optimality Archive is what I call an institution dedicated to the public good. It's where I published my MA thesis and a conference article. All you had to do was upload your work and provide an abstract and keywords, et voilà, your hard work was there for anyone to download and credit you for, no institutional affiliation required.
I need a minute to bask in that glow ... OK, enough basking. My work is still cited today. Why? Because anyone looking for it can easily find it. Because someone created an index for everything related to that theory of Linguistics. Because you don't need to 'know' someone to upload your work.
Enough About Me
Think about it for a minute. All we want in the governance field is to make AI safe, ethical, and responsible, and to guide its implementation toward a better future. How are we going to do that if most of what we produce is living on an algorithmic shelf gathering cyber dust? How will anyone benefit from what we do unless there's a place for them to see the work, study it, and maybe change course for the good of the world because of it?
Light Bulb
Who are we? Independent researchers, early-career scholars without institutional backing - people doing interdisciplinary work that doesn't fit neatly into existing categories.
What do we want? Well, that’s simple. Here are a few ideas off the top of my head:
- An open archive for AI Governance and Technical Safety similar to Rutgers'.
- Programs publishing cohort work on their websites.
- Existing platforms like arXiv lowering their entry thresholds.
Like I said earlier, the field is new, which means that we could do something about this issue now. We can bake the cake and eat it, too.
Until that happens, I'll be keeping Dante company and maybe exploring other circles.
