I've been working as a programmer for five years, and I recently made a late application to study International Affairs as a graduate student at Carleton. I have just been accepted and awarded the Joubin-Selig Scholarship. Provided I can complete two online economics courses in August with acceptable grades, I will start in the Fall.

This is a two-year master's program with a summer co-op option, and I'd like to use the co-op (and my future career) to help governments with AI.

I believe I have an unusually strong understanding of machine learning for someone whose interests primarily lean towards the humanities and social sciences (my undergrad is in philosophy from Oxford).

I am fluent in Python, have built several small neural networks, and have a reasonable understanding of the attention-based transformer architectures used by large language models (LLMs) like ChatGPT. (I have yet to go through "Attention Is All You Need" line by line alongside the PyTorch transformer implementation, but it's on my to-do list.) I have spent time informally with researchers at Mila, I keep up with the field, and I am confident I can dig deeper into both the academic literature and practical model implementations where useful.
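For anyone curious what "attention" actually computes, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer layer, written in PyTorch. The function name, tensor shapes, and toy example are purely illustrative and not taken from any particular codebase; this is the kind of thing I mean by understanding the architecture at the level of code.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_model) tensors of queries, keys, and values
    d_k = q.size(-1)
    # Similarity between every query and every key, scaled to keep values stable
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # (batch, seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)             # each row sums to 1
    return weights @ v                              # weighted mix of the values

# Toy example: one sequence of 4 tokens with 8-dimensional embeddings
x = torch.randn(1, 4, 8)
out = scaled_dot_product_attention(x, x, x)         # self-attention: q = k = v
print(out.shape)  # torch.Size([1, 4, 8])
```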

I'm not yet sure how I'd be most useful, but one possibility is raising a team's technical literacy to the point where its members can have productive conversations with computer scientists. Concretely, this might involve leading a group through online courses such as Andrew Ng's materials on Coursera and facilitating dialogue with interested people from research labs.

I believe that some grasp of how neural networks are built is necessary to understand the ways they may not behave as intended, and that this understanding is crucial to many, perhaps most, of the major decisions about AI that governments will face. Outside of perhaps a few industry labs, technical researchers are not primarily selected for their ability to engage with safety and governance issues, so governments need enough in-house technical know-how to critically assess those researchers' views when making governance decisions.

I suspect my technical background could plug some important gaps in an AI governance team's knowledge. If you work for the Canadian government and you'd like to work with me, please send me a DM!

Comments

Congrats on admission to Carleton! I'm finishing my MA in political science there this summer. We'd be happy to have you in the EA Carleton Discord if you haven't joined yet :) I'm not aware of any specific internships, but I can connect you with some people who might be. Feel free to reach out!

If it passes, Canada's proposed AI and Data Act (part of Bill C-27) will almost certainly involve hiring new employees at Innovation Canada. ISED also has staff working to support the AI startup ecosystem in Canada. Effective Altruism Canada is building momentum, and I know AI Governance and Safety (AIGS) Canada is working on advocacy.

Hi there - I work at AIGS Canada (aigs.ca).

I think there's a strong chance we could collaborate on something. At the very least, I can likely introduce you to others in Canada you could work with.

I'd be eager to hop on a call if you're interested. You can reach me by email at mario.gibney [at] aigs.ca, or, if you join our Slack (aigs.ca/newsletter-slack), I am very responsive there.

Hope to talk to you soon!
