
(Also posted on LessWrong)

This is one of my final projects for the Columbia EA Summer 2022 Project Based AI Safety Reading Group (special thanks to facilitators Rohan Subramini and Gabe Mukobi). If you're curious you can find my other project here.

Summary

In this project, I:

  1. Derive by hand the optimal configurations (architecture and weights) of "vanilla" neural networks (multilayer perceptrons with ReLU activations) that implement basic mathematical functions (e.g. absolute value, minimum of two numbers)
  2. Identify "features" and "circuits" that are reused across networks modeling different mathematical functions
  3. Verify these theoretical results empirically (in code)
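As a flavor of the kind of construction involved (an illustrative sketch, not the exact derivation from the slides): the absolute value function has a well-known exact ReLU implementation, |x| = ReLU(x) + ReLU(-x), which a single-hidden-layer MLP with two hidden units computes exactly.

```python
import numpy as np

def relu(v):
    return np.maximum(0.0, v)

def abs_net(x):
    """Absolute value as a one-hidden-layer ReLU MLP.

    Hidden layer (weights [1, -1], no biases): h = [relu(x), relu(-x)].
    Output layer (weights [1, 1]): relu(x) + relu(-x) = |x|.
    """
    h = relu(np.array([x, -x]))  # hidden activations
    return h[0] + h[1]           # output unit sums both hidden units
```

Exactly one hidden unit fires for any nonzero input, which is part of why constructions like this turn out to be optimal rather than merely approximate.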

What follows is a brief introduction to this work. For full details, please see:

  • The linked video (also embedded at the bottom of this post)
  • Or if you prefer to go at your own pace, the slides I walk through in that video

Motivation

Olah et al. make three claims about the fundamental interpretability of neural networks (from their Distill article "Zoom In: An Introduction to Circuits"):

  1. Features: Features are the fundamental unit of neural networks, corresponding to directions in activation space.
  2. Circuits: Features are connected by weights, forming circuits.
  3. Universality: Analogous features and circuits form across models and tasks.

They demonstrate these claims in the context of image models (see the features/circuits and universality examples in that article).

This work demonstrates that the same concepts apply to neural networks modeling basic mathematical functions.

Results

Specifically, I show that the optimal network for calculating the minimum of two arbitrary numbers is fully constructed from smaller "features" and "circuits" used across even simpler mathematical functions. Along the way, I explore:

  • "Positiveness" and "Negativeness" Detectors
  • Identity Circuits (i.e. f(x) = x)
  • Negative Identity Circuits (i.e. f(x) = -x)
  • Subtraction Circuits (i.e. f(x1, x2) = x1 - x2)
  • "Greaterness" Detectors
  • And More
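As a minimal sketch of what some of these circuits look like written out as ReLU expressions (my own illustrative reconstruction; the exact weight derivations are in the slides and video):

```python
import numpy as np

def relu(v):
    return np.maximum(0.0, v)

def identity_circuit(x):
    # f(x) = x: exactly one of the two units fires, recovering x.
    return relu(x) - relu(-x)

def negative_identity_circuit(x):
    # f(x) = -x: the same two units with output weights flipped.
    return relu(-x) - relu(x)

def subtraction_circuit(x1, x2):
    # f(x1, x2) = x1 - x2: the identity circuit applied to the difference.
    return relu(x1 - x2) - relu(x2 - x1)

def greaterness_detector(x1, x2):
    # Fires (is positive) exactly when x1 > x2.
    return relu(x1 - x2)
```

Note how the pieces compose: the subtraction circuit is just the identity circuit applied to x1 - x2, and the "greaterness" detector is half of that circuit.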

Minimum Network:
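For intuition, here is one standard hand-built minimum network, assuming the identity min(x1, x2) = x1 - ReLU(x1 - x2) and expressing the leading x1 via the identity circuit ReLU(x1) - ReLU(-x1). This is a sketch of the general construction; the derivation in the slides may differ in presentation.

```python
import numpy as np

def relu(v):
    return np.maximum(0.0, v)

# Hidden layer: three ReLU units over inputs (x1, x2), no biases.
W1 = np.array([[ 1.0,  0.0],   # relu(x1)       -- identity circuit, half 1
               [-1.0,  0.0],   # relu(-x1)      -- identity circuit, half 2
               [ 1.0, -1.0]])  # relu(x1 - x2)  -- "greaterness" detector

# Output layer: x1 - relu(x1 - x2) = min(x1, x2).
W2 = np.array([1.0, -1.0, -1.0])

def min_net(x1, x2):
    h = relu(W1 @ np.array([x1, x2]))  # hidden activations
    return W2 @ h                       # single output unit
```

When x1 <= x2 the third unit is silent and the network passes x1 through unchanged; when x1 > x2 the third unit subtracts exactly the excess, leaving x2.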

I also demonstrate that each of these theoretical results holds in practice. The code for these experiments can be found on the GitHub page for this project.

Full Details

For full details, please see the PDF presentation in the GitHub repo or watch the full video walkthrough:

Comments



Given that you have just published this on the forum, I have not yet finished watching the video, but it is playing in the background on 1.5x speed. 

Your project is valuable to me since I am not up-to-date with my knowledge of the state of interpretability research, and I suspect that your project and manner of explanation will help in this regard. Beyond that value, interpretability is simply interesting. I would very likely watch more video explanations of this nature on topics in AI Safety, interpretability, alignment, etc., which leads me to my question: Do you intend to continue to upload videos like the one you've uploaded today?

I really wish more EAs included video explanations / tutorials to supplement their work. 

Thank you for posting this on the forum, and especially for creating the video. 

Thanks for the comment and for watching! I don't currently have any future videos planned, but I'd definitely consider it if there's interest. I'm also a fan of learning via videos, and you're right that there aren't that many in the AI Safety space. (Robert Miles is the only AI Safety YouTuber I'm aware of. Absolutely worth checking out if you're interested in this kind of stuff.)
