Author: Sam Hilton, AIM Director of Research

In March 2021 I received an odd letter. It was from a guy I didn't know, David Quarrey, the UK's National Security Advisor. The letter thanked me for providing external expertise to the UK government's Integrated Review, which had been published that morning. It turns out that the Integrated Review had made a public commitment to "review our approach to risk assessment" ... "including how we account for interdependencies, cascading and compound risks". This is something I'd been advocating for over the previous few months by writing a policy paper and engaging with politicians and civil servants. It's hard to know how much my input changed government policy, but I couldn't find much evidence of others advocating for this. I had set myself a 10-year goal to "have played a role in making the UK a leader in long-term resilience to extreme risks, championing the issue of extreme risks on the global stage", and I seemed to be taking steps in that direction.

After a few years working on, and, I believe, successfully changing, UK policy a number of times, I came away with the view that policy change is really just not that hard. You think carefully about what you can change, tell policy people what they need to do, network a lot to make sure that they hear you, and then sometimes they listen and sometimes they don't. But when they do, you have pushed on a big lever and the world moves.

It has surprised me a bit, being at CE (now AIM), to find that our incubatees are not that keen on this indirect approach to changing the world. Policy work has slow feedback loops, it can be hard to measure, and what are you even doing in a policy role anyway?! And I get that. But it is a damn big lever to just ignore.

So, firstly, I would like to share AIM's guide to launching a policy NGO. This is a document that I and others have been working on internally at AIM to help founders understand what policy roles are like, how to drive change, what works and what does not, how to measure impact, and so on. It is not the full program content, but it should give you a decent taste of the kind of support we can provide.

Secondly, I would like to note that AIM wants more founders who would be excited to start a policy organisation. If you think you could be plausibly excited about founding a policy organisation (or any of our upcoming recommended ideas!), I encourage you to apply for the Incubation Program here.

Comments
Thank you so much for writing and sharing this resource, Sam, and again I can't thank you enough for your support in helping us launch our policy org ORCG; we quite probably could not have done it without you.
