
TL;DR Applications are open for Catalyze's upcoming incubation program for technical AI safety research organizations. To join the program, apply by 8 September (extended). 

Apply now

What will this program involve?

The main components of the program are:

  • Finding a co-founder: Thoroughly test your co-founding fit with others who complement your skills and share your values.
  • Mentors & advisors: Access support from the Catalyze Impact team and external industry experts & experienced entrepreneurs.
  • Funding opportunities: Connect with our non-profit seed funding circle and potential investors, while being supported by a requestable stipend.
  • Network & community: Immerse yourself in a group of fellow AI safety founders and grow your network in the London AI safety ecosystem.
  • Building up your organization: Work on the priorities of your early-stage organization, drawing on external support whenever you want to.

Phase 1 of the program (Nov/Dec, online) focuses on testing your co-founding fit with other participants, primarily through collaboration projects. These projects give you a sense of how well you work together while you further develop your research organization proposals.

Towards the end of Phase 1, you will evaluate whether you want to commit to moving to Phase 2 with the co-founder you have found, and we will assess whether you and your co-founder are a good fit for it.

Throughout Phase 2 (January, London), you and your co-founder(s) will work together in person on the very early stages of your organization. While you focus on taking the next steps in building it, preparing to fundraise, and further stress-testing your co-founding fit, we provide various forms of support: office hours with a network of experienced mentors and advisors, a requestable stipend, networking opportunities, and access to seed funding.

Who is this program for?

You would be a great fit for the program if:

  1. You are highly committed, a self-starter, and have a lot of grit.
  2. You are motivated to contribute to AI Safety: You aspire to create meaningful, positive impact in the field and believe that it is a priority to prevent severely negative outcomes from AI.
  3. You have a scout mindset: You are open to alternative views, are inclined to explore new information and arguments, and weigh these critically, changing course when the evidence warrants it.
  4. You either have a preliminary plan or agenda for an AI Safety research organization, or are willing to collaborate with someone who does. You will be able to develop this further during the program.

Additionally, you ideally fit one of the following three profiles:

  • Technical Profile: You have technical research or engineering experience, combined with a good understanding of AI Safety (e.g. you have worked as a researcher within AI Safety or a closely related field, completed MATS, and/or conducted several research projects within AI).
  • Operational/Entrepreneurial Generalist: You have an operational and entrepreneurial skill set (e.g. finance, legal, HR, contracts, systems and processes) or have founded organizations in the past, and you have a reasonable understanding of AI Safety, equivalent to completing BlueDot’s Safety Fundamentals Alignment or Governance track.
  • All-Rounder: You have a mix of the technical and operational skills in the other profiles and at least a reasonable understanding of AI Safety.

During the program we will help form founding teams that combine strong entrepreneurial know-how with technical knowledge. If you are not sure whether you meet all of these criteria, please apply anyway. The application process is designed to help both you and us gain more clarity on your fit for co-founding an AI safety research organization.

More information

For more information on the program and an FAQ, please visit our website.
