
Epistemic Status:

This analysis draws on my interactions and experiences in the AI Safety field and the Effective Altruism movement. While rooted in firsthand insight, the validity of the arguments presented is subject to change as the field evolves, and they should be read with the acknowledgement that there may be nuances and perspectives not covered here. Many of the challenges below may simply be non-actionable, since typical startup/non-profit playbooks don't translate well to an early-stage field like AI safety.

TL;DR:

AI Safety offers immense potential impact but poses equally significant challenges for aspiring entrepreneurs. The barriers are manifold: the need for comprehensive impact plans, the difficulty of selling to nonprofits, an underdeveloped idea space, a scarcity of specialized talent, and a limited market size. However, the urgent importance of AI Safety necessitates innovative solutions.

Acknowledgements:

Thanks to Luca De Leo and Agustin Covarrubias for providing valuable feedback.

 

Introduction: 

AI Safety is an underserved area of work, so organizations like 80,000 Hours advise people to take up these kinds of jobs because they are important and impactful. However, far more people want to enter the field than it can absorb, which leads to high rejection rates, mounting frustration, and rising drop-off rates.

It doesn't help that many EAs/80,000 Hours advisees are probably high-potential and hardworking, which means they also have relatively high opportunity costs.

This underscores the urgent need for more AI safety-aligned organizations to absorb this talent and diversify the field.

 

Types of AI Safety Ventures:

 

There are three overarching types of AI Safety ventures, which can be for-profit or non-profit:

 

  • Infrastructure: Tooling, mentorship, training, or legal support for researchers.
  • New AI Safety Organizations: New labs or fellowship programs.
  • Advocacy Organizations: Raising awareness about the field.



Note that I will move back and forth between these three models of ventures in the challenges below.

 

Challenges:


 

1. Need for robust path-to-impact plans:

Entrepreneurs in the Effective Altruism space often find the stringent requirements for impact metrics and background research daunting. While this is not all bad, it puts many potential founders off because it conflicts with the entrepreneurial drive to iterate fast and do things that don't necessarily scale.

A more flexible approach, especially in the early stages, could encourage more entrepreneurs to take the plunge. So, should we simply not care about such metrics and give founders a clean slate? Absolutely not. Much of the non-profit ecosystem relies on robust and transparent impact reporting, but I think the bar should be a lot lower in the early stages. Microgrants or exploratory grants are a viable solution here (though they come with their own limitations).

 

2. Selling to (or providing services for) nonprofits isn't the best idea:

There are only a handful of AI Safety organizations right now, and most of the major ones are structured as nonprofits. Nonprofits often operate under tight budgets with limited discretionary spending. They rely on grants, donations, and other forms of funding, which can be unpredictable and subject to fluctuation.

Selling to a small market of nonprofits is risky and financially unappealing. 

Nonprofits often have stringent procurement processes governed by their boards, donors, and the need to adhere to certain regulations and guidelines. This can result in longer sales cycles and a slower adoption rate of new technologies or services. Entrepreneurs may find these processes cumbersome and time-consuming, potentially delaying revenue and impacting cash flow.


 

3. Infertile idea space:

Startup ideas often don't work out in their initial form, and the common advice at that stage is to pivot after updating on user interviews and newly discovered pain points. For pivots to be possible, Y Combinator, a well-known accelerator, suggests picking a fertile idea space, i.e., a relatively large field with lots of moving variables that might produce exploitable inefficiencies.


AI Safety doesn't seem like a fertile idea space yet: the number of funders, organizations, and researchers is small, so founders would find it hard to pivot to an alternative value proposition if things don't work out as planned, both because of the concentrated nature of the field and for the other reasons listed here. I feel there is higher-than-usual idea risk on top of the execution risk in a rapidly evolving AI field. We just don't know what the safety field will, or more crucially, should look like in the next 1-2 years, or whether any ideas we execute right now might end up being net negative. This makes idea development and prioritization quite challenging.


 

4. The right kind of people:

The right kind of founder is hard to find: they need to be at the intersection of caring about AI safety (or social impact) and having a background working, preferably, in early-stage organizations. In my experience, those who care about AI Safety would rather run off and work directly in the field and/or have short timelines, so they can't really wait for a venture to work out. Those drawn to entrepreneurship are put off by the small field, the small pool of funders, and the high risk relative to the reward.

This talent gap has always seemed innate to the field because of its nascent nature.

5. Limited Market Size:

AI safety research is a specialized and relatively small field. The target audience is narrow, which limits revenue opportunities. Even if you want to sustain yourself on grants, the number of potential funders seems limited, even relative to the already small pool of funders in the non-profit sector as a whole.

 

6. High Development Costs:

Developing niche tools or solutions in AI Safety often requires specialized knowledge and high development costs, adding another barrier to entry.


Concluding Thoughts:

 

I expect many of the variables here to change in the short term, given the recent AI boom (particularly with new regulatory bodies being proposed) and probable cross-industry applications of AI, which might channel some money into the application of AI safety standards.

Effective Altruism as a movement is very thinking-oriented and not doing-oriented. There is a real need for spaces within the movement to take risks and foster ideas.

Especially in the wake of the FTX crisis, I feel EA needs more such organizations to serve as reputationally self-sustaining anchors and to create mechanisms that attract more funders to the space. For this, there needs to be more focus on building organizations and opportunities for impact.


 


Comments:

You discuss three types of AI safety ventures:
 

  • Infrastructure: Tooling, mentorship, training, or legal support for researchers.
  • New AI Safety Organizations: New labs or fellowship programs.
  • Advocacy Organizations: Raising awareness about the field.

Where would, for example, insurance for AI products fit in this? This is a for-profit idea that creates a natural business incentive to understand & research risks from AI products at a very granular level, and if it succeeds, it puts you into position to influence the entire industry (e.g. "we will lower your premiums if you implement safety measure X").

I agree that if you restrict yourself to supporting AIS researchers, launching field-building projects or research labs, or doing advocacy, then you will in fact not find good startup ideas, both for the structural reasons you do a good job of listing in your post and because these are all things people are already doing.

METR is a very good AIS org. In addition to just being really solid and competent, a lot of why they succeeded was that they started doing something that few people were thinking about at the time. Everyone and their dog is launching an evals startup today, but the real value is finding ideas like METR before they are widespread. If the startup ideas you consider are all about doing the same thing that existing orgs do, you will miss out on the most important ones.

I do agree that the intersection of impact & profit & bootstrappability is small and hard to hit, and there's no law of nature that says something should definitely exist there. But if something does exist in that corner, it will be a novel type of thing.

(reposted from a Slack thread)

I've been thinking about this for a while now, and I agree with all of these points. Another route would be selling AI Safety solutions to non-AI-safety firms. This solves many of the issues raised in the post but introduces new ones. As you mention in the Infertile Idea Space section, companies often start with a product idea, talk to potential customers, and then end up building a different product to solve a bigger problem that the customer segment has. In this context, that could look like offering a product with an alignment tax, finding that customers aren't willing to pay it, and pivoting to something accelerationist instead. You might think "I would never do that!", but it can be very easy to fool yourself into thinking you are still having a positive impact when the alternative is the painful process of firing your employees and telling your investors (who are often your friends and family) that you lost their money.
