I think it is worth noting that Ambitious Impact (formerly known as Charity Entrepreneurship) has jumped into the extremely competitive world of for-profit startups BEFORE trying to help build the AI Safety space.

Some quick thoughts/background:

1. The AI Safety space has LOADS of very smart people who can't get jobs because there aren't enough organisations to hire them. It might be the biggest bottleneck in the cause area. Meanwhile, capabilities literally has dozens of billions of dollars being thrown at it.

2. For-profit entrepreneurship isn't in Ambitious Impact's (AI, lol) top cause areas.

3. I brought this up in the past, and Joey responded in this post. I think his post was overall a useful start: specific in some ways but vague in others. Vague in a 'hey, maybe you should look into this but I won't tell you why' kind of way.

Here is what I think is going on: there are people (maybe including Joey) 'in-the-know' about some things that make creating longtermist/AI safety startups really hard, but some of those reasons aren't being discussed publicly out of fear of shaming people for their failures and/or reluctance to put their money where their mouth is on x-risk.

I think we need a public discussion about what's going on here. Our lives may literally depend on it, even if Ambitious Impact doesn't think so.

"As shown in this table 0% of CE staff (including me) identify AI as their top cause area. I think across the team people's reasons are varied but cluster around something close to epistemic scepticism. My personal perspective is also in line with that."

A quote from Joey replying to your last post. Why would you start an org around something none of your staff have as their top cause area? All CE charities to date have focused on global development or animal welfare; why would they switch focus to AI now? It doesn't seem so mysterious to me, anyway.

"All CE charities to date have focused on global development or animal welfare"

CE incubated Training for Good, which runs two AI-related fellowships. They didn’t start out with an AI focus, but they also didn’t start out with a GHD or animal welfare focus.

AIM simply doesn't rate AI safety as a priority cause area. It's not any particular organisation's job to work on your favourite cause area. They are allowed to have a different prioritisation from you.

I think Yanni isn't writing about personal favourites. Assuming there is such a thing as objective truth, it makes sense to discuss cause prioritization as an objective question.

Hmmm, I think the fact that you felt this was worth pointing out AND that people upvoted it means that I haven't made my point clear. My major concern is that there are things known about the challenges of incubating longtermist orgs that aren't being discussed openly.

Maybe I misunderstood you.

I think AIM doesn’t constitute evidence for this. Your top hypothesis should be that they don’t think AI safety is that good of a cause area, before positing the more complicated explanation. I say this partly based on interacting with people who have worked at AIM.

Just as the EA community does not own its donors' money -- one of the most upvoted posts ever -- it also doesn't own the financial sacrifices people at A.I. make to do the work they think is important. People who donate to, and work at, A.I. know that it has a neartermist focus.

Looking at funding trends over the past few years, it seems relatively easier for new/newish AI safety organizations to get supported than new/newish global health or animal advocacy organizations. For example, Redwood got over $20MM in funding from EA sources in the first ~2 years of its existence. Although the funding bar may be higher now than when those grants were made, I'm not convinced that the bottleneck here is that new AI safety orgs can't get the support needed to launch.

Sorry, it is so confusing to refer to AIM as 'A.I.', particularly in this context...

Yeah, that was me attempting to be a bit cheeky, but it probably wasn't worth the loss of clarity.

"The AI Safety space has LOADS of very smart people who can't get jobs because there aren't enough organisations to hire them. It might be the biggest bottleneck in the cause area. Meanwhile, capabilities literally has dozens of billions of dollars being thrown at it."

Is a lack of organizations really the problem? For technical AI safety research, at least, I hear research management capacity is a bottleneck. A new technical AI safety org would compete with the others over the same pool of potential research managers.

Another issue could be that few interventions seem net positive (maybe things have changed since that comment 3 years ago).

That's an interesting hypothesis. I think "seem" is an important word, because it points to something I see as another issue: inaction stemming from conservativeness, which lets capabilities pull even further ahead.

TBH this is me putting my tin foil hat on a bit, but even if my most paranoid thoughts are ruled out, it's still a weirdly under-discussed issue in the space, and I'm cashing in all my chips for the Amnesty Week thing. Yolo.
