My intuition is that there are heaps of very talented people interested in AI Safety, but roughly 1/100th as many jobs.

A second intuition I have is that the rejected talent WON'T spill over into other cause areas much (biorisk, animal welfare, whatever) and may even spill over into capabilities!

Let's also assume more companies working towards AI Safety is a good thing (I'm not super interested in debating this point).

How do we get more AI Safety companies off the ground??

4 Answers

Hey Yanni!

Quick response from CE here as we have some insight on this: 

a) CE is not funding-limited, and we do not think AI is an area we will work on in the future, regardless of available funding in the space (we have been offered funding for this many times in the past). You can see a little bit about our cause prioritization here and here.

b) There are tons of organizations that aim, or have aimed, to do this, including Rethink Priorities, Impact Academy, the Center for Effective Altruism, and the Longtermist Entrepreneurship Project.

c) An interesting question might be why there has not yet been huge output from other incubators, given the substantial funding and unused talent in the space. I think the two best responses on this are the post-mortem from the Longtermist Entrepreneurship Project and a post we wrote about the tips and challenges of starting incubators.

You've given lots of reasons here, and cited posts which also give several reasons. However, I feel like this hasn't stated the genuine crux - which is that you are sceptical that AI safety is an important area to work on.

Would you agree this is a fair summary of your perspective? 

As shown in this table, 0% of CE staff (including me) identify AI as their top cause area. People's reasons vary across the team but cluster around something close to epistemic scepticism. My personal perspective is also in line with that.

I really want to get to the bottom of this, because it seems like the dominant consideration here (i.e. the crux). 

"identify AI as their top cause area."

Not a top cause area ≠ Not important 

At the risk of being too direct: do you, as an individual, believe AI safety is an important cause area for EAs to be working on?

DC

I'm reminded that I'm two years late on leaving an excoriating comment on the Longtermist Entrepreneurship Project postmortem. I have never been as angry at a post on here as I was at that one. I don't even know where to begin.

yanni
I'm not sure what to make of your comment but interested to hear more.
Robi Rahman
You mean this? https://forum.effectivealtruism.org/posts/z56YFpphrQDTSPLqi/what-we-learned-from-a-year-incubating-longtermist If so, what part of it do you object to?

Hey Joey - this is an extremely helpful response. Thanks for making the effort! 

Nonlinear was this, and then...

Catalyze Impact is a new organization focused on incubating AI Safety research organizations. https://www.catalyze-impact.org/ 

thanks!

If someone reading this wants to give me money, I could reach out to CE and see if this is something we could set up.

This comment was confusing - I meant I could set something up with CE's advice.
