On LessWrong, where there are some good comments: https://www.lesswrong.com/posts/KnQs55tjxWopCzKsk/the-ai-timelines-scam


The fear is that Others (DeepMind, China, whoever) will develop AGI soon, so We have to develop AGI first in order to make sure it's safe, because Others won't make sure it's safe and We will. Also, We have to discuss AGI strategy in private (and avoid public discussion), so Others don't get the wrong ideas. (Generally, these claims have little empirical or rational backing; they're based on scary stories, not historically validated threat models.)
The claim that others will develop weapons and kill us with them by default implies a moral claim to resources, and a moral claim to be justified in making weapons in response.

Comments

Thank you so much for posting this. It is nice to see others in our community willing to call it like it is.

I was talking with a colleague the other day about an AI organization that claims:
1. AGI is probably coming in the next 20 years.
2. Many of the reasons we have for believing this are secret.
3. They're secret because if we told people about those reasons, they'd learn things that would let them make an AGI even sooner than they would otherwise.

To be fair to MIRI (who I'm guessing are the organization in question), this lie is industry standard even among places that don't participate in the "strong AI" scam. Not just in how any data-based algorithm engineering is 80% data cleaning while everyone pretends the power is in having clever algorithms, but also in how startups use human labor to pretend they have advanced AI, or in how short self-driving car timelines are a major part of Uber's value proposition.

The emperor has no clothes. Everyone in the field, when told, likes to think they were already aware of this fact, but it remains helpful to point it out explicitly at every opportunity.

This seems like selective presentation of the evidence. You haven't talked about AlphaZero or generative adversarial networks, for instance.

Not just in how any data-based algorithm engineering is 80% data cleaning while everyone pretends the power is in having clever algorithms

80% by what metric? Is your claim that Facebook could find your face in a photo using logistic regression if it had enough clean data? (If so, can you show me a peer-reviewed paper supporting this claim?)
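
For concreteness, here is a rough sketch (my own toy setup, not anything from your post or a paper) of how one might start probing a scaled-down version of that claim: plain logistic regression on raw pixel values for face identification, using the LFW faces bundled with scikit-learn. The dataset, resize factor, and baseline choice are all my assumptions, and this illustrates how to test the claim, not what the answer is.

```python
# Hedged sketch: probe a toy version of the "logistic regression + clean data"
# question. Assumes scikit-learn is installed; LFW is downloaded on first use.
from sklearn.datasets import fetch_lfw_people
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

faces = fetch_lfw_people(min_faces_per_person=70, resize=0.4)
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, random_state=0)

# Plain logistic regression on raw pixel values, no feature engineering.
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("logistic regression on raw pixels:", clf.score(X_test, y_test))
```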

Presumably you are saying something like: "80% of the human labor which goes into making these systems is data cleaning labor". First, I don't know if this is true. It seems like a hard claim to substantiate, because you'd have to get internal time usage data from a random sample of different organizations doing ML work. Anecdotes from social media are likely to lead us astray in this area, because "humans do most of the work that 'AI' is supposedly doing" is more of a "man bites dog" story and more likely to go viral.

But second... even if 80% of the hours spent are data cleaning hours, it's not totally clear how this is relevant. This could just as easily be a story about how general-purpose and easy-to-use machine learning libraries are, because "once you plug them in and press go, most of the time is spent giving the system examples of what you want it to do. (A child could do it!)"
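
To illustrate that reading, here is a minimal, hedged sketch (the dataset and model are arbitrary stand-ins, and it assumes scikit-learn is installed): once the labeled examples exist, the modeling code is only a few lines, which is consistent with most of the hours going into assembling and cleaning the examples rather than into the algorithm.

```python
# Minimal sketch of "plug it in and press go" with a general-purpose library.
# Assumes scikit-learn is installed; dataset and model choice are arbitrary.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)                     # the labeled examples
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), SVC())          # "press go"
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```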

startups use human labor to pretend they have advanced AI

A friend of mine started a software startup which did not pretend to use any advanced AI whatsoever. However, he still did most email interactions with users by hand in the early days, because he wanted a deep understanding of how people were using his product. The existence of companies not using AI to power their products in no way refutes the existence of companies that do! And if you read the links in your post, their takes are significantly more nuanced than yours: Woebot does in fact use AI, and the Google bot piece quotes a satisfied user ('“Everything was perfect,” Mr. Park said in an interview after conversing with the Google bot. “It’s like a real person talking.”').

I think a common opinion is that current deep learning tech will not get us to AGI, but that we have recently acquired important new abilities we didn't have before, that we are using those abilities to do cool stuff we couldn't previously do, and that it's possible we'll have AGI after acquiring some number of additional key insights.

Even if deep learning is a local maximum which has just gotten us a few more puzzle pieces (my personal view), it's possible that renewed interest in this area will end up producing AGI through some other means. I suspect that hype cycles in AI cause us to be overoptimistic about the ease of AGI during periods with lots of hype, and overly pessimistic during periods with little hype. (From an EA perspective, the best outcome might be if the hype dies down but EAs keep working on it, to increase the probability that AGI is built by an EA team.) But at the end of the day, throwing research hours at problems tends to result in progress, and right now lots of research hours are being thrown at AI problems. I also think researchers tend to make more breakthroughs when they are feeling excited and audacious. That's when I get my best ideas, at least.