Meefburger

6 karma · Joined Jan 2017

Comments (4)

> Let's see, 10 parties. If they all simultaneously decide on AI pausing at a 20 percent chance that's 0.2^10 = a number that's basically 0.

 

I don't think you should treat these probabilities as independent. The intuition that a global pause is plausible comes from these states' interest in a moratorium being highly correlated: the reasons for wanting a pause rest on facts about the world that everyone has access to (e.g. AI is difficult to control) and on motivations that are fairly general (e.g. powerful, difficult-to-control influences in the world are bad from most people's perspective, plus the other things that Matthew mentioned).
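To make that concrete, here's a toy Monte Carlo sketch (my own illustration; the shared-factor model, the 20% marginal probability, and the correlation values are all made up for the example). Each state pauses when a common factor plus state-specific noise clears a threshold, and the chance that all ten pause rises dramatically with the correlation:

```python
# Toy model: 10 states each pause with 20% marginal probability, partly driven
# by a shared factor. "rho" is the pairwise correlation between states.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_states, n_trials, p_marginal = 10, 200_000, 0.2
threshold = norm.ppf(1 - p_marginal)  # each state clears this 20% of the time

for rho in (0.0, 0.5, 0.9):
    z = rng.standard_normal((n_trials, 1))            # shared evidence/motivation
    eps = rng.standard_normal((n_trials, n_states))   # state-specific factors
    signal = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps
    p_all = np.mean((signal > threshold).all(axis=1))
    print(f"rho={rho}: P(all 10 pause) ~ {p_all:.2e}")

print(f"independence benchmark: 0.2**10 = {0.2**10:.2e}")
```

With rho = 0 the estimate is essentially zero, matching the 0.2^10 benchmark; with strongly correlated reasons, the joint probability is many orders of magnitude higher.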

I'm a little confused by the focus on a global police state. If someone told me that, in the year 2230, humans were still around and AI hadn't changed much since 2030, my first guess would be that this was mainly accomplished by some combination of very strong norms against building advanced AI and treaties/laws/monitoring/etc. that focus on the hardware used to create advanced AI, including its supply chains and what that hardware is used for. I would also guess that this required improvements in our ability to distinguish dangerous computing (and the hardware that enables it) from benign computing and hardware. (Also, hearing this would be a huge update for me that the world is structured such that this boundary can be drawn in a way that doesn't require us to monitor everyone all the time to see who is crossing it. So maybe I just have a low prior on this kind of police state being a feasible way to limit the development of technology.)


Somewhat relatedly:

> Given both hardware progress and algorithmic progress, the cost of training AI is dropping very quickly. The price of computation has historically fallen by half roughly every two to three years since 1945. This means that even if we could increase the cost of production of computer hardware by, say, 1000% through an international ban on the technology, it may only take a decade for continued hardware progress alone to drive costs back to their previous level, allowing actors across the world to train frontier AI despite the ban.

If there were a ban that drove up the price of hardware by 10x, wouldn't that be a severe disincentive to keep developing the technology? It seems like the large profitability of computing hardware has been a necessary ingredient in its rapid development and falling cost.
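For reference, here's the arithmetic behind the quoted "decade" figure, assuming the historical halving rate simply continues under a ban, which is exactly the assumption I'm questioning:

```python
# A 10x cost increase is undone after log2(10) ~ 3.3 halvings of compute cost.
import math

cost_multiplier = 10          # "increase the cost ... by, say, 1000%"
halvings_needed = math.log2(cost_multiplier)
for years_per_halving in (2, 3):
    print(f"{years_per_halving}-year halving time: ~{halvings_needed * years_per_halving:.0f} years")
```

That gives roughly 7 to 10 years, hence "a decade" — but only if hardware progress keeps its historical pace despite the ban.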

 

Overall, I thought this was a good contribution. Thanks!

I think it's reasonable to go either way on StarCraft. It's true that the versions of AlphaStar from three years ago were not beating the best humans more than half the time, and they did not take screen pixels as inputs.

But those models were substantially inhibited in their actions per minute, because computers that can beat humans just by being fast are boring. Given that the version of AlphaStar that beat MaNa was already throttled (albeit not in the right way to make it play like a human), I don't see why an AI with no APM restrictions couldn't beat the best humans. And I don't see any particular reason you couldn't train an image classifier to get from screen pixels to AlphaStar's inputs.
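To gesture at what I mean, here's a hypothetical sketch (assuming a PyTorch setup; the layer sizes and the ten made-up feature channels are placeholders, not AlphaStar's actual input interface) of a network mapping screen pixels to a per-cell feature map:

```python
# Sketch of a "screen pixels -> structured features" model. Everything here is
# a placeholder architecture, not AlphaStar's real interface.
import torch
import torch.nn as nn

class PixelsToFeatures(nn.Module):
    def __init__(self, n_feature_channels: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_feature_channels, kernel_size=1),  # per-cell feature logits
        )

    def forward(self, screen: torch.Tensor) -> torch.Tensor:
        # screen: (batch, 3, H, W) RGB frames -> (batch, channels, H/4, W/4) feature map
        return self.net(screen)

model = PixelsToFeatures()
frame = torch.rand(1, 3, 256, 256)   # dummy 256x256 screen capture
print(model(frame).shape)            # torch.Size([1, 10, 64, 64])
```

Getting something like this to work well is obviously harder than writing it down, but it's a fairly standard supervised-vision problem rather than a fundamental obstacle.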

So I think this mostly comes down to whether you think the prediction implied a realistic APM limit, and what your bar is for "feasible".

> I would be surprised if more than 5% of people who do Introductory EA fellowships make a high impact career change.

 

Do you have a sense of the fraction of people who do introductory fellowships and then make some attempt at a high-impact career change? A mundane way for the number to come in under 5% would be if lots of people apply to a bunch of jobs or degree programs, some of which are high impact, and then take something lower impact before getting an offer for anything high impact.