All of klevanoff's Comments + Replies

Carrick, this is an excellent post. I agree with most of the points that you make. I would, however, like to call attention to the wide consensus that exists on the risks of acting prematurely.

As you observe, there are often path dependencies at play in AI strategy. Ill-conceived early actions can amplify the difficulty of taking corrective action at a later date. Under ideal circumstances, we would act only under conditions as close to certainty as possible. Achieving this ideal, however, is impractical for several interrelated reasons:

  1. AI strategy is replete with wicked

...
WillPearson
7y
I think it is important to note that in the political world there is a vision of two phases of AI development: narrow AI and general AI. Narrow AI is happening now; the predictions of 30+% job losses in the next 20 years are all about narrow AI. From my exposure to it, this is what people in the political sphere are preparing for. General AI is conveniently predicted to be more than 20 years away, so people aren't thinking about it: they don't know what it will look like, and they have problems today to deal with.

Getting the policy response to narrow AI right does have a large impact. Large-scale unemployment could destabilize countries, causing economic woes and potentially war. So perhaps people interested in general AI policy should get involved with narrow AI policy, while making it clear that this is the first battle in a war, not the whole thing. This would place them well and let them build up reputations. They could be in contact with the disentanglers so that when the general AI picture is clearer, they can make policy recommendations. I'd love it if the narrow-general AI split were reflected in all types of AI work.