https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf

This paper, produced by the Future of Humanity Institute, is fairly heavy for me to digest, but I think it reaches conclusions similar to a profound concern I have:

- "Intelligence" does not necessarily need to have anything to do with "our" type of intelligence, where we steadily build on historic knowledge; indeed this approach naturally falls prey to preferring "hedgehogs" (as compared to "foxes" in the hedgehogs v foxes compairson in Tetlock's "superintelligence") - who are worse than random at predicting the future;

- With the latest version of AlphaZero, which quickly reached superintelligent levels in three different game domains with no human intervention, we have to face the uncomfortable truth that AI has already far surpassed our own level of intelligence.

- that corporations, as legal persons with profit maximisation at their core (a value orthogonal to the values that cause humanity to thrive), could rapidly become extremely dominant once this type of AI is used across all the tasks they are required to perform;

- that this represents a real, deep and potentially existential threat that the EA community should take extremely seriously. It is also at the core of the increasingly systemic failure of politics;

- that this is particularly difficult for the EA community to accept given the high status they place on their intellectual capabilities (and status is a key driver in our limbic brain, so it will constantly play tricks on us);

- but that unless EAs are far more intelligent than Kasparov, Lee Sedol and all those who play these games, this risk should be taken very seriously;

- that potentially the prime purpose of politics should thus be to ensure that corporations act in a way that is value-aligned with the communities they serve, with international coordination as necessary;

- I will give £250 to a charity of the choice of the first person who can point out a flaw in my argument that is not along the lines of "you are too stupid to understand".


To the title question - no, it's not, for we have not observed any superintelligent behavior.

"Intelligence" does not necessarily need to have anything to do with "our" type of intelligence, where we steadily build on historic knowledge; indeed this approach naturally falls prey to preferring "hedgehogs" (as compared to "foxes" in the hedgehogs v foxes compairson in Tetlock's "superintelligence")

Foxes absolutely build on historic knowledge. "Our" (i.e. human) intelligence can be either foxlike or hedgehoglike; after all, both were featured in Tetlock's research. In any case, this is not what FHI means by the idea of a unitary superintelligent agent.

and hedgehogs are worse than random at predicting the future;

Hedgehogs are not worse than random at predicting the future; in Tetlock's research they were only modestly less accurate than foxes.

AI has already far surpassed our own level of intelligence

Only in some domains - and computers have been better than humans in some domains for decades anyway (e.g. arithmetic).

this represents a real, deep and potentially existential threat that the EA community should take extremely seriously.

The fact that corporations maximize profit is an existential threat? Sure, in a very broad sense it might lead to catastrophes. Just as happiness-maximizing people might lead to a catastrophe, career-maximizing politicians might lead to a catastrophe, security-maximizing states might lead to a catastrophe, and so on. That doesn't mean that replacing these things with something better is feasible, all things considered. And we can't even talk meaningfully about replacing them until a replacement is proposed. AFAIK the only way to avoid profit maximization is to put business under public control, but that just replaces profit maximization with vote maximization, and possibly creates economic problems too.

It is also at the core of the increasingly systemic failure of politics

Is there any good evidence that politics is failing more than it used to, let alone failing systemically?

Before you answer, think carefully about all the other 7 billion people in the world besides Americans/Western Europeans. And what things were like 10 or 20 years ago.

this is particularly difficult for the EA community to accept given the high status they place on their intellectual capabilities

I don't know of any evidence that EAs are irrational or biased judges of the capabilities of other people or software.

potentially the prime purpose of politics should thus be to ensure that corporations act in a way that is value-aligned with the communities they serve, with international coordination as necessary.

The prime purpose of politics is to ensure security, law and order. This cannot be disputed, as every other policy goal is impossible until governance is achieved, and anarchy is the worst state of affairs for people to be in. Maybe you mean that the most important political activity for EAs, at the margin right now, is to improve corporate behavior. Potentially? Sure. In reality? Probably not, simply because there are so many other possibilities that must be evaluated as well: foreign aid, defense, climate change, welfare, technology policy, etc.

It seems unfortunate that nobody can be bothered to read the underlying paper, written by Eric Drexler, a senior Oxford Martin fellow, which completely reframes the AI debate into something far from paper clips and much closer to reality. If a human sat down at chess, Go and shogi and, simply by playing the games, became far better than any other human in a couple of weeks, we would all see this as superintelligence. That this achievement is so easily dismissed shows, to me, a complete unwillingness to deal with reality as it is.

Context: the report is 190 pages long and was published this month. Those who are reading it seem unlikely to reply with detailed analysis on this particular Forum post.

Object-level response: becoming excellent at chess, go, and shogi is interesting, since it is more general than being excellent at any one alone. My impression is that the AI safety community recognises the importance of milestones like this. It is simply the case that superintelligence typically means something far more general still, such as

an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills

which does not describe an AI that can play only a specific set of games.

Since we have now discovered that the disagreement is merely a matter of definitions, hostilities can cease :)

Hi Kit - Happy New Year!

Thanks for that - yes, I hope a more digestible summary will be produced. I am not intending to be hostile at all; I am just very worried about the AI issue. I simply see it as a different issue from the one the EA community focuses on, and much more like the one described in the paper, hence my purpose in raising it.

I think humans are not particularly generally intelligent; rather, they become programmed/conditioned to be relatively good at a number of tasks necessary to survive in their environment (e.g. a baby chucked into the rainforest will not survive as long as the much less intelligent animals that live there). Indeed, my worry is that we are surprisingly stupid and manipulable as a broad group: as a species, our driving motivators (fear, status) generally create the narrative in our cognitive consciousness, and our "blind spot" is the belief that we are much smarter than we are. In the US and the UK the political process has become paralysed as seemingly logical statements, apparently addressed to our conscious brain, are actually playing to our deep subconscious motivators, creating a ridiculous tribalism far removed from any form of logic.

We perhaps "feel" intelligent because we create complex intellectual frameworks that explain things in detail, but this is really a process of "mapping the territory". Its hollowness is shown by AlphaZero in, for example, shogi, chess and Go: despite the many thousands of years of academic study poured into these games, repeatedly mapping the territory, all of it was blown aside by a self-improving algorithm working out "what fits". Maps in the real world might be good talking points, but they are simply nowhere near accurate enough at a human level of intelligibility.

As an investment banker I never had much interest in mapping the territory (despite being logical), but I was interested in "the best way to get from here to there avoiding the obstacles" (I did not care how, as long as it worked). And this is how life in general (outside academia) works: "how can I profit-maximise doing x without breaking any laws (better still if I can find a clever way around the laws)?". And with increasingly powerful self-improving algorithms this ends up in the kind of dystopia shown in this video from Yuval Noah Harari and Tristan Harris - "supercomputers" (superintelligence) pointed at our brains: https://www.youtube.com/watch?v=v0sWeLZ8PXg

In all of this I know it is hard for EAs to engage properly: status (a powerful deep motivator) is gained in any community by largely agreeing with the norms of that community, and I know my views are far from normal in this community, so status is gained by rejecting what I say. But since we share the same deep values - we want the world to be the best place it can be, which is something very much other than making the most money possible for already rich shareholders - and since I have huge belief in the potential of and need for the EA community, you will forgive me if I keep trying.

If we avoid this dystopian near future of "superintelligent multi-level marketing", I hope the future will be more like the one suggested by Steven Strogatz (https://www.nytimes.com/2018/12/26/science/chess-artificial-intelligence.html), which would leave the key remaining challenge as one of creating a mechanism for ensuring value alignment:

"But envisage a day, perhaps in the not too distant future, when AlphaZero has evolved into a more general problem-solving algorithm; call it AlphaInfinity. Like its ancestor, it would have supreme insight: it could come up with beautiful proofs, as elegant as the chess games that AlphaZero played against Stockfish. And each proof would reveal why a theorem was true; AlphaInfinity wouldn’t merely bludgeon you into accepting it with some ugly, difficult argument.

For human mathematicians and scientists, this day would mark the dawn of a new era of insight. But it may not last. As machines become ever faster, and humans stay put with their neurons running at sluggish millisecond time scales, another day will follow when we can no longer keep up. The dawn of human insight may quickly turn to dusk.

Suppose that deeper patterns exist to be discovered — in the ways genes are regulated or cancer progresses; in the orchestration of the immune system; in the dance of subatomic particles. And suppose that these patterns can be predicted, but only by an intelligence far superior to ours. If AlphaInfinity could identify and understand them, it would seem to us like an oracle.

We would sit at its feet and listen intently. We would not understand why the oracle was always right, but we could check its calculations and predictions against experiments and observations, and confirm its revelations. Science, that signal human endeavor, would reduce our role to that of spectators, gaping in wonder and confusion.

Maybe eventually our lack of insight would no longer bother us. After all, AlphaInfinity could cure all our diseases, solve all our scientific problems and make all our other intellectual trains run on time. We did pretty well without much insight for the first 300,000 years or so of our existence as Homo sapiens. And we’ll have no shortage of memory: we will recall with pride the golden era of human insight, this glorious interlude, a few thousand years long, between our uncomprehending past and our incomprehensible future."
