
French AI guy Francois Chollet has his own definition of intelligence as a conversion ratio: the efficiency with which one can convert the information available into the ability to deal with future situations.
He says,
"A consequence of intelligence being a conversion ratio is that it is bounded. You cannot do better than optimality -- perfect conversion of the information you have available into the ability to deal with future situations.
You often hear people talk about how future AI will be omnipotent since it will have "an IQ of 10,000" or something like that -- or about how machine intelligence could increase exponentially. This makes no sense. If you are very intelligent, then your bottleneck quickly becomes the speed at which you can collect new information, rather than your intelligence.
In fact, most scientific fields today are not bounded by the limits of human intelligence but by experimentation."
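Rendered schematically (my own loose sketch, not Chollet's formal definition, which is considerably more careful), the ratio idea looks something like:

\[
\text{intelligence} \;\approx\; \frac{\text{ability to deal with future situations}}{\text{information available}}
\]

On this picture, 'optimality' is just the ceiling of that ratio: once you are extracting all the ability the available information supports, the only way to become more capable is to gather more information.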
It's certainly interesting, but I'm not entirely convinced.
There's the matter of speed, and the fact that electronics is incomparably faster than human neurobiology. Even if a robot and I are both performing optimal conversions, the robot's ability to complete them in milliseconds is not to be dismissed.
There are issues of recognizing, accessing and knowing all the information we might already have available. There's that ancient 'universe in a grain of sand' idea. If the universe is somehow holographic and all information about everything is already present everywhere in everything, then the optimal conversion ratio might, speculatively, be infinite.
And there's a deeper, more difficult-to-define issue, I think. Can this definition of intelligence really account for the difference between a human and a dog? Perhaps we could argue 'yes', that the human is simply much better at converting past experience into the ability to deal with future situations. But that's uninformative unless we inquire into the details.
Can we really say that we know what optimality in data conversion consists in? Or how far away from it we currently are? The human evolution of language - and by means of language, abstract and conceptual thought - was something of a quantum leap that's totally inconceivable from the point of view of a dog. (To conceive of it would require precisely the conceptual thought that the dog lacks.)
So might there be other, better ways of thinking that are inconceivable to humans because we never evolved the ability to think in those ways? If there are, and if AIs acquire the ability to access them, humans might end up in the same relationship to the AIs that dogs have to us.
Optimality might be a lot further away than we imagine (or possibly can imagine).