Justifiable AI

#1
C C
https://www.ribbonfarm.com/2018/03/13/justifiable-ai/

EXCERPT: Can an artificial intelligence break the law? Suppose one did. Would you take it to court? Would you make it testify, to swear to tell the truth, the whole truth, and nothing but the truth? What if I told you that an AI can do at most two, and that the result will be ducks, dogs, or kangaroos?

There are many efforts to design AIs that can explain their reasoning. I suspect they are not going to work out. We have a hard enough time explaining the implications of regular science, and the stuff we call AI is basically pre-scientific. There’s little theory or causation, only correlation. We truly don’t know how they work. And yet we can’t stop anthropomorphizing the damned things. Expecting a glorified syllogism to stand up on its hind legs and explain its corner cases is laughable. It’s also beside the point, because there is probably a better way to accomplish society’s goals.

There is widespread support for AIs that can show their work. [...] But there’s a whiff of paradox to demanding human-scale relatability from machines designed to be superhuman. Imagine that you had AI that thinks out loud as it goes. You might get a satisfying narrative about why a specific decision was made, but there would be little basis to trust it. [...] Mathematicians have been struggling with something like this for decades. How can you trust a computer-generated proof that can only be checked by computer? One suspects a slug of question-begging, or a confusion about what “proof” really means. [...]

[...]

AIs are not going away, so we better learn to deal with them. One obstacle is that “explainable” is an overloaded term. Zachary Lipton recently made a good start on teasing out what we talk about when we talk about understanding AI.

Decomposability: This means that each part, input, parameter, and calculation can be examined and understood separately. You can start with an overall algorithm, e.g. looks-like + talks-like + walks-like = animal, then drill down into the implementation of looks-like, etc., and test each piece separately. Neural nets, especially the deep learning flavors, cannot easily be broken down like this.
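To make the idea concrete, here is a minimal, hypothetical sketch in Python of the looks-like + talks-like + walks-like rule. The feature functions and the threshold are invented for illustration, not taken from Lipton or the article; the point is simply that every part is a separate function you can inspect and test on its own, which is exactly what a deep net does not offer.

```python
# Toy decomposable classifier mirroring "looks-like + talks-like + walks-like = animal".
# All functions and thresholds are hypothetical illustrations.

def looks_like_duck(x: dict) -> float:
    """Each component is a plain function you can inspect and unit-test alone."""
    return 1.0 if x.get("has_feathers") and x.get("has_bill") else 0.0

def talks_like_duck(x: dict) -> float:
    return 1.0 if x.get("sound") == "quack" else 0.0

def walks_like_duck(x: dict) -> float:
    return 1.0 if x.get("gait") == "waddle" else 0.0

def is_duck(x: dict) -> bool:
    # The overall decision is just the sum of separately examinable parts.
    score = looks_like_duck(x) + talks_like_duck(x) + walks_like_duck(x)
    return score >= 2.0

print(is_duck({"has_feathers": True, "has_bill": True, "sound": "quack", "gait": "waddle"}))  # True
print(is_duck({"has_feathers": False, "sound": "bark", "gait": "trot"}))                      # False
```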

Simulatability via small models: Neural nets may not be easy to take apart, but many are small enough to hold in your head. Freshman compsci students are first led through the venerable tf/idf algorithm, which can be explained in a paragraph. Then the teacher drops the ah-ha moment, demonstrating how, with a couple of twists, tf/idf is equivalent to a simple Bayesian net. This is essentially the gateway drug to machine learning.
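For anyone who hasn't seen it, tf/idf really is paragraph-sized. Here is a rough sketch over a made-up three-document corpus; the Bayesian re-reading mentioned above is essentially a log-probability interpretation of the same counts and is not reproduced here.

```python
import math
from collections import Counter

# Paragraph-sized tf/idf sketch; the toy corpus is invented for illustration.
docs = [
    "the duck quacks and waddles",
    "the dog barks and runs",
    "the kangaroo hops and jumps",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

# Document frequency: in how many documents does each term appear?
df = Counter(term for doc in tokenized for term in set(doc))

def tfidf(term: str, doc_index: int) -> float:
    """Term frequency in one document, discounted by how common the term is overall."""
    tf = tokenized[doc_index].count(term) / len(tokenized[doc_index])
    idf = math.log(N / df[term]) if df[term] else 0.0
    return tf * idf

print(tfidf("duck", 0))  # distinctive term: nonzero weight
print(tfidf("the", 0))   # appears in every document: idf = log(1) = 0
```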

Simulatability via short paths: Even though you couldn't possibly read through a dictionary with a trillion entries, you can find any item in it in forty steps or fewer. In the same way, an AI's model may have millions of nodes, but be arranged so that making a single inference with it touches only a small number of them. In theory a human could follow along.
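The forty-step figure is just binary-search arithmetic: halving a trillion entries takes ceil(log2(10^12)) = 40 probes. A quick sketch with toy numbers, assuming the "dictionary" is kept sorted:

```python
import math

# Why "forty steps or fewer": binary search halves the remaining entries each probe.
entries = 1_000_000_000_000
print(math.ceil(math.log2(entries)))  # 40

def binary_search(sorted_items, target):
    """Each probe is a single, humanly followable comparison; the path is short."""
    lo, hi = 0, len(sorted_items) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid, steps
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

index, steps = binary_search(list(range(0, 2_000_000, 2)), 1_234_568)
print(index, steps)  # found in at most ~20 steps for a million entries
```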

Algorithmic transparency: This one is a bit more technical: it means that the algorithm’s behavior is well-defined. It can be proven to do certain things, like arrive at a unique answer, even on inputs we haven’t tested yet. Deep learning models do not come with this assurance because (surprise!) we have no idea how they actually work.
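For a sense of what such an assurance looks like, consider ordinary least squares: with a full-column-rank design matrix it provably has exactly one minimizer, for any inputs, tested or not. A small sketch with invented data (an illustrative stand-in, not an example from the article):

```python
import numpy as np

# Ordinary least squares on a full-column-rank design matrix has a single,
# provably unique minimizer for any data — the kind of guarantee deep nets lack.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=100)

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)  # close to [2.0, -1.0, 0.5], and unique because X has full column rank
```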

A statistical spellchecker, which is not even close to what we call AI these days, passes all four of these tests. Lipton notes that humans pass none. Presumably, somewhere in the middle is a sweet spot of predictive power and intuitive operation. But we don’t know. The mathematicians we left scratching their heads a thousand words back can’t help you either.
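For the curious, a statistical spellchecker of the kind being gestured at fits in a couple of dozen lines, roughly in the spirit of Peter Norvig's well-known corrector (the tiny corpus below is invented): word counts, an edit generator, and an argmax, each of which can be read and simulated separately.

```python
import re
from collections import Counter

# Miniature statistical spellchecker; the corpus is invented for illustration.
CORPUS = "the duck quacks the dog barks the duck waddles the dog runs"
WORDS = Counter(re.findall(r"[a-z]+", CORPUS.lower()))
LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from `word`."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    swaps = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in LETTERS]
    inserts = [l + c + r for l, r in splits for c in LETTERS]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    # Prefer known words, then known words one edit away, weighted by frequency.
    candidates = ({word} & WORDS.keys()) or (edits1(word) & WORDS.keys()) or {word}
    return max(candidates, key=WORDS.get)

print(correct("ducke"))  # -> "duck"
print(correct("dog"))    # -> "dog" (already known)
```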
Artificial humility

What we do know is that asking for “just so” narrative explanations from AI is not going to work....

MORE: https://www.ribbonfarm.com/2018/03/13/justifiable-ai/