
AI succeeds without need for understanding, theory, causation, views about being, etc.

#1
C C
The dark secret at the heart of artificial intelligence
https://www.technologyreview.com/2017/04...art-of-ai/

EXCERPT: ... a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey. The experimental vehicle [...] was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

Getting a car to drive this way was an impressive feat. But it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions. Information from the vehicle’s sensors goes straight into a huge network of artificial neurons that process the data and then deliver the commands ... The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
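To make the article's "taught itself to drive by watching a human" concrete, here is a minimal behaviour-cloning sketch in Python. Everything in it is synthetic and illustrative (the sensor features, the "human" steering data, the tiny network); a production system is vastly larger, but the opacity problem is already visible at this scale.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Pretend sensor snapshots: 8 numbers per frame (lane offset, distances, etc.).
X = rng.normal(size=(1000, 8))
# Pretend recorded human steering for those frames; the learner never sees
# the underlying "policy", only these input/output pairs.
y = np.tanh(X @ rng.normal(size=8))

# One hidden layer of 32 tanh units; all learned behaviour lives in these weights.
W1 = rng.normal(size=(8, 32)) * 0.3
W2 = rng.normal(size=(32, 1)) * 0.3

for _ in range(2000):  # plain gradient descent on squared imitation error
    H = np.tanh(X @ W1)
    err = H @ W2 - y[:, None]
    W2 -= 0.1 * H.T @ err / len(X)
    W1 -= 0.1 * X.T @ ((err @ W2.T) * (1 - H**2)) / len(X)

print("imitation error:", float(np.mean((np.tanh(X @ W1) @ W2 - y[:, None])**2)))
# The network now steers roughly like the demonstrator on similar frames,
# yet "why this angle?" has no answer beyond 8*32 + 32 learned numbers.
[/code]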

The mysterious mind of this vehicle points to a looming issue with artificial intelligence. The car’s underlying AI technology, known as deep learning, has proved very powerful at solving problems in recent years, and it has been widely deployed [...] There is now hope that the same techniques will be able to diagnose deadly diseases, make million-dollar trading decisions, and do countless other things to transform whole industries.

But this won’t happen—or shouldn’t happen—unless we find ways of making techniques like deep learning more understandable to their creators and accountable to their users. Otherwise it will be hard to predict when failures might occur—and it’s inevitable they will... (MORE - details)



Our machines now have knowledge we’ll never understand
https://www.wired.com/story/our-machines...nderstand/

EXCERPT: "The new availability of huge amounts of data, along with the statistical tools to crunch these numbers, offers a whole new way of understanding the world. Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all."

So wrote Wired’s Chris Anderson in 2008. It kicked up a little storm at the time [...] For example, an article in a journal of molecular biology asked, “…if we stop looking for models and hypotheses, are we still really doing science?” The answer clearly was supposed to be: “No.”

But today [...] the controversy sounds quaint. Advances in computer software, enabled by our newly capacious, networked hardware, are enabling computers not only to start without models — rule sets that express how the elements of a system affect one another — but to generate their own, albeit ones that may not look much like what humans would create. It’s even becoming a standard method, as any self-respecting tech company has now adopted a “machine-learning first” ethic.

We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.
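A toy illustration of that point, under openly artificial assumptions: a k-nearest-neighbours predictor (my choice of example, not the article's) that forecasts a hidden mechanism accurately while containing no equation, mechanism, or causal story to inspect.

[code]
import numpy as np

rng = np.random.default_rng(1)

# A "world" with a hidden law the learner is never told about.
x = rng.uniform(0, 10, size=500)
y = np.sin(x) + 0.05 * rng.normal(size=500)  # hidden mechanism + noise

def knn_predict(query, k=5):
    """Average the k closest observations: pure correlation, zero theory."""
    idx = np.argsort(np.abs(x - query))[:k]
    return y[idx].mean()

q = 3.3
print("prediction:", knn_predict(q), " hidden truth:", np.sin(q))
[/code]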

But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it. (MORE - details)



Will brains or algorithms rule the kingdom of science? (2020)
https://aeon.co/essays/will-brains-or-al...of-science

EXCERPT: A schism is emerging in the scientific enterprise. On the one side is the human mind, the source of every story, theory and explanation that our species holds dear. On the other stand the machines, whose algorithms possess astonishing predictive power but whose inner workings remain radically opaque to human observers. [...] We appear to have reached a limit at which understanding and prediction – mechanisms and models – are falling out of alignment. In Bacon and Newton’s era, world accounts that were tractable to a human mind, and predictions that could be tested, were joined in a virtuous circle. Compelling theories, backed by real-world observations, have advanced humanity’s understanding of everything ...

[...] Paradoxes and their perceptual cousins, illusions, offer two intriguing examples of the tangled relationship between prediction and understanding. Both describe situations where we thought we understood something, only to be confronted with anomalies. Understanding is less well-understood than it seems. Some of the best-known visual illusions ‘flip’ between two different interpretations of the same object ... We know that objects in real life don’t really switch on a dime like this, and yet that’s what our senses are telling us. Ludwig Wittgenstein, who was obsessed with the duck-rabbit illusion, suggested that interpretation comes first and seeing follows, rather than understanding arriving only after an object has been seen. What we see is what we expect to see.

The cognitive scientist Richard Gregory, in his wonderful book "Seeing Through Illusions" (2009), calls them ‘strange phenomena of perception that challenge our sense of reality’. He explains how they happen because our understanding is informed by the predictions of multiple different rule systems, applied out of context [...] Paradoxes, like illusions, force intuition to collide with apparently basic facts about the world. Paradoxes are conclusions of valid arguments or observations that seem self-contradictory or logically untenable. They emerge frequently in the natural sciences – most notably in physics, in both its philosophical and scientific incarnations.

Even machines can suffer from paradoxes. ‘Simpson’s paradox’ describes the way that a trend appearing independently in several data-sets can disappear or even reverse when they’re combined – which means one data-set can be used to support competing conclusions. This occurs frequently in sport: a player can out-hit a rival in each individual season, yet trail once the seasons are pooled, because of absolute differences such as total games played or times at bat. There is also the ‘accuracy paradox’, where models seem to perform well for completely circular reasons – that is, their solutions are essentially baked into their samples. This lies behind numerous examples of algorithmic bias, in which racial and gender minorities are frequently misclassified because the training data used as a benchmark for accuracy comes from our own biased and imperfect world.
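A quick numeric sketch of Simpson's paradox; the players and figures below are made up purely to exhibit the reversal.

[code]
# (hits, at-bats) per season for two hypothetical players
player_A = {"season 1": (10, 40), "season 2": (180, 570)}
player_B = {"season 1": (100, 390), "season 2": (45, 140)}

for season in ("season 1", "season 2"):
    (ah, aab), (bh, bab) = player_A[season], player_B[season]
    print(season, f" A: {ah/aab:.3f}  B: {bh/bab:.3f}")  # B leads both seasons

ah = sum(h for h, _ in player_A.values()); aab = sum(ab for _, ab in player_A.values())
bh = sum(h for h, _ in player_B.values()); bab = sum(ab for _, ab in player_B.values())
print("combined", f" A: {ah/aab:.3f}  B: {bh/bab:.3f}")  # A leads when pooled
# The reversal comes from the absolute differences the essay mentions:
# A's at-bats are concentrated in the stronger season.
[/code]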

Perhaps the most rigorous work on paradox was pursued by Kurt Gödel in On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931). Gödel discovered that any consistent formal system expressive enough to encode arithmetic contains statements that can be neither proved nor refuted from the axioms of the system itself, and it is this gap between what the rules can reach and what is nonetheless true that underlies the experience of paradox. Gödel’s basic insight was that any system of rules has a natural domain of application – but when rules are applied to inputs that do not share the structure that guided the rules’ development, we can expect weirdness.
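For reference, a standard textbook statement of the first incompleteness theorem (my formulation, not the essay's; Q is Robinson arithmetic):

[code]
% First incompleteness theorem, modern form
\text{If } T \text{ is consistent, effectively axiomatized, and } T \supseteq \mathsf{Q}, \\
\text{then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T.
[/code]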

This is exactly what can happen with adversarial neural networks, where two algorithms compete to win a game. One network might be trained to recognise one set of objects, such as stop signs. Meanwhile, its opponent might make some malicious minor modifications to a fresh data-set – say, stop signs with a few pixels moved about – leading the first network to categorise these images as anything from right-turn signs to speed-limit signs. Adversarial classifications look like extreme foolishness from a human point of view. But, as Gödel would appreciate, they might be perfectly natural errors from the perspective of the invisible rule-systems encoded in the neural network.
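The essay doesn't name a particular attack, but the "few pixels moved about" modification is typically implemented as something like the fast gradient sign method (FGSM): nudge every input a tiny amount in whichever direction most increases the classifier's loss. A toy version on a linear model, with synthetic vectors standing in for sign images:

[code]
import numpy as np

rng = np.random.default_rng(2)

# Toy "images": 100-pixel vectors; labels come from a hidden linear rule.
X = rng.normal(size=(400, 100))
labels = (X @ rng.normal(size=100) > 0).astype(float)

# Train a logistic "stop sign vs not" classifier by gradient descent.
w = np.zeros(100)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - labels) / len(X)

predict = lambda v: 1 / (1 + np.exp(-(v @ w)))
x = X[0]
print("label:", labels[0], " clean confidence:", predict(x))

# FGSM-style step: for a linear logit the loss gradient w.r.t. the input is
# proportional to w, so push each pixel by eps in the sign of that gradient.
eps = 0.25
direction = np.sign(w) * (1.0 if labels[0] == 0 else -1.0)
x_adv = x + eps * direction
print("adversarial confidence:", predict(x_adv))
# The probability is shoved toward the wrong class by changes of at most
# 0.25 per pixel: tiny everywhere, decisive in aggregate.
[/code]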

[...] Neural networks capture the bind that contemporary science confronts. They show how complicated models that include little or no structured data about the systems they represent can still outperform theories based on decades of research and analysis. In this respect, the insights from speech recognition mirror those gleaned from training computers to beat humans at chess and Go: the representations and rules of thumb favoured by machines need not reflect the representations and rules of thumb favoured by the human brain. Solving chess solves chess, not thought.

[...] Data can be acquired without explanation and without understanding. The very definition of a bad education is simply to be drilled with facts: as in learning history by rote-memorising dates and events. But true understanding is the expectation that other human beings, or agents more generally, can explain to us how and why their methods work. We require some means of replicating an idea and of verifying its accuracy. This requirement extends to nonhuman devices that purport to be able to solve problems intelligently. Machines need to be able to give an account of what they’ve done, and why.
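One crude but concrete way a machine can "give an account of what it's done" is permutation importance: shuffle one input at a time and report how much the model's error rises. This is a generic, standard diagnostic sketched from scratch on a toy linear model, not a proposal from the essay itself.

[code]
import numpy as np

rng = np.random.default_rng(3)

X = rng.normal(size=(300, 4))
y = 3 * X[:, 0] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=300)  # hidden truth

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # the stand-in "machine"
base = np.mean((X @ beta - y) ** 2)

# The error increase when a feature is scrambled is a rough, checkable
# statement of how much the model's behaviour depends on that feature.
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    print(f"feature {j}: error rises by {np.mean((Xp @ beta - y)**2) - base:.3f}")
[/code]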

[...] The prismatic writer Jorge Luis Borges could have been reflecting on all this when he wrote in the essay ‘History of the Echoes of a Name’ (1955):

Isolated in time and space, a god, a dream, and a man who is insane and aware of the fact repeat an obscure statement. Those words, and their two echoes, are the subject of these pages.

Let’s say that the god is the Universe, the dream our desire to understand, and the machines are the insane man, all repeating their obscure statements. Taken together, their words and echoes are the system of our scientific enquiry. It is the challenge of the 21st century to integrate the sciences of complexity with machine learning and artificial intelligence. The most successful forms of future knowledge will be those that harmonise the human dream of understanding with the increasingly obscure echoes of the machines... (MORE - details)