Article: Artificial intelligence is ironically the end of reason

C C

EXCERPTS: . . . AI designers understand at an abstract level what products like ChatGPT do [...] But take any one result, any output, and even the very people who designed it are unable to explain why an AI program produced the results it did. This is why many advanced AI models, particularly deep learning models, are often described as “black boxes”: we know what goes in (the data they are trained on and the prompts) and we know what comes out, but we have no idea what really goes on inside them.

[...] Often referred to as the issue of interpretability, this is a significant challenge in the field of AI because it means that the system is unable to explain or make clear the reasons behind its decisions, predictions, or actions. ... When there’s no possibility of an explanation behind life-changing decisions, when the reasoning of machines (if any) is as opaque to the humans who make them as to the humans whose lives they alter, we are left with a bare “Computer says no” answer. When we don’t know the reason something’s happened, we can’t argue back, we can’t challenge, and so we enter the Kafkaesque realm of the absurd in which reason and rationality are entirely absent. This is the world we are sleepwalking towards.

[...] If you’re tuned into philosophy debates of the 20th century involving post-modern thinkers like Foucault and Derrida, or even Critical Theory philosophers like Adorno and Horkheimer, perhaps you think the age of “Reason” is already over, or that it never really existed – it was simply another myth of the Enlightenment. ... But we are currently entering an era that will make even the harshest critics of reason nostalgic for the good old days.

[...] Being able to offer justifications for our beliefs and our actions is key to who we are as a social species. It’s something we are taught as children. We have an ability to explain to others why we think what we think and why we do what we do. In other words, we are not black boxes: if asked, we can show our reasoning process to others.

[...] It’s often argued that this opaqueness is an intrinsic feature of AI and can’t be helped. But that’s not the case. Recognizing the issues that arise from outsourcing important decisions to machines without being able to explain how those decisions were arrived at has led to an effort to produce so-called Explainable AI: AI that is capable of explaining the rationale and results of what would otherwise be opaque algorithms.
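To make the Explainable AI idea concrete, here is a minimal illustrative sketch (not from the article): it probes a stand-in "black box" decision function using permutation importance, one of the simpler post-hoc explanation techniques. Everything here is invented for illustration — the loan-style features, the weights, and the `black_box` function are assumptions, standing in for a genuinely opaque model such as a deep network.

```python
import random

# Hypothetical black-box model: approves an application based on a
# weighted score of (income, debt, age). In practice this would be
# an opaque neural network; the weights here are invented so we
# have something to probe.
def black_box(income, debt, age):
    score = 1.0 * income - 2.0 * debt + 0.1 * age
    return score > -0.45

random.seed(0)
data = [(random.random(), random.random(), random.random())
        for _ in range(500)]
baseline = [black_box(*row) for row in data]

# Permutation importance: shuffle one feature column at a time and
# count how often the model's decision flips. Features whose
# shuffling flips many decisions matter more to the model's output.
def importance(col):
    shuffled = [row[col] for row in data]
    random.shuffle(shuffled)
    flips = 0
    for row, new_val, old_out in zip(data, shuffled, baseline):
        probe = list(row)
        probe[col] = new_val
        if black_box(*probe) != old_out:
            flips += 1
    return flips / len(data)

for name, col in [("income", 0), ("debt", 1), ("age", 2)]:
    print(f"{name}: {importance(col):.2f}")
```

The point of the sketch: even without opening the box, a post-hoc probe like this can report that the debt feature drives far more decision flips than age, which is the kind of rationale Explainable AI aims to surface for affected people.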

The currently available versions of Explainable AI, however, are not without their problems...

[...] Another problem with Explainable AI is the widely held belief that the more explainable an algorithm is, the less accurate it is. The argument is that more accurate algorithms tend to be more complex, and hence, by definition, harder to explain... (MORE - missing details)
