The symbol grounding problem

#1
C C
http://what-when-how.com/artificial-inte...elligence/

EXCERPT: The symbol grounding problem arose from the observation that symbol systems manipulate structures that can be associated with things in the world by an observer operating the system, but not by the system itself. The quest for symbol grounding processes is concerned with understanding processes that could connect these purely symbolic representations with what they actually represent, whether directly or by means of other grounded representations....


http://www.scholarpedia.org/article/Symb...ng_problem

EXCERPT: The Symbol Grounding Problem is related to the problem of how words (symbols) get their meanings, and hence to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of consciousness, or how it is that mental states are meaningful. According to a widely held theory of cognition, "computationalism," cognition (i.e., thinking) is just a form of computation. But computation in turn is just formal symbol manipulation: symbols are manipulated according to rules that are based on the symbols' shapes, not their meanings. How are those symbols (e.g., the words in our heads) connected to the things they refer to? It cannot be through the mediation of an external interpreter's head, because that would lead to an infinite regress, just as my looking up the meanings of words in a (unilingual) dictionary of a language that I do not understand would lead to an infinite regress. The symbols in an autonomous hybrid symbolic+sensorimotor system -- a Turing-scale robot consisting of both a symbol system and a sensorimotor system that reliably connects its internal symbols to the external objects they refer to, so it can interact with them Turing-indistinguishably from the way a person does -- would be grounded. But whether its symbols would have meaning rather than just grounding is something that even the robotic Turing Test -- hence cognitive science itself -- cannot determine, or explain....
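
To make the dictionary-regress point concrete, here is a toy sketch in Python. The mini-dictionary, the function name trace_meaning, and the sample words are all made up for illustration; the only point carried over from the excerpt is that a lookup that yields nothing but more symbols either dead-ends or loops, and never bottoms out in anything grounded.

CODE SKETCH (Python):

# A purely hypothetical mini-dictionary: every entry is defined only in
# terms of other entries, so no symbol ever connects to anything outside
# the symbol system itself.
mini_dictionary = {
    "zebra": ["horse", "striped"],
    "horse": ["animal"],
    "animal": ["living", "thing"],
    "thing": ["object"],
    "object": ["thing"],   # circular: "thing" and "object" define each other
}

def trace_meaning(word, seen=None):
    """Follow definitions looking for something grounded; we never find it."""
    if seen is None:
        seen = set()
    if word in seen:
        return f"regress: '{word}' already visited, still no grounding"
    seen.add(word)
    definition = mini_dictionary.get(word)
    if definition is None:
        return f"'{word}' dead-ends in another bare, undefined symbol"
    # Recursing only ever yields more symbols, never a connection to the world.
    return {w: trace_meaning(w, seen) for w in definition}

print(trace_meaning("zebra"))

However long the chain of definitions, the lookup stays inside the symbol system; that is the regress the excerpt describes.
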


http://users.ecs.soton.ac.uk/harnad/Pape...oblem.html

EXCERPT: There has been much discussion recently about the scope and limits of purely symbolic models of the mind and about the proper role of connectionism in cognitive modeling. This paper describes the "symbol grounding problem": How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) "iconic representations", which are analogs of the proximal sensory projections of distal objects and events, and (2) "categorical representations", which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) "symbolic representations", grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., "An X is a Y that is Z"). Connectionism is one natural candidate for the mechanism that learns the invariant features underlying categorical representations, thereby connecting names to the proximal projections of the distal objects they stand for. In this way connectionism can be seen as a complementary component in a hybrid nonsymbolic/symbolic model of the mind, rather than a rival to purely symbolic modeling. Such a hybrid model would not have an autonomous symbolic "module," however; the symbolic functions would emerge as an intrinsically "dedicated" symbol system as a consequence of the bottom-up grounding of categories' names in their sensory representations. Symbol manipulation would be governed not just by the arbitrary shapes of the symbol tokens, but by the nonarbitrary shapes of the icons and category invariants in which they are grounded....
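
Since the abstract lays out a concrete three-level scheme (iconic, categorical, and symbolic representations), here is a minimal toy sketch of how such a hybrid grounding pipeline might be wired up. Everything in it is assumed for illustration: the function names, the single-feature threshold detector standing in for a connectionist feature learner, and the toy data. Nothing beyond the three levels themselves comes from Harnad's paper.

CODE SKETCH (Python):

import random

# (1) "Iconic representation": a stand-in for the analog sensory projection
# of a distal object; here just a noisy vector of toy feature measurements.
def sense(object_features):
    return [f + random.gauss(0.0, 0.05) for f in object_features]

# (2) "Categorical representation": a learned feature detector that picks out
# the invariant feature of a category from its sensory projections. A single
# learned threshold serves as a trivially simple stand-in for a network.
def learn_detector(positive, negative, feature_index):
    pos_mean = sum(p[feature_index] for p in positive) / len(positive)
    neg_mean = sum(n[feature_index] for n in negative) / len(negative)
    threshold = (pos_mean + neg_mean) / 2.0
    above = pos_mean > neg_mean
    return lambda proj: (proj[feature_index] > threshold) == above

# Elementary symbols are just the names of categories whose membership is
# decided by these nonsymbolic detectors.
def ground_symbol(name, detector, lexicon):
    lexicon[name] = detector

# (3) "Symbolic representation": new symbols built from already-grounded
# names, in the spirit of "An X is a Y that is Z". The composite category
# inherits its grounding from its constituents.
def define(new_name, base_name, modifier_name, lexicon):
    base = lexicon[base_name]
    modifier = lexicon[modifier_name]
    lexicon[new_name] = lambda proj: base(proj) and modifier(proj)

if __name__ == "__main__":
    random.seed(0)
    lexicon = {}

    # Toy training data: feature 0 = "horse-shaped", feature 1 = "stripedness".
    horses = [sense([1.0, random.random()]) for _ in range(20)]
    non_horses = [sense([0.0, random.random()]) for _ in range(20)]
    striped = [sense([random.random(), 1.0]) for _ in range(20)]
    plain = [sense([random.random(), 0.0]) for _ in range(20)]

    ground_symbol("horse", learn_detector(horses, non_horses, 0), lexicon)
    ground_symbol("striped", learn_detector(striped, plain, 1), lexicon)

    # Higher-order symbol grounded purely through already-grounded symbols:
    # "a zebra is a horse that is striped".
    define("zebra", "horse", "striped", lexicon)

    print(lexicon["zebra"](sense([1.0, 1.0])))  # True: horse-shaped and striped
    print(lexicon["zebra"](sense([1.0, 0.0])))  # False: horse-shaped but plain

The sketch only illustrates the bottom-up direction the abstract argues for: the name "zebra" is manipulated as a symbol, but its use is constrained by the nonsymbolic detectors it is grounded in, not by its arbitrary shape alone.
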
#2
Yazata
What's wrong with causal grounding by means of the senses?
#3
C C
That's the gist of the new revolution, in which Searle seems to have partly won and Peirce's work is very much in vogue.

The original conception of artificially intelligent systems as symbol systems brought forth a problem known as the symbol grounding problem. If symbol systems manipulate symbols, those symbols should represent something to the system itself and not be parasitic on external humans for their meaning. But the system has no way of grounding these symbols in its history of sensory and motor interaction, since it does not have one. Many researchers pointed out this key flaw, most notably John Searle (1980) with his Chinese Room Argument and Stevan Harnad (1990) with the now well-known formulation of the problem.

This pointed research toward modeling artificial systems as embodied and situated agents rather than as symbol-manipulating systems alone, urging the implementation of systems that can autonomously interact with their environment and with the things they are supposed to represent. But symbol grounding also calls for an account of how certain things come to represent other things to someone, which is the subject matter of semiotics. The semiotics of C. S. Peirce has been used as a theoretical framework for discussing the symbol grounding problem in Artificial Intelligence. Applying his theory to the symbol grounding problem should further contribute to the development of computational models of cognitive systems and to the construction of ever more meaningful machines.

