Intelligence without causal theory, laws, & other concepts for understanding...

#1
C C
...instead just mapping the probabilities of _X_ occurring in relation to other items?

Everyone is familiar with zombie consciousness: where, say, a robot -- via either conversational speech programming or navigating around obstacles in an environment -- can act as if it has thoughts, body feelings, and sensory representations of the world materializing privately to it, when in fact the device is just as blank in that respect as a passive rock. There are indeed mechanistic processes producing "awareness" behavior and responses -- but all transpiring in the "dark", absent existential manifestation as anything (visual, aural, tactile, etc.).

Now there is "zombie intelligence", where machine learning AI exhibits expertise and skill without any "real" understanding of what it is manipulating successfully.
{*} The arising of potential intellects in the cosmos that can succeed at complex levels without needing the various ideas that human understanding depends upon... Well, that might eerily raise the question of whether our everyday ideas, our deeper epistemological, ontological, and nomological orientations, and the technical descriptions of science really do have rigid counterparts in the "supposed" realm of non-invented, actual furniture that exists non-artificially and non-mentally. (As so many who reify them uncritically believe.)

- - - {*} Arguably that's also what evolution has been slowly and clumsily doing for billions of years -- a watchmaker that is blind in both conceptual and sensorial departments. But abstract evolution (as an aloof, immaterial "god" or general principle) can't realize itself as an immediate, local, distinct concrete system that can potentially communicate with you directly on specific matters.


A new AI language model generates poetry and prose
https://www.economist.com/science-and-te...-and-prose

INTRO: The SEC said, “Musk,/your tweets are a blight./They really could cost you your job,/if you don’t stop/all this tweeting at night.”/…Then Musk cried, “Why?/The tweets I wrote are not mean,/I don’t use all-caps/and I’m sure that my tweets are clean.”/“But your tweets can move markets/and that’s why we’re sore./You may be a genius/and a billionaire,/but that doesn’t give you the right to be a bore!”

THE PRECEDING lines—describing Tesla and SpaceX founder Elon Musk’s run-ins with the Securities and Exchange Commission, an American financial regulator—are not the product of some aspiring 21st-century Dr Seuss. They come from a poem written by a computer running a piece of software called Generative Pre-Trained Transformer 3. GPT-3, as it is more commonly known, was developed by OpenAI, an artificial-intelligence (AI) laboratory based in San Francisco, which Mr Musk helped found. It represents the latest advance in one of the most studied areas of AI: giving computers the ability to generate sophisticated, human-like text.

The software is built on the idea of a “language model”. This aims to represent a language statistically, mapping the probability with which words follow other words—for instance, how often “red” is followed by “rose”. The same sort of analysis can be performed on sentences, or even entire paragraphs. Such a model can then be given a prompt—“a poem about red roses in the style of Sylvia Plath”, say—and it will dig through its set of statistical relationships to come up with some text that matches the description.
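To picture what that statistical mapping means in practice, here is a minimal word-level bigram sketch in Python. The toy corpus, the `follows` table, and the `next_word` helper are illustrative assumptions only; GPT-3 itself works over subword tokens, long contexts, and billions of learned parameters rather than raw word-pair counts.

```python
# Minimal sketch of the bigram "language model" idea described above.
# The tiny corpus and variable names are illustrative only.
from collections import Counter, defaultdict
import random

corpus = "the red rose and the red fox saw the red rose".split()

# Count how often each word follows another ("red" -> "rose", etc.).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Generate a short continuation from a prompt word.
text = ["red"]
for _ in range(5):
    text.append(next_word(text[-1]))
print(" ".join(text))
```

Sampling in proportion to observed counts is what lets such a model produce text that "matches" a prompt; conditioning on much longer stretches of preceding text, rather than a single previous word, is what separates models like GPT-3 from this toy version.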

Actually building such a language model, though, is a big job. This is where AI—or machine learning, a particular subfield of AI—comes in. By trawling through enormous volumes of written text, and learning by trial and error from millions of attempts at text prediction, a computer can crunch through the laborious task of mapping out those statistical relationships.
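As a rough illustration of that trial-and-error loop, the sketch below repeatedly asks a toy model to predict the next word, scores the guess against the word that actually followed, and nudges its parameters accordingly. The corpus, learning rate, and step count are made-up stand-ins for the vastly larger text and compute involved in training GPT-3.

```python
# Hedged sketch of learning by trial and error at next-word prediction:
# predict, measure the error, and adjust parameters so the observed word
# becomes more likely next time. Corpus and hyperparameters are illustrative.
import math
import random
from collections import defaultdict

corpus = "the red rose and the red fox saw the red rose".split()
vocab = sorted(set(corpus))

# One adjustable "parameter" per (previous word, candidate next word) pair.
logits = defaultdict(float)

def probs(prev):
    """Softmax over the scores of every candidate next word."""
    exps = {w: math.exp(logits[(prev, w)]) for w in vocab}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

lr = 0.5
for step in range(2000):                      # "millions of attempts", scaled down
    i = random.randrange(len(corpus) - 1)
    prev, target = corpus[i], corpus[i + 1]
    p = probs(prev)
    # Gradient of the cross-entropy loss: raise the observed word, lower the rest.
    for w in vocab:
        grad = p[w] - (1.0 if w == target else 0.0)
        logits[(prev, w)] -= lr * grad

# After training, "red" should strongly favour "rose" over "fox".
print(sorted(probs("red").items(), key=lambda kv: -kv[1])[:3])
```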

The more text to which an algorithm can be exposed, and the more complex you can make the algorithm, the better it performs. And what sets GPT-3 apart is its unprecedented scale. The model that underpins GPT-3 boasts 175bn parameters, each of which can be individually tweaked—an order of magnitude larger than any of its predecessors. It was trained on the biggest set of text ever amassed, a mixture of books, Wikipedia and Common Crawl, a set of billions of pages of text scraped from every corner of the internet... (MORE)