Research  AI can 'lie and BS' like its maker + AI self-organizes to develop features of brains

#1
C C
AI system self-organizes to develop features of brains of complex organisms
https://www.eurekalert.org/news-releases/1008361

INTRO: Cambridge scientists have shown that placing physical constraints on an artificially-intelligent system – in much the same way that the human brain has to develop and operate within physical and biological constraints – allows it to develop features of the brains of complex organisms in order to solve tasks.

As neural systems such as the brain organise themselves and make connections, they have to balance competing demands. For example, energy and resources are needed to grow and sustain the network in physical space, while at the same time optimising the network for information processing. This trade-off shapes all brains within and across species, which may help explain why many brains converge on similar organisational solutions.

Jascha Achterberg, a Gates Scholar from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge said: “Not only is the brain great at solving complex problems, it does so while using very little energy. In our new work we show that considering the brain’s problem solving abilities alongside its goal of spending as few resources as possible can help us understand why brains look like they do.”

Co-lead author Dr Danyal Akarca, also from the MRC CBSU, added: “This stems from a broad principle, which is that biological systems commonly evolve to make the most of what energetic resources they have available to them. The solutions they come to are often very elegant and reflect the trade-offs between various forces imposed on them.”

In a study published today in Nature Machine Intelligence, Achterberg, Akarca and colleagues created an artificial system intended to model a very simplified version of the brain and applied physical constraints. They found that their system went on to develop certain key characteristics and tactics similar to those found in human brains... (MORE - details, no ads)


AI can 'lie and BS' like its maker, but still not intelligent like humans
https://www.uc.edu/news/articles/2023/11...umans.html

PRESS RELEASE: The emergence of artificial intelligence has caused differing reactions from tech leaders, politicians and the public. While some excitedly tout AI technology such as ChatGPT as an advantageous tool with the potential to transform society, others are alarmed that any tool with the word “intelligent” in its name also has the potential to overtake humankind.

The University of Cincinnati’s Anthony Chemero, a professor of philosophy and psychology in the UC College of Arts and Sciences, contends that the understanding of AI is muddled by linguistics: That while indeed intelligent, AI cannot be intelligent in the way that humans are, even though “it can lie and BS like its maker.”

According to our everyday use of the word, AI is definitely intelligent: there are intelligent computers, and have been for years, Chemero explains in a paper he co-authored in the journal Nature Human Behaviour (a full, free version is available). To begin, the paper states that ChatGPT and other AI systems are large language models (LLMs), trained on massive amounts of data mined from the internet, much of which shares the biases of the people who post the data.

“LLMs generate impressive text, but often make things up whole cloth,” he states. “They learn to produce grammatical sentences, but require much, much more training than humans get. They don’t actually know what the things they say mean,” he says. “LLMs differ from human cognition because they are not embodied.”

The people who made LLMs call it “hallucinating” when they make things up, although Chemero says “it would be better to call it ‘bullsh*tting,’” because LLMs just make sentences by repeatedly adding the most statistically likely next word — and they don’t know or care whether what they say is true.

And with a little prodding, he says, one can get an AI tool to say “nasty things that are racist, sexist and otherwise biased.”

The intent of Chemero’s paper is to stress that LLMs are not intelligent in the way humans are intelligent because humans are embodied: living beings who are always surrounded by other humans and material and cultural environments.

“This makes us care about our own survival and the world we live in,” he says, noting that LLMs aren’t really in the world and don’t care about anything.

The main takeaway is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Chemero says, adding “Things matter to us. We are committed to our survival. We care about the world we live in.”
#2
confused2
Quote:The main takeaway is that LLMs are not intelligent in the way that humans are because they “don’t give a damn,” Chemero says, adding “Things matter to us. We are committed to our survival. We care about the world we live in.”

In The Hitchhiker's Guide to the Galaxy, a race of hyper-intelligent beings built a supercomputer (Deep Thought) to answer the ultimate question of Life, the Universe and Everything. The philosophers objected because they feared the supercomputer would put them out of business. Forty years later we see life imitating art.

https://en.wikipedia.org/wiki/List_of_Th...ep_Thought

In later editions it seems Deep Thought was capable of design work:
Quote:It is also revealed that, in the intervening [thinking] time, Deep Thought was commissioned by an intergalactic consortium of angry housewives to create the Point of View Gun, a weapon that causes any man it is used on to see things from the firer's point-of-view, regardless of the firer's gender (due to it being originally used by those housewives having got fed up with ending arguments with their husbands with the phrase "You just don't get it, do you?").
Sometimes life imitates life.

