Research  ChatGPT “thought on the fly” when put through Ancient Greek maths puzzle

#1
C C
https://www.eurekalert.org/news-releases/1098246

INTRO: The artificial intelligence chatbot ChatGPT appeared to improvise ideas and make mistakes like a student in a study that rebooted a 2,400-year-old mathematical challenge.

The experiment, by two education researchers, asked the chatbot to solve a version of the “doubling the square” problem – a lesson described by Plato in about 385 BCE and, the paper suggests, “perhaps the earliest documented experiment in mathematics education”. The puzzle sparked centuries of debate about whether knowledge is latent within us, waiting to be ‘retrieved’, or something that we ‘generate’ through lived experience and encounters.

The new study explored a similar question about ChatGPT’s mathematical ‘knowledge’ – at least as its users can perceive it. The researchers wanted to know whether it would solve Plato’s problem using knowledge it already ‘held’, or by adaptively developing its own solutions.

Plato describes Socrates teaching an uneducated boy how to double the area of a square. At first, the boy mistakenly suggests doubling the length of each side, but Socrates eventually leads him to understand that the new square’s sides should be the same length as the diagonal of the original.
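
To see why the diagonal works, here is a quick check in Python – our own illustration of the standard geometry, not code from the study. A square of side s has area s², its diagonal is s·√2 by the Pythagorean theorem, and the square built on that diagonal has area (s·√2)² = 2s².

import math

s = 1.0                      # side of the original square
diagonal = math.hypot(s, s)  # = s * sqrt(2), by the Pythagorean theorem
print(diagonal ** 2)         # ≈ 2.0 -> the square on the diagonal doubles the area
print((2 * s) ** 2)          # 4.0 -> doubling the side quadruples it (the boy's mistake)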

The researchers put this problem to ChatGPT-4, at first imitating Socrates’ questions, and then deliberately introducing errors, queries and new variants of the problem.

Like other Large Language Models (LLMs), ChatGPT is trained on vast collections of text and generates responses by predicting sequences of words learned during its training. The researchers expected it to handle their Ancient Greek maths challenge by regurgitating its pre-existing ‘knowledge’ of Socrates’ famous solution. Instead, however, it seemed to improvise its approach and, at one point, also made a distinctly human-like error.
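
As a loose illustration of that “predict the next word” principle, here is a toy bigram model in Python – a deliberately crude sketch of the idea, nothing like GPT-4’s actual architecture, scale or training data:

import random
from collections import Counter, defaultdict

# Tiny "training corpus"; real LLMs are trained on vast collections of text.
text = "the square on the diagonal is double the square on the side"
words = text.split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Sample a continuation in proportion to how often it followed `word`."""
    options, weights = zip(*follows[word].items())
    return random.choices(options, weights=weights)[0]

print(predict_next("the"))  # e.g. "square", "diagonal" or "side"

The point is only that output is assembled from statistics learned in training – which is why the researchers expected regurgitation of the famous solution rather than improvisation.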

The study was conducted by Dr Nadav Marco, a visiting scholar at the University of Cambridge, and Andreas Stylianides, Professor of Mathematics Education at Cambridge. Marco is permanently based at the Hebrew University and David Yellin College of Education, Jerusalem.

While they are cautious about the results, stressing that LLMs do not think like humans or ‘work things out’, Marco did characterise ChatGPT’s behaviour as “learner-like”... (MORE - details, no ads)
#2
Syne
If you ask intentionally leading questions, LLMs will always follow along. They are not “motivated” by “being right” or “getting it right.” The only motivation is that given to them by the user, misleading and all.