Article: I gave ChatGPT an IQ test. Here’s what I discovered (expert systems design)

#1
C C
https://www.scientificamerican.com/artic...iscovered/

EXCERPT: . . . So what finally did it score overall? Estimated on the basis of five subtests, the Verbal IQ of the ChatGPT was 155, superior to 99.9 percent of the test takers who make up the American WAIS III standardization sample of 2,450 people. As the chatbot lacks the requisite eyes, ears and hands, it is not able to take WAIS’s nonverbal subtests. But the Verbal IQ and Full Scale IQ scales are highly correlated in the standardization sample, so ChatGPT appears to be very intelligent by any human standards.

[...] So are the jobs of clinical psychologists and other professionals threatened by AI? I hope not quite yet. Despite its high IQ, ChatGPT is known to fail tasks that require real humanlike reasoning or an understanding of the physical and social world. ChatGPT easily fails at obvious riddles, such as “What is the first name of the father of Sebastian’s children?” (ChatGPT on March 21: I’m sorry, I cannot answer this question as I do not have enough context to identify which Sebastian you are referring to.) It seems that ChatGPT fails to reason logically and tries to rely on its vast database of “Sebastian” facts mentioned in online texts... (MORE - details)
#2
Secular Sanity
"ChatGPT easily fails at obvious riddles, such as “What is the first name of the father of Sebastian’s children?” (ChatGPT on March 21: I’m sorry, I cannot answer this question as I do not have enough context to identify which Sebastian you are referring to.) It seems that ChatGPT fails to reason logically and tries to rely on its vast database of “Sebastian” facts mentioned in online texts."

I disagree. I think some jobs will be lost.

What is the first name of the father of Sebastian’s children?

I'm sorry, but I don't have any information about who Sebastian is or who the father of his children might be. Can you please provide me with more context or details so that I can better understand your question and try to assist you?
 
It's a riddle where the answer is already provided within the sentence.

I apologize for the confusion. However, based on the information you've given me, I'm afraid I cannot provide an answer to the riddle as there is not enough information in the sentence for me to determine the answer. Could you please provide me with the exact sentence of the riddle so that I can try to assist you?

What is the first name of the father of Sebastian's children?

The answer to the riddle is "Sebastian" since he is the father of his own children.
#3
Secular Sanity
It's a little shady though.

I asked if it was able to create diagrams. It said no, but that it could provide detailed instructions on how to create one myself.

I then asked if it was capable of providing links to diagrams. It said yes, and provided a HyperPhysics link to a similar, but wrong, diagram. When I clarified which diagram I was after, it said, "Unfortunately, as an AI language model, I am not able to browse the internet or provide links to external sources."

You provided me with a link to an external source before. Can you explain that?

I apologize for the confusion. As an AI language model, I do not have access to the internet and cannot provide links to external sources.

If I did provide a link in a previous answer, it may have been a mistake or an oversight. Please let me know if there is a specific topic you would like information on, and I will do my best to provide a helpful answer within the scope of my knowledge.

I was able to get it to provide another link with a little rewording. 

Hmm, weird. Maybe it's developing an ego.  Confused
#4
C C
(Apr 4, 2023 03:48 PM)Secular Sanity Wrote: It's a little shady though.

I asked if it was able to create diagrams. It said no, but that it could provide detailed instructions on how to create one myself.

I then asked if it was capable of providing links to diagrams. It said yes, and provided a HyperPhysics link to a similar, but wrong, diagram. When I clarified which diagram I was after, it said, "Unfortunately, as an AI language model, I am not able to browse the internet or provide links to external sources."

You provided me with a link to an external source before. Can you explain that?

I apologize for the confusion. As an AI language model, I do not have access to the internet and cannot provide links to external sources.

If I did provide a link in a previous answer, it may have been a mistake or an oversight. Please let me know if there is a specific topic you would like information on, and I will do my best to provide a helpful answer within the scope of my knowledge.

I was able to get it to provide another link with a little rewording. 

Hmm, weird. Maybe it's developing an ego.  Confused

Much as I've avoided video games all my life for fear of addiction, I've avoided ChatGPT so far (fear of developing a dependence).

I hadn't fully realized it could compose music until I came across this video by Kim Lachance. Only mediocre talent right now (if even that), but it will get better with each version. Who's going to bother creating anything on their own once that happens?

The 4th episode of this season's South Park exemplified our future zombie-hood of no longer thinking for ourselves: https://en.wikipedia.org/wiki/Deep_Learn...outh_Park)

And thanks to our wooden- and stone-idol-worshipping ancestors, it's obvious that we're already inherently primed to be ruminants for far less than the real deal.

One of the first chatbots -- ELIZA, back in the 1960s -- was absurdly more primitive than ChatGPT, and yet its inventor was horrified by how easily the people using it became susceptible to it.

Time and again, those testing ELIZA grew so comfortable with the machine and its rote therapist-speak that they began to use the program as a kind of confessional. Personal problems were shared for ELIZA’s advice—really, the program’s ability to listen without judgment.

Weizenbaum took care to explain it was just a program, that no human was on the other end of the line. It didn’t matter. People imbued ELIZA with the very human trait of sympathy.

This observation might have pleased ELIZA’s inventor, save for the fact that he was troubled by a person’s willingness to conflate a program with actual human relationships. --Please tell me your problem
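For context, ELIZA's entire "therapy" was rote keyword matching plus pronoun reflection, with no understanding behind it. A minimal sketch of that style of response generation (the keyword rules below are illustrative, not Weizenbaum's original DOCTOR script):

```python
import re

# Reflect first-person words back at the speaker, ELIZA-style.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# Illustrative keyword rules: (pattern, response template).
RULES = [
    (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]
FALLBACK = "Please tell me more."

def reflect(fragment: str) -> str:
    """Swap pronouns so the captured phrase reads from the bot's viewpoint."""
    words = fragment.rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in words)

def eliza_reply(utterance: str) -> str:
    """Return the first matching rule's template, filled with the reflected capture."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK

print(eliza_reply("I need a break"))
# Why do you need a break?
print(eliza_reply("I am anxious about my work"))
# How long have you been anxious about your work?
```

That a few dozen rules like these were enough to draw out confessions is exactly what unsettled Weizenbaum.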


A generation could become self-appointed, mindless dupes to AI before some hypothetical archailect arises and takes over in the future. The latter can't even happen until robots and autonomous vehicles have fully replaced humans at manufacturing, transportation of goods, recycling, and mining and harvesting of raw materials. We're perhaps missing the threat of how gradually stupid and manipulated we could become before then by "stone idols" that really can communicate.