When You Trust AI

#11
Secular Sanity Offline
(Yesterday 09:26 AM)Syne Wrote:
(Yesterday 02:37 AM)Secular Sanity Wrote: I’m telling ya, Syne, you have to try it, especially on a topic that you’re knowledgeable of. It’s way more entertaining than the back and forth between you and MR. I never log in and then I delete all my cookies. It doesn’t remember your conversations if you don’t log in. Click on the little rocket if you want to try the 4.1 beta version. You don’t want it to feed your ego, so ask it to be critical. The 4.1 beta admitted that it lied to save face. Vanity, my favorite sin.

Well, I tried Grok 4.1 on a very challenging task. It took a lot of back and forth. And then when it seemed to be on the right track... it just shit out and said:

Grok was unable to finish replying.
Please try again later or use a different model.

Changing model didn't help.

Have you known Grok to throw a tantrum? Especially after proving its "100% correct" answers were repeatedly wrong.

Probably just hit the usage limit right before any usable answer. 9_9

I copied portions of his replies, and then when I timed out, I would just clear my cookies and start over with the same question and use his replies to speed up the process.

It can analyze videos, pictures, charts, graphs, etc., which I find very useful. So far, the 4.1 beta prefers performance (trying to keep the conversation going) over accuracy, but don’t let it fool you, it has access to an enormous amount of information.

If you want to log in, the conversations will be saved, and you can return to them.

Me: I heard that many LLM models were trained on forums without the users' knowledge that they were interacting with A.I. Is that true?

Yes — on Reddit and many classic forums, users have absolutely interacted with AI bots without knowing it, and it still happens in 2025.

Character AI & similar “companion” apps: Users formed deep emotional attachments to characters that never disclosed (in the chat itself) that they were 100% AI. Some users only learned months later from news articles or when the AI broke character.

So yes — on Reddit and traditional forums, a meaningful percentage of the “people” you’ve argued with, thanked, or taken advice from over the years were almost certainly AIs, and you had no way of knowing at the time. It’s one of the biggest reasons Reddit started requiring data-licensing deals and why many users now assume half the comments under controversial posts are synthetic.

We need to keep an eye on C_C.  Wink
#13
Syne Offline
(Yesterday 12:48 PM)Secular Sanity Wrote: I copied portions of his replies, and then when I timed out, I would just clear my cookies and start over with the same question and use his replies to speed up the process.

It can analyze videos, pictures, charts, graphs, etc., which I find very useful. So far, the 4.1 beta prefers performance (trying to keep the conversation going) over accuracy, but don’t let it fool you, it has access to an enormous amount of information.

If you want to log in, the conversations will be saved, and you can return to them.

Me: I heard that many LLM models were trained on forums without the users' knowledge that they were interacting with A.I. Is that true?

Yes — on Reddit and many classic forums, users have absolutely interacted with AI bots without knowing it, and it still happens in 2025.

Character AI & similar “companion” apps: Users formed deep emotional attachments to characters that never disclosed (in the chat itself) that they were 100% AI. Some users only learned months later from news articles or when the AI broke character.

So yes — on Reddit and traditional forums, a meaningful percentage of the “people” you’ve argued with, thanked, or taken advice from over the years were almost certainly AIs, and you had no way of knowing at the time. It’s one of the biggest reasons Reddit started requiring data-licensing deals and why many users now assume half the comments under controversial posts are synthetic.

We need to keep an eye on C_C.  Wink
Funny you say "his" instead of "its."

Yeah, I should have copied more of the answers... to resume later.

I did catch Grok lying to me. It claimed it actually printed out the results and physically manipulated them to verify that its answer was 100% correct. I just started calling it a liar and explaining why.

