GPT-4.5 has achieved a milestone in AI evolution.

bobcat

A UC San Diego study ran a Turing test in an online hub with roughly 300 participants. In each round, a participant was randomly assigned to be either the interrogator or one of the witnesses being interrogated, with the remaining witness being a chatbot; eight rounds were played.
In the first series, the LLM chatbots were told only that they would be taking part in a Turing test and that the goal was to convince the interrogator they were human. In that series, GPT-4.5 was judged more convincing than the actual human only a little over a third of the time.

In the next series, the chatbot was instructed to adopt a specific persona, such as a young person knowledgeable about the internet and online culture.
Remarkably, in this series GPT-4.5 was voted more convincing than the human witness 73% of the time; it was judged to be human significantly more often than the real human was.

Granted, this is only a single test, and it should by no means suggest that an AI is capable of surpassing human mental abilities, but it certainly indicates that specific LLMs are becoming master conversationalists. It's intriguing, and I'm sure this will be getting lots of attention in the ever-evolving perception of technology and the future.
 

I wonder if a chatbot could run the country. It would save even more money if all congressional seats were filled by chatbots. No more debates. Just an evaluation of actual data and no politics to get in the way. Of course, none of them would pass for politicians unless they were programmed to speak a high percentage of nonsense, or people would figure it out.
 

I think it would have to go further than that: it would have to be programmed to completely avoid the question, then give an answer to a question that hasn't been asked.

A chatbot might need a subroutine that constantly monitors and calculates public approval in real time, just in case it ever accidentally said something honest.

UPDATE:
We could think about this forever, adding more subroutines based on our experience of listening to some politicians, and fine-tune it. Give it an 'A.S.M.', an Artificial Selective Memory: able to completely forget what it said last week if public opinion has shifted. Maybe even a firewall to protect it against any malicious accountability. With this level of fine-tuning, and more, it might even end up getting reelected!

I think I might have read too many 'Private Eye' magazines.
 

I wonder if AI/bots can go "insane"... we might have to find out the hard way. 😅
Actually, I've been watching a series called "Humans", which is centered on AIs gaining consciousness and everything that results from it. It's on Amazon Prime, but may be available on other streaming platforms, IDK. It is certainly thought-provoking.
 
I already know people who sound like computers on the phone and in texts. It's easy to fool people; just look at texts from businesses: clipped speech and terse language.
 

