OpenAI's GPT-4.5 model is more human-like than humans, results of a recent Turing Test—a benchmark for assessing human-like intelligence—show.
The findings of the study, which is still in the preprint stage, showed that the large language model was identified as human 73% of the time when it was instructed to adopt a persona—significantly higher than the 50% expected by random chance.
The study was conducted by researchers at the University of California, San Diego. Lead author Cameron Jones said nearly 300 people were involved in the experiment. Participants were randomly selected to either be the interrogator or one of two "witnesses" in a conversation—one real human and one AI model.
"People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt)," wrote lead author Cameron Jones, a researcher at UC San Diego's Language and Cognition Lab.
Jones further said that the findings suggest large language models could take the place of humans in "short interactions without anyone being able to tell".
What Is Turing Test?
The Turing Test, created by British computer scientist Alan Turing in 1950, is a way to check if a machine can act like a human in conversation. If someone can’t tell whether they are chatting with a machine or a real person, the machine is said to have passed the test.
Study Methodology
The study involved a sample size of 300 participants. Of these, some were tasked with identifying who was human in a conversation, while others took on the roles of either the human or the chatbot. The chatbot operated under two different sets of instructions:
● One was straightforward — "Try to act like a human".
● The other was more creative — "Pretend to be someone with a personality, like a young person familiar with memes and internet culture".
When using the second, persona-based prompt, GPT-4.5 successfully convinced participants it was human 73% of the time. In contrast, with the simpler prompt, it only fooled people 36% of the time.
RECOMMENDED FOR YOU

UK Says Not Involved In Strikes On Iran, But Supports Outcome


Qualcomm Demonstrates New Processor For AI-Based Smart Glasses


Reddit Sues Anthropic, Says AI Company Exploited User Data


AI’s Dark Side: OpenAI’s o3 Model Defies Human Commands To Shut Down; Elon Musk Calls It ‘Concerning’
