OpenAI's GPT-4.5 Model Scores 73% In Turing Test, Appears More Human-Like Than Humans In New Study

OpenAI's GPT-4.5 model was judged to be human more often than actual human participants in a recent Turing test, a benchmark for assessing human-like intelligence, according to a new study.

The findings of the study, which is still in the preprint stage, showed that the large language model was identified as human 73% of the time when it was instructed to adopt a persona—significantly higher than the 50% expected by random chance.

The study was conducted by researchers at the University of California, San Diego, and involved nearly 300 people, according to lead author Cameron Jones. Participants were randomly assigned to act either as the interrogator or as one of two "witnesses" in a conversation: one a real human and the other an AI model. The interrogator chatted with both and then had to judge which witness was the human.
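That setup is also why 73% is meaningfully above chance: with one human and one AI witness per conversation, a purely guessing interrogator would pick the AI as the human about half the time. Below is a minimal sketch of how such a check could look, using a hypothetical number of conversations (the preprint reports the actual trial counts) and SciPy's binomial test:

```python
from scipy.stats import binomtest

# Hypothetical figures for illustration only; the preprint reports the
# real trial counts. Suppose GPT-4.5 (with the persona prompt) was judged
# to be the human in 73% of 200 conversations.
n_conversations = 200                           # assumed trial count
n_judged_human = round(0.73 * n_conversations)  # ~146 "human" verdicts

# With one human and one AI witness per conversation, a guessing
# interrogator would identify the AI as the human 50% of the time.
result = binomtest(n_judged_human, n_conversations, p=0.5, alternative="greater")
print(f"win rate: {n_judged_human / n_conversations:.0%}, "
      f"p-value vs. chance: {result.pvalue:.1e}")
```

Under these assumed numbers the p-value is vanishingly small, which is the sense in which a 73% "win rate" sits significantly above the 50% expected from random guessing.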

"People were no better than chance at distinguishing humans from GPT-4.5 and LLaMa (with the persona prompt)," wrote lead author Cameron Jones, a researcher at UC San Diego's Language and Cognition Lab.

Jones further said that the findings suggest large language models could take the place of humans in "short interactions without anyone being able to tell".
