
ChatGPT, Gemini, Copilot, Others Generating Research Papers, Journals That Don't Exist: Red Cross

A possible reason for this is that these AI systems are geared towards generating an answer, irrespective of missing or non-existent historical sources.

The Red Cross warned users that any information referred to by an AI source should be scrutinised, as it might be inaccurate. (Photo: Envato)

Generative AI chatbots like OpenAI's ChatGPT, Google's Gemini and Microsoft's Copilot are generating research records that are entirely fabricated and non-existent, according to a statement from the International Committee of the Red Cross.

These artificial intelligence large language models are hallucinating entirely fictional sources for research data, including research papers, journals, and archives, said the organisation, which administers some of the most widely used research archives in the world.

The non-profit organisation stated that these AI bots do not undertake research, cross-check references or verify the veracity of the information they present to the user; instead, they generate content.

Generative AI bots generate new content based on statistical patterns and may, therefore, produce invented catalogue numbers, descriptions of documents or even references to platforms that have never existed.
International Committee of the Red Cross

A possible reason for this is that these AI systems are geared towards generating an answer, irrespective of missing or non-existent historical sources, according to the humanitarian body. The systems, in turn, invent sources when no sources exist.

"Because their purpose is to generate content, they cannot indicate that no information exists; instead, they will invent details that appear plausible but have no basis in the archival record," the statement added.

The Red Cross warned users that any information referred to by an AI source should be scrutinised, as it might be inaccurate. It recommended that people look up these archives themselves online and check whether they are referenced in published scholarly articles.
