ChatGPT, Gemini, Copilot, Others Generating Research Papers, Journals That Don't Exist: Red Cross

A possible reason for this is that these AI systems are geared towards generating an answer, irrespective of missing or non-existent historical sources.

The Red Cross warned users that if they are referred to any information from an AI source, it should be scrutinised as it might be inaccurate. (Photo: Envato)

Quick Read

  • Generative AI chatbots fabricate entirely fictional research records, says Red Cross
  • AI models hallucinate non-existent research papers and archival sources
  • These bots do not verify or cross-check information before generating content

Generative AI chatbots like OpenAI's ChatGPT, Google's Gemini and Microsoft's Copilot are generating research records that are entirely fabricated and non-existent, according to a statement from the International Committee of the Red Cross.

These artificial intelligence large language models are hallucinating entirely fictional sources for research data, including research papers, journals, and archives, said the organisation, which administers some of the world's most widely used research archives.

The non-profit organisation stated that these AI bots do not undertake research, cross-check references, or verify the veracity of the information they present to the user; instead, they simply generate content.

Generative AI bots generate new content based on statistical patterns and may, therefore, produce invented catalogue numbers, descriptions of documents or even references to platforms that have never existed.
International Committee of the Red Cross

A possible reason for this is that these AI systems are geared towards generating an answer, irrespective of missing or non-existent historical sources, according to the humanitarian body. The systems, in turn, invent sources when no sources exist.

"Because their purpose is to generate content, they cannot indicate that no information exists; instead, they will invent details that appear plausible but have no basis in the archival record," the statement added.

The Red Cross warned users to scrutinise any information referred to them by an AI source, as it might be inaccurate. It recommended that people look up these archives themselves online and check whether they are referenced in published scholarly articles.


WRITTEN BY
Prajwal Jayaraj
Prajwal Jayaraj covers business news for NDTV Profit.