“Study: Nearly Half of AI Assistants Provide Misleading Information”

A recent study conducted by the European Broadcasting Union (EBU) and the BBC revealed that nearly half of the responses provided by leading AI assistants contain misleading information. The international research, which analyzed 3,000 responses in 14 languages, assessed AI assistants such as OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity on accuracy, sourcing, and their ability to distinguish opinion from fact.

According to the findings, 45% of the AI responses examined contained significant issues, with 81% displaying some form of problem. The study also highlighted that approximately 7% of online news consumers and 15% of individuals under 25 rely on AI assistants for news consumption.

In response to the study, Google said its Gemini assistant is committed to improving based on user feedback, while OpenAI and Microsoft acknowledged the problem of “hallucinations” in AI models and said they are working to address it. Perplexity, meanwhile, claims a 93.9% factual-accuracy rate on its website.

The study further found that a third of AI assistant responses contained serious sourcing errors, with Gemini showing the highest rate of significant sourcing issues among the assistants tested. Accuracy problems, such as outdated information, appeared in 20% of responses across all AI assistants studied.

Examples of misinformation highlighted in the study included Gemini’s inaccurate statement about changes to a law on disposable vapes and ChatGPT falsely reporting Pope Francis as the current Pope months after his passing. The research involved 22 public-service media organizations from 18 countries, including prominent entities like CBC, Radio-Canada, and others from various nations.

The EBU emphasized the importance of AI companies improving the accuracy and accountability of their AI assistants in responding to news-related queries. They stressed the need for AI assistants to be held to the same level of accountability as traditional news organizations in identifying and correcting errors to maintain public trust and ensure democratic participation.
