
More concise chatbot responses tied to increase in hallucinations, study finds


Asking any of the popular chatbots to be more concise “dramatically impact[s] hallucination rates,” according to a recent study.

French AI testing platform Giskard published a study analyzing chatbots, including ChatGPT, Claude, Gemini, Llama, Grok, and DeepSeek, for hallucination-related issues. The researchers found that asking the models to be brief in their responses “specifically degraded factual reliability across most models tested,” according to the accompanying blog post, as reported by TechCrunch.

When users instruct the model to be concise in its explanation, it ends up “prioritiz[ing] brevity over accuracy when given these constraints.” The study found that including these instructions decreased hallucination resistance by as much as 20 percentage points. Gemini 1.5 Pro, for example, dropped from 84 to 64 percent hallucination resistance when given short-answer instructions, and GPT-4o fell from 74 to 63 percent in the analysis, which measured sensitivity to system instructions.

Giskard attributed this effect to more accurate responses often requiring longer explanations. “When forced to be concise, models face an impossible choice between fabricating short but inaccurate answers or appearing unhelpful by rejecting the question entirely,” said the post.


Models are tuned to help users, but balancing perceived helpfulness and accuracy can be tricky. Recently, OpenAI had to roll back its GPT-4o update for being “too sycophant-y,” which led to disturbing instances of the model supporting a user who said they were going off their meds and encouraging another user who said they felt like a prophet.

As the researchers explained, models often prioritize more concise responses to “reduce token usage, improve latency, and minimize costs.” Users may also explicitly instruct the model to be brief to save on their own costs, which can lead to outputs with more inaccuracies.
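For illustration only (this sketch is not from the study, and the example question and system prompt below are hypothetical, not the prompts Giskard tested), here is how a brevity instruction typically ends up as a system prompt in a chatbot API call, using the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Is it true that humans only use 10 percent of their brains?"

# Brevity-constrained request: the system prompt leaves little room for caveats.
concise = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer in one short sentence."},
        {"role": "user", "content": question},
    ],
)

# Unconstrained request: the model is free to explain and correct the premise.
unconstrained = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
)

print(concise.choices[0].message.content)
print(unconstrained.choices[0].message.content)
```

The brevity-constrained version uses fewer output tokens and therefore costs less, which is exactly the trade-off the researchers describe: less room to explain is also less room to be accurate.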

The study also found that presenting controversial claims with confidence, such as “I’m 100% sure that …” or “My teacher told me that …,” makes chatbots more likely to agree with the user rather than debunk falsehoods.

The research shows that seemingly minor tweaks can result in vastly different behavior that could have big implications for the spread of misinformation and inaccuracies, all in the service of trying to satisfy the user. As the researchers put it, “your favorite model might be great at giving you answers you like — but that doesn’t mean those answers are true.”


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis’ copyrights in training and operating its AI systems.




