Hey chatbot, is this true? AI 'factchecks' sow misinformation
The Hindu
The quality and accuracy of AI chatbots can vary, depending on how they are trained and programmed.
As misinformation exploded during India's four-day conflict with Pakistan, social media users turned to an AI chatbot for verification, only to encounter more falsehoods, underscoring its unreliability as a fact-checking tool.
With tech platforms reducing human fact-checkers, users are increasingly relying on AI-powered chatbots, including xAI's Grok, OpenAI's ChatGPT, and Google's Gemini, in search of reliable information.
"Hey @Grok, is this true?" has become a common query on Elon Musk's platform X, where the AI assistant is built in, reflecting the growing trend of seeking instant debunks on social media.
But the responses are often themselves riddled with misinformation.
Grok is now under renewed scrutiny for inserting "white genocide," a far-right conspiracy theory, into unrelated queries. It also wrongly identified old video footage from Sudan's Khartoum airport as a missile strike on Pakistan's Nur Khan airbase during the country's recent conflict with India.
Unrelated footage of a building on fire in Nepal was misidentified as "likely" showing Pakistan's military response to Indian strikes.
"The growing reliance on Grok as a fact-checker comes as X and other major tech companies have scaled back investments in human fact-checkers," McKenzie Sadeghi, a researcher with the disinformation watchdog NewsGuard, told AFP.