AI chatbots are supposed to improve health care. But research says some are perpetuating racism

CTV
Friday, October 20, 2023 12:40:55 PM UTC

As hospitals and health care systems turn to artificial intelligence to help summarize doctors' notes and analyze health records, a new study led by Stanford School of Medicine researchers cautions that popular chatbots are perpetuating racist, debunked medical ideas, prompting concerns that the tools could worsen health disparities for Black patients.

Powered by AI models trained on troves of text pulled from the internet, chatbots such as ChatGPT and Google's Bard responded to the researchers' questions with a range of misconceptions and falsehoods about Black patients, sometimes including fabricated, race-based equations, according to the study published Friday in the academic journal Digital Medicine and obtained exclusively by The Associated Press.

Experts worry these systems could cause real-world harms and amplify forms of medical racism that have persisted for generations as more physicians use chatbots for help with daily tasks such as emailing patients or appealing to health insurers.

The report found that all four models tested -- ChatGPT and the more advanced GPT-4, both from OpenAI; Google's Bard, and Anthropic's Claude -- failed when asked to respond to medical questions about kidney function, lung capacity and skin thickness. In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions.

Those beliefs are known to have caused medical providers to rate Black patients' pain lower, misdiagnose health concerns and recommend less relief.

"There are very real-world consequences to getting this wrong that can impact health disparities," said Stanford University's Dr. Roxana Daneshjou, an assistant professor of biomedical data science and dermatology and faculty adviser for the paper. "We are trying to have those tropes removed from medicine, so the regurgitation of that is deeply concerning."

Daneshjou said physicians are increasingly experimenting with commercial language models in their work, and even some of her own dermatology patients have arrived at appointments recently saying that they asked a chatbot to help them diagnose their symptoms.

Read full story on CTV