Don't expect quick fixes in 'red-teaming' of AI models. Security was an afterthought

CTV
Monday, August 14, 2023 03:02:20 PM UTC

White House officials concerned by AI chatbots' potential for societal harm and the Silicon Valley powerhouses rushing them to market are heavily invested in a three-day competition ending Sunday at the DefCon hacker convention in Las Vegas.

Some 3,500 competitors have tapped on laptops seeking to expose flaws in eight leading large-language models representative of technology's next big thing. But don't expect quick results from this first-ever independent "red-teaming" of multiple models.

Findings won't be made public until about February. And even then, fixing flaws in these digital constructs -- whose inner workings are neither wholly trustworthy nor fully fathomed even by their creators -- will take time and millions of dollars.

Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.

"It's tempting to pretend we can sprinkle some magic security dust on these systems after they are built, patch them into submission, or bolt special security apparatus on the side," said Gary McGraw, a cybersecurity veteran and co-founder of the Berryville Institute of Machine Learning.

DefCon competitors are "more likely to walk away finding new, hard problems," said Bruce Schneier, a Harvard public-interest technologist. "This is computer security 30 years ago. We're just breaking stuff left and right."

Michael Sellitto of Anthropic, which provided one of the AI testing models, acknowledged in a press briefing that understanding their capabilities and safety issues "is sort of an open area of scientific inquiry."

Conventional software uses well-defined code to issue explicit, step-by-step instructions. OpenAI's ChatGPT, Google's Bard and other language models are different. Trained largely by ingesting -- and classifying -- billions of datapoints in internet crawls, they are perpetual works-in-progress, an unsettling prospect given their transformative potential for humanity.

After publicly releasing chatbots last fall, the generative AI industry has had to repeatedly plug security holes exposed by researchers and tinkerers.

Read full story on CTV