Crafting safe Generative AI systems
The Hindu
Artificial Intelligence regulation is necessary but not sufficient; a broader approach should be considered
The Generative AI revolution is upon us and will potentially unleash a wave of technical and social change. Large Language Models (LLMs) alone are predicted to add $2.6 trillion-$4.4 trillion annually to the global economy. As one example of their potential impact, consider the ongoing pilot of the Jugalbandi Chatbot in rural India (powered by ChatGPT). Jugalbandi promises to serve as a universal translator, accepting queries in local languages, retrieving answers from English-language sources, and presenting them back to users in their native language. This service alone could democratise access to information and improve the economic well-being of millions of people. And it is only one of hundreds of new services being developed.
However, alongside these positive developments, the AI revolution also brings risks. Most pressingly, AI-powered tools enable bad actors to create artificial entities that are indistinguishable from humans online (in speech, text, and video). Bad actors can misrepresent themselves or others and unleash a barrage of variations on old harms: misinformation and disinformation, security hacks, fraud, hate speech, shaming, and more.
In the U.S., an AI-generated image of the Pentagon burning spooked equity markets. Posts by fake Twitter and Instagram accounts promulgating strong political views have been reshared millions of times, contributing to polarised politics online. Cloned AI voices have been used to circumvent bank customer-authentication measures. A man in Belgium was allegedly driven to suicide by his conversations with an LLM. And recent elections in Turkey were marred by AI-generated deepfakes. Over one billion voters will head to the polls across the U.S., India, the EU, the U.K., and Indonesia in the next two years, and the risk of bad actors harnessing Generative AI for misinformation and election influence is growing steadily.
Concerns about the safety of Generative AI deployment, then, are rightly at the top of policymakers' agendas. Using AI tools to misrepresent people or create fake information is at the heart of the safety debate. Unfortunately, most of the proposals under discussion do not seem promising. A common regulatory proposal is to require all digital assistants (aka 'bots') to self-identify as such, and to criminalise fake media. While both measures could be useful in creating accountability, they are unlikely to address the challenge satisfactorily. Established companies may ensure their AI bots self-identify and publish only valid information; bad actors, however, will simply disregard the rule, capitalising on the trust created by compliant companies. We need a more conservative assurance paradigm, whereby all digital entities are assumed to be AI bots or fraudulent businesses unless proven otherwise.
Regulation is necessary but not sufficient; a broader approach should be considered to improve Internet safety and integrity. Based on our recent research at the Harvard Kennedy School, we propose an identity assurance framework. Identity assurance establishes trust between interacting parties by verifying the authenticity of the entities involved, enabling each to have confidence in the other's claimed identity. The key principles of this framework are that it be open to the numerous credential types emerging around the world, that it not be tied to any single technology or standard, and that it nevertheless provide privacy protections. Digital wallets are particularly important because they enable selective disclosure and protect users against government or corporate surveillance. This identity assurance framework would extend to humans, bots, and businesses alike.
Today, more than 50 countries have initiatives underway to develop or issue digital identity credentials which will form the foundation of this identity assurance framework. India, with Aadhaar, is in a leadership position to establish online identity assurance safeguards. The EU is now establishing a new identity standard which will also support online identity assurance, but full user adoption will likely take the rest of this decade.