India has asked tech companies to obtain government approval before publicly releasing artificial intelligence (AI) tools that are “unreliable” or still in trial. It also says such tools should carry labels warning users that they may return incorrect responses to their queries.
In an advisory sent to platforms last Friday, the country’s IT ministry stated that the use of such tools, including generative AI, and their “availability to the users on Indian Internet must be done so with explicit permission of the Government of India.”
Countries across the world are racing to draw up rules to regulate AI. India has been tightening regulations for social media companies, which count the South Asian nation as a top growth market.
The advisory came a week after a top minister, on Feb. 23, lambasted Google‘s Gemini AI tool over a response suggesting that Indian Prime Minister Narendra Modi has been accused by some of implementing policies characterised as “fascist”.
These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code. @GoogleAI @GoogleIndia @GoI_MeitY https://t.co/9Jk0flkamN
— Rajeev Chandrasekhar 🇮🇳(Modiyude Kutumbam) (@Rajeev_GoI) February 23, 2024
A day later, Google said it had worked quickly to address the issue, adding that the tool “may not always be reliable”, particularly for current events and political topics.
“Safety and trust is platforms legal obligation. ‘Sorry Unreliable’ does not exempt from law,” deputy IT minister Rajeev Chandrasekhar said on the social media platform X in response to Google’s statement.
India’s Friday advisory also asked platforms to ensure that their AI tools do not “threaten the integrity of the electoral process”. India’s general elections are due this summer, in which the ruling Hindu nationalist party is expected to secure a clear majority.