The Ministry of Electronics and Information Technology (MeitY) has updated its advisory to major social media companies regarding the use of artificial intelligence (AI) in the country. The revised advisory, issued on Friday, eliminates the need for intermediaries and platforms to seek government permission before deploying AI models and tools deemed 'under-tested' or 'unreliable.'
The new advisory supersedes the previous note issued on March 1, outlining due diligence procedures for intermediaries and platforms under the Information Technology Act, 2000, and Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. While the obligations remain unchanged, the language has been modified for clarity.
In the revised advisory, intermediaries are no longer required to submit an action-taken-cum-status report, though they must still comply with the guidelines immediately. The language has also been softened: the requirement for 'explicit permission' has been replaced with a directive to label under-tested or unreliable AI models so that users are informed of their potential fallibility or unreliability.
Moreover, the advisory emphasizes that AI models should not be used to share unlawful content, contribute to bias or discrimination, or threaten the integrity of the electoral process. Intermediaries are advised to use consent popups or similar mechanisms to explicitly inform users about the unreliability of AI models.
MeitY also underscores the need to identify and label deepfakes and misinformation using unique metadata or identifiers, without defining "deepfake." The advisory applies to eight significant social media intermediaries but excludes platforms like Adobe, Sarvam AI, and Ola's Krutrim AI.
The March 1 advisory faced criticism from startup founders, with concerns about its impact on innovation. The revised advisory aims to provide clearer guidelines while ensuring AI deployment transparency and accountability among major social media platforms.