Introduction:

With the widespread use of AI in recent times, threats to personal data, sensitive information, and overall security have become a major concern. Additionally, increasing incidents of deepfakes, voice scams, social bias, financial fraud, and misinformation are making the regulation of AI urgent. To address these concerns and monitor AI models, the Ministry of Electronics and Information Technology (MeitY) issued a revised advisory on March 15, 2024, replacing the earlier advisory of March 1, 2024, which had faced criticism from various tech firms and startups.

Applicability:

This advisory applies to all intermediaries and platforms that use AI models, large language models (LLMs), AI software, and algorithms. The term "platform", however, remains vague, as it has not been defined under the IT Rules.

Intermediary has been defined under Section 2(1)(w) of the Information Technology Act, 2000 as a person who receives, stores or transmits any electronic record, or provides any service relating to such record, on behalf of another person. Intermediaries include network service providers, telecom service providers, internet service providers, search engines, web-hosting service providers, online auction sites, online payment sites, online marketplaces and cyber cafes.

Key Highlights of the Advisory:

  1. Every intermediary and platform must ensure that the use of AI models, LLMs, Generative AI, software, or algorithms on their systems does not let users post, display, upload, or share any unlawful content as defined in Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and other existing laws.

Further, compliance with Rule 3(1)(b) becomes crucial to ensuring the due diligence prescribed under the Rules. Rule 3(1)(b) specifically states that an intermediary, including a social media intermediary and a significant social media intermediary, shall observe the following while discharging its duties:

The rules and regulations, privacy policy or user agreement of the intermediary shall inform the user of its computer resource not to host, display, upload, modify, publish, transmit, store, update or share any information that:

  • belongs to another person and to which the user does not have any right;
  • is defamatory, obscene, pornographic, paedophilic, invasive of another’s privacy, including bodily privacy, insulting or harassing on the basis of gender, libellous, racially or ethnically objectionable, relating or encouraging money laundering or gambling, or otherwise inconsistent with or contrary to the laws in force;
  • is harmful to child;
  • infringes any patent, trademark, copyright or other proprietary rights;
  • violates any law for the time being in force;
  • deceives or misleads the addressee about the origin of the message or knowingly and intentionally communicates any information which is patently false or misleading in nature but may reasonably be perceived as a fact;
  • impersonates another person;
  • threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign States, or public order, or causes incitement to the commission of any cognisable offence or prevents investigation of any offence or is insulting other nation;
  • contains software virus or any other computer code, file or program designed to interrupt, destroy or limit the functionality of any computer resource;
  • is patently false and untrue, and is written or published in any form, with the intent to mislead or harass a person, entity or agency for financial gain or to cause any injury to any person;
  2. Every intermediary and platform must ensure that their computer systems, whether by themselves or through the use of AI models, LLMs, Generative AI, software, or algorithms, do not allow any bias or discrimination, and do not threaten the integrity of the electoral process.
  3. Artificial Intelligence foundational models, LLMs, Generative AI, software, or algorithms that are under-tested or unreliable should only be made available to users in India if they are clearly labeled to indicate the possible inherent fallibility or unreliability of the output generated. Additionally, consent pop-ups or similar mechanisms should be used to explicitly inform users about these potential issues.
  4. Every intermediary and platform must inform users through their terms of service and user agreements about the consequences of handling unlawful information. This includes disabling or removing such information, suspending or terminating the user’s account access or usage rights, and possible legal penalties under the applicable laws.
  5. Where an intermediary’s software or computer resource allows the synthetic creation, generation, or modification of text, audio, visual, or audio-visual information that could be used as misinformation or a deepfake, it should ensure that this information is labeled or embedded with permanent, unique metadata or identifiers. This will make it possible to identify that the information was created or modified using the intermediary’s resource. Additionally, if a user makes any changes, the metadata should be configured to identify the user or computer resource responsible for the change. An illustrative sketch of such metadata labeling follows this list.
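
To make the metadata-labeling idea more concrete, below is a minimal, hypothetical sketch in Python showing how a platform might embed a provenance record into an AI-generated PNG image using the Pillow library. The function name label_generated_image and the field names (ai_provenance, generator, content_id) are illustrative assumptions, not terms prescribed by the advisory.

import datetime
import json
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_generated_image(src_path, dst_path, platform_id):
    # Build a provenance record; the field names here are hypothetical.
    provenance = {
        "generator": platform_id,                  # identifier of the generating platform
        "content_id": str(uuid.uuid4()),           # unique identifier for this output
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "ai_generated": True,
    }
    # Store the record as a PNG text chunk named "ai_provenance".
    metadata = PngInfo()
    metadata.add_text("ai_provenance", json.dumps(provenance))
    # Re-save the image with the embedded metadata.
    with Image.open(src_path) as image:
        image.save(dst_path, pnginfo=metadata)
    return provenance

# Example usage:
# label_generated_image("output.png", "output_labeled.png", "example-platform")

Note that plain PNG text chunks can be stripped when an image is re-encoded, so they are not "permanent" on their own; robust implementations would more likely rely on tamper-evident mechanisms such as cryptographically signed content credentials or watermarking, which are beyond the scope of this sketch.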

In conclusion, the Ministry of Electronics and Information Technology’s updated advisory is a crucial step to ensure the safe and responsible use of AI. It requires platforms using AI to follow strict rules to prevent issues like misinformation, deepfakes, and data breaches. The advisory highlights the importance of informing users about the risks of AI-generated content and ensuring transparency by labeling such content clearly. By enforcing these guidelines, the ministry aims to protect users and maintain the integrity of information shared online, addressing the urgent need for effective AI regulation.

Source:

  1. https://www.meity.gov.in/writereaddata/files/Advisory%2015March%202024.pdf
  2. https://www.meity.gov.in/content/information-technology-intermediary-guidelines-and-digital-media-ethics-code-rules-2021

 

Disclaimer: This is an effort by Lexcomply.com to contribute towards improving the compliance management regime. Users are advised not to construe this service as a legal opinion and should seek the views of subject-matter experts.
