
“AI systems can harness data from various training data-sets and not only replicate human biases, but may also amplify them at a widespread level. Additionally, AI systems may also harness inherent or inadvertent biases present in training data-sets which may not be intentionally fed by the developers.”
Introduction
The world is witnessing many sophisticated generative-AI systems which have profoundly changed how humans write, communicate and interpret content. These platforms offer services that immensely improve human efficiency in producing a wide range of content for a variety of tasks. Sometimes, however, these platforms may end up generating unethical or illegal content. AI systems may also absorb inherent or inadvertent biases present in training data-sets which were not intentionally fed in by the developers. Some of these biases may lead to glitches or to harmful content for end-users.

Risks emanating from harmful AI-generated content
Harmful content may be divided into (i) false content; and (ii) toxic content (controversial content, or content inciting violent action). Such content can cause a multitude of risks, including physical, mental, reputational, societal, financial and political harm. Recent instances of generative-AI platforms producing harmful content include: allegedly encouraging and providing a plan to assassinate the Queen; promoting racism and condoning the Holocaust; allegedly influencing elections; generating false and defamatory content; and generating fake photos of the Pentagon being bombed, which caused a brief dip in the US stock market. This necessitates a regime to affix liability and responsibility for such generative-AI defects.
Legislation governing AI:
A. European Union
The proposed EU AI Act, 2024 classifies AI systems into two categories, (i) high-risk AI systems and (ii) low-risk AI systems, and lays down several risk-mitigation measures for AI service providers to take before placing a product on the market or putting it into service. Further, to complement the AI Act, the European Commission has proposed two Directives, (i) the AI Liability Directive and (ii) the Product Liability Directive, which provide for a harmonized, risk-based, strict or fault-based liability regime to compensate users who suffer harm as a result of AI systems.
B. USA
The USA addresses toxic content under the Communications Decency Act, 1996 (CDA). While section 230 of the CDA provides a safe harbor to “interactive computer services”, liability can be fastened on them if they are found to be “information content providers”. Courts have debated extending this liability to AI algorithms responsible for content inciting violent action in Gonzalez v. Google, and law-makers and stakeholders in the policy and legal sectors have expressed diverse opinions on the issue. Under the CDA, the court undertakes a fact-specific inquiry into the nature of the AI algorithm used and the level of editorial agency involved, and then determines whether the “computer service provider” also “materially contributes” to the production of content by being responsible, in whole or in part, for the illegality of the content. Courts in the USA are currently considering this aspect in the defamation lawsuit against OpenAI.
C. Singapore
Singapore has proposed a generative-AI liability framework targeted at (i) incorrect content; and (ii) toxic content (deep-fakes or content inciting violence). It fastens liability on the developer and deployer of the AI system for incorrect content, and additional liability on other stakeholders, including the user (prompter), for toxic content. Salient features of the proposal, inter alia, include several suggested measures such as: (i) Retrieval-Augmented Generation (RAG) and reinforcement learning; (ii) user warnings to protect end-users from hallucinations; (iii) technological content-specific filters and red-teaming; (iv) recommendations on training data-sets; and (v) liability of the developer and deployer for negligence. The liability for incorrect information is fastened on the developer and deployer of the AI system taking into account several factors, such as whether it was a trivial wrong without any damage, whether the user suffered damage, and whether the deployer had a legal duty to provide correct information.
This is relevant because in some cases, for example chatbots used on the websites of entities, the entity has a duty to provide correct information to its users. A Canadian tribunal held the airline Air Canada liable for misrepresentation because the chatbot deployed on its website provided incorrect information to a customer.
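To illustrate one of the measures named in the Singapore proposal, the following is a minimal sketch of Retrieval-Augmented Generation (RAG): instead of answering from the model's parameters alone, the system first retrieves passages from a trusted document store and instructs the model to answer only from them, reducing the risk of hallucinated (incorrect) answers such as the one in the Air Canada case. The document store, keyword-overlap scoring and prompt layout below are hypothetical placeholders chosen for illustration, not part of any cited framework or real deployment.

```python
# Minimal, illustrative RAG sketch. All names and data here are
# hypothetical placeholders, not a real system or standard.

from typing import List

# A trusted knowledge base, e.g. an airline's verified policy pages.
DOCUMENTS: List[str] = [
    "Refund requests for bereavement fares must be made before travel.",
    "Checked baggage allowance is one bag of up to 23 kg on economy fares.",
    "Flight changes within 24 hours of booking are free of charge.",
]

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by naive keyword overlap and return the top k.
    Real systems would use vector embeddings and a vector database."""
    query_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_grounded_prompt(query: str, docs: List[str]) -> str:
    """Instruct the model to answer ONLY from the retrieved passages."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using only the passages below. If the answer is not in the "
        f"passages, say you do not know.\n\nPassages:\n{context}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    prompt = build_grounded_prompt("When must I request a bereavement refund?", DOCUMENTS)
    print(prompt)  # This grounded prompt would then be sent to the language model.
```

The design point relevant to liability is that a grounded answer can be traced back to the deployer's own documents, which makes it easier to determine who controlled the information the user relied on.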
D. Indian perspective
The relevant statutes in India which would apply to harmful content on digital media are, inter alia: (i) the Information Technology Act, 2000 and allied rules; and (ii) the Indian Penal Code, 1860 (or the Bharatiya Nyaya Sanhita, 2023). Section 79 of the IT Act, 2000, read with Rule 6 of the IT (Intermediary Guidelines) Rules, 2021, would require proving ‘actual knowledge’ on the part of the intermediary to displace the safe harbor and fasten liability. While it has been held that receipt by the intermediary of an order from a court or a notified government agency would constitute ‘actual knowledge’, applying the same standard to AI systems for content generated on their platforms would create several problems: the ‘black box’ problem in the functioning of AI systems makes it difficult to attribute the generation of harmful content to the deployer. It will be interesting to see how India places generative-AI systems vis-à-vis the IT Act and allied rules, which were legislated to regulate intermediaries, not AI platforms.
In a positive step, MEITY has issued an advisory dated 15 March 2024 which requires intermediaries and AI platforms to take several measures, such as: (i) labeling content; (ii) warning users and highlighting concerns about the fallibility/unreliability of AI-generated content; (iii) taking steps to mitigate bias or discrimination in AI algorithms; and (iv) taking steps to prevent the content outlined in Rule 3 of the IT Rules, 2021 from being generated.

Suggestions:
1. Control-based liability framework:
The two possible areas of regulation are: (i) strict or fault-based product liability; and (ii) liability for generating harmful content.
Considering the blurred lines of control in producing the output content, a control-based liability regime that fastens liability on all relevant stakeholders in the development, deployment and use of the generative-AI system may be the optimal solution. The degree of liability would vary depending on the role played by each stakeholder, the extent of control exercised, and the harm caused (if any), among other factors.
While determining this, it will be interesting to see how regulators consider the perspective of AI developers ‒ that biases may not be intentionally imputed into the AI system, yet the system may still exhibit hidden biases present in society without the developers' knowledge. Due to the sheer vastness of the training data in self-learning AI systems, developers and deployers may not be able to control or understand the decision-making mechanism of AI systems after a point. This is called the ‘black-box’ problem. While this may be a legitimate argument from developers, it is in tension with the fundamental pillars of ethical AI, viz. transparency and explainability, which exist to ensure a safe user experience. Hence, how liability is apportioned between developers and deployers in this control-based regime is another issue arising for the consideration of regulators.

2. Applicability of safe harbor provisions on generative AI systems
For determining fault-based liability for generating harmful content, it would be imperative to determine whether the AI system had the requisite independent agency to produce the output, or whether it merely re-assembled content already available on the internet. Here, a distinction would have to be made between the role of a mere neutral ‘service provider’ and that of an active ‘content producer’. Courts will face questions of law concerning the interpretation of terms such as “publication” of content and “person(s)” (including whether that term covers legal/artificial persons), along with other statutory terms which could be given a dynamic and purposive interpretation.

3. User Warnings and Labeling: Towards a safer user experience
One of the immediate steps which India can take is to mandate the labeling of AI-generated content across digital media. This can be done through watermarks, digital fingerprints, and labels on social media. In the USA, a bipartisan AI Labeling Act has been proposed in the Senate, aimed at ensuring users can distinguish AI-generated content from content which is not. This is important so that users are informed that content (such as deep-fakes) is AI-generated before they are influenced by it.
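As a simple illustration of what machine-readable labeling could look like in practice, the sketch below attaches a provenance record (a disclosure label, the generator name, a timestamp, and a content hash acting as a rudimentary digital fingerprint) to a piece of AI-generated text. The field names and layout are assumptions made for illustration only; they do not reproduce any existing labeling standard or statutory requirement.

```python
# Illustrative sketch of labeling AI-generated content with provenance
# metadata. Field names and structure are hypothetical, not a real standard.

import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(text: str, generator: str) -> dict:
    """Wrap AI-generated text with a disclosure label and a content hash
    that acts as a simple digital fingerprint for later verification."""
    return {
        "content": text,
        "label": "AI-generated content",
        "generator": generator,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256_fingerprint": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }

def verify_fingerprint(record: dict) -> bool:
    """Check that the content has not been altered since it was labeled."""
    expected = hashlib.sha256(record["content"].encode("utf-8")).hexdigest()
    return expected == record["sha256_fingerprint"]

if __name__ == "__main__":
    record = label_ai_content("Example synthetic news summary.", "hypothetical-model-v1")
    print(json.dumps(record, indent=2))
    print("Fingerprint valid:", verify_fingerprint(record))
```

A downstream platform could surface the disclosure label to viewers and use the fingerprint to detect whether labeled content has been tampered with after generation.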
Conclusion
Apart from the EU, USA and Singapore, many other countries, such as the United Kingdom, Canada, Brazil, China and Japan, have proposed white-papers or recommendatory strategy documents dealing with the risks and harm caused by AI systems, seeking to create a conducive AI landscape that promotes innovation and aids human life.