Re-thinking content moderation: structural solutions beyond the GAC

This post is authored by Sachin Dhawan and Vignesh Shanmugam

The Grievance Appellate Committee (‘GAC’) provision in the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022 has attracted significant controversy. While it seeks to empower users to challenge the arbitrary moderation decisions of platforms, the provision itself has been criticised as arbitrary. Lawyers, privacy advocates, technology companies, and other stakeholders have raised numerous concerns about the constitutional validity of the GAC, its lack of transparency and independence, and the excessive power delegated to it.

Although these ongoing discussions on the GAC are necessary, they do not address the main concerns plaguing content moderation today. Even if sufficient legal and procedural safeguards are incorporated, the GAC will remain incapable of resolving the systemic issues in content moderation. This fundamental limitation persists because “governing content moderation by trying to regulate individual decisions is [like] using a teaspoon to remove water from a sinking ship”.

Governments, platforms, and other stakeholders must therefore focus on: (i) examining the systemic issues which remain unaddressed by content moderation systems; and (ii) ensuring that platforms implement adequate structural measures to reduce both the number of individual grievances and the incidence of systemic issues.

The limitations of current content moderation systems

Globally, most platforms rely on a case-by-case approach to content moderation. Due to the limited scope of this method, platforms are unable to resolve, or even identify, several types of systemic issues. This, in turn, increases the number of content moderation cases.

To illustrate the problem, here are a few examples of systemic issues which are unaddressed by content moderation systems: (i) coordinated or periodic attacks (such as mass reporting of users/posts) which target a specific class of users (based on gender, sexuality, race, caste, religion, etc.); (ii) differing content moderation criteria in different geographical locations; and (iii) errors, biases or other issues with algorithms, programs or platform design which lead to increased flagging of users/posts for content moderation.

Considering the gravity of these systemic issues, platforms must adopt effective measures to improve the standards of content moderation and reduce the number of grievances.

Addressing the structural concerns in content moderation systems

Several legal scholars have recommended the adoption of a ‘systems thinking’ approach to address the various systemic concerns in content moderation. This approach requires platforms to implement corporate structural changes, administrative practices, and procedural accountability measures for effective content moderation and grievance redressal. 

Accordingly, revising the existing content moderation frameworks in India to include the following key ‘systems thinking’ principles would help ensure fairness, transparency, and accountability in content moderation.

  • Establishing independent content moderation systems. Although platforms have designated content moderation divisions, these divisions are, in many cases, influenced by the platforms’ corporate, financial, or political interests, or the interests of advertisers, which directly impacts the quality and validity of their content moderation practices. Hence, platforms must implement organisational restructuring measures to ensure that content moderation and grievance redressal processes are (i) solely undertaken by a separate and independent ‘rule-enforcement’ division; and (ii) not overruled or influenced by any other division in the platform’s corporate structure. Additionally, platforms must designate a specific individual as the authorised officer in charge of the rule-enforcement division. This ensures transparency and accountability from a corporate governance viewpoint.
  • Robust transparency measures. Across jurisdictions, there is a growing trend of governments issuing formal or informal orders to platforms, including orders to suspend or ban specific accounts, take down specific posts, and so on. In addition to ensuring transparency in the internal functioning of their content moderation systems, platforms must also provide clarity on the number of measures undertaken (and other relevant details) in compliance with such governmental orders. Ensuring that platforms’ transparency reports separately disclose the frequency and total number of such measures will provide a greater level of transparency to users and the public at large.
  • Aggregation and assessment of claims. As stated earlier, individual cases provide limited insight into the overall systemic issues present on a platform. Platforms can gain far greater insight through (i) periodic aggregation of the claims they receive; and (ii) assessment of these aggregated claims for patterns of harm or bias (for example, assessing for the presence of algorithmic or human bias against certain demographics; a minimal illustrative sketch of such an assessment appears after this list). Doing so will illuminate algorithmic issues, design issues, unaccounted-for bias, and other systemic issues which would otherwise remain unidentified and unaddressed.
  • Annual reporting of systemic issues. To ensure internal enforcement of systemic reform, the rule-enforcement division must provide annual reports to the board of directors (or the appropriate executive authority of the platform), containing the systemic issues observed, recommendations for addressing them, and any protective measures to be undertaken by the platform. To aid in identifying further systemic issues, the division must conduct comprehensive risk assessments on a periodic basis and record its findings in the next annual report.
  • Implementation of accountability measures. As is established corporate practice for financial, accounting, and other divisions of companies, periodic quality assurance (‘QA’) and independent auditing of the rule-enforcement division will further ensure accountability and transparency.
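
To make the ‘aggregation and assessment’ step concrete, the following is a minimal sketch in Python. It assumes a hypothetical log of moderation claims with fields named demographic_group, was_flagged, and flag_source; these field names, the disparity threshold, and the analysis itself are simplifying assumptions for illustration, not a prescribed audit methodology.

```python
# Illustrative sketch only: aggregates hypothetical moderation-claim records
# and checks for demographic groups whose content is flagged at a
# disproportionate rate. Field names and the disparity threshold are
# assumptions for demonstration purposes.
from collections import defaultdict

# Each record is one moderation claim; in practice this would come from
# the platform's periodic (e.g. quarterly) aggregation of claims.
claims = [
    {"demographic_group": "group_a", "was_flagged": True,  "flag_source": "algorithm"},
    {"demographic_group": "group_a", "was_flagged": False, "flag_source": "user_report"},
    {"demographic_group": "group_b", "was_flagged": True,  "flag_source": "user_report"},
    {"demographic_group": "group_b", "was_flagged": True,  "flag_source": "user_report"},
]

def flag_rates_by_group(records):
    """Return the share of claims flagged, per demographic group."""
    totals = defaultdict(int)
    flagged = defaultdict(int)
    for r in records:
        totals[r["demographic_group"]] += 1
        if r["was_flagged"]:
            flagged[r["demographic_group"]] += 1
    return {g: flagged[g] / totals[g] for g in totals}

def disparities(rates, threshold=1.25):
    """Flag groups whose rate exceeds the overall mean rate by `threshold`x."""
    overall = sum(rates.values()) / len(rates)
    return {g: rate for g, rate in rates.items() if rate > threshold * overall}

def flags_by_source(records):
    """Count flagged claims per (group, source) pair, e.g. to spot mass reporting."""
    counts = defaultdict(int)
    for r in records:
        if r["was_flagged"]:
            counts[(r["demographic_group"], r["flag_source"])] += 1
    return dict(counts)

rates = flag_rates_by_group(claims)
print("Flag rates:", rates)                      # {'group_a': 0.5, 'group_b': 1.0}
print("Potential systemic bias:", disparities(rates))
print("Flags by source:", flags_by_source(claims))
```

In practice, such an assessment would also disaggregate flags by source (user reports versus algorithmic detection), as the last function sketches, to help distinguish coordinated mass-reporting campaigns from algorithmic bias, both of which are systemic issues noted above.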

Conclusion

Current discussions regarding content moderation regulation are primarily centred on the GAC and the various procedural safeguards which could rectify its flaws. However, even if the GAC becomes an effectively functioning independent appellate forum, the systemic problems plaguing content moderation will remain unresolved. It is for this reason that platforms must actively adopt the structural measures suggested above. Doing so will (i) improve the quality of content moderation and internal grievance decisions; (ii) reduce the burden on appellate forums; and (iii) decrease the likelihood of governments imposing stringent content moderation regulations that undermine the free speech rights of users.

On the Exclusion of Regulatory Sandbox Provisions from Data Protection Law

On November 18, 2022, the Ministry of Electronics & Information Technology (‘MeitY’) released the new Digital Personal Data Protection Bill, 2022 (‘2022 Bill’), its proposed legislation for governing personal data. Prior to the 2022 Bill, the Personal Data Protection Bill, 2019 (‘2019 Bill’) was the proposed legislation to govern personal data and protect data privacy. The 2019 Bill was withdrawn during the Monsoon session of Parliament in August 2022, after receiving significant amendments and recommendations from the Joint Committee of Parliament in 2021.

The 2022 Bill has removed several provisions of the 2019 Bill, one of which pertained to the creation of a regulatory sandbox for encouraging innovation in artificial intelligence, machine learning, and other emerging technologies (under Clause 40 of the 2019 Bill). While some experts have criticised the 2022 Bill for not retaining this provision, I contend that the removal of the regulatory sandbox provision is a positive aspect of the 2022 Bill. In general, regulatory sandbox provisions should not be incorporated into data protection laws, for the following reasons:

  1. The limited scope and purpose of data protection legislation

Data protection laws are drafted with the specific purpose of protecting the personal data of individuals, creating a framework for processing personal data, and laying down specific rights and responsibilities for data fiduciaries and processors. Although firms participating in a sandbox may process personal data, the functions of sandboxes are more expansive than regulating personal data processing. The primary purpose of regulatory sandboxes is to create isolated, controlled environments for the live testing, development, and restricted, time-bound release of innovations. Sandboxes are also set up to help regulatory authorities monitor these innovative technologies and frame adaptive regulations for them, as they are either partially or completely outside the purview of existing legislations.

Since the scope of regulatory sandboxes is broader than that of data protection legislations, it is insufficient to include a sandbox provision in a data protection legislation with only limited compliances and exemptions from the provisions of that legislation. A separate legislation is required to regulate such emerging technologies.

The regulatory sandbox framework under the European Union’s proposed Artificial Intelligence Act, 2021 (‘EU AI Act’), as well as the regulatory sandboxes established by SEBI, RBI, and other authorities in India, demonstrate this clearly. These frameworks are established separately from existing legislations, and set out a specific scope and purpose for the sandbox in a clear and detailed manner.

  2. The limited expertise and conflicting mandate of a data protection authority

Data protection authorities (‘DPAs’) are appointed to protect the rights of data principals. They lack the necessary expertise in emerging technologies to also function as the supervisory authority for a regulatory sandbox. Hence, a regulatory sandbox must be monitored and supervised by a separate authority with expertise in the specific areas for which the sandbox is created.

Moreover, it is not sufficient to merely constitute a separate authority for sandboxes within a data protection law. Since the supervisory authority for sandboxes is required to privilege the innovation and development of technologies over the strict protection of personal data, its functions will directly conflict with those of the DPA. Therefore, the regulatory sandbox framework must be incorporated in a separate legislation altogether.

  3. Sector-specific compliance provisions for regulatory sandboxes

The desire to regulate artificial intelligence and other emerging technologies under a data protection legislation is understandable, as these technologies process personal data. However, AI systems and other emerging technologies also process non-personal data and anonymised data.

Regulatory sandboxes for these technologies are thus not only subject to the principles of data protection law, but are in fact a nexus for information technology law, anti-discrimination law, consumer protection law, e-commerce law, and other applicable laws. Accordingly, the framework for a regulatory sandbox cannot be placed within a data protection legislation or rules subordinate to such a legislation. It has to be regulated under a separate framework which ensures that all the relevant laws are taken into account, and that the safeguards are not limited to personal data safeguards.

Since the exemptions, risk-mitigation measures, and compliance requirements for different emerging technologies must be specifically tailored to those technologies (across various laws), the regulatory mechanism for them cannot be provided in a data protection legislation.

Conclusion

The above arguments establish the basis for not incorporating sandbox provisions within a data protection legislation. The innovation-centric mandate of the sandbox framework and the functions of its supervisory authority conflict with the core principles of data protection law and the primary functions of DPAs. The limited scope of data protection law, coupled with the lack of expertise of DPAs, decisively establishes the incongruence between the regulatory sandbox provision and data protection legislations.

Commentators who critique the exclusion of the sandbox provision from the 2022 Bill are right to be concerned about rapid developments in artificial intelligence and other emerging technologies. But it would be far more prudent for them to recommend that the Central government set up an expert committee to analyse these developments and prepare a separate framework for the sector. Such a framework can comprehensively account for the various mechanisms (beyond data protection) required to govern these emerging technologies.

Comparative tracker for the Digital Personal Data Protection Bill, 2022

The Digital Personal Data Protection Bill, 2022 (“2022 Bill”) was released by the Ministry of Electronics and Information Technology on November 18, 2022, with the stated intent of being concise, comprehensible, and simple for citizens. To this end, the 2022 Bill makes significant changes to the framework of the earlier Personal Data Protection Bill, 2019 (“2019 Bill”), which was withdrawn in August 2022 during the Monsoon session of Parliament.

We have prepared this detailed tracker to record the changes made in the 2022 Bill and to compare the key provisions of the 2022 Bill with those of the 2019 Bill. The tracker can serve as a helpful reference while analysing the two Bills, or as a quick guide to the changes brought about by the 2022 Bill.

The tracker uses the 2019 Bill as the reference point for the changes, as it was the last version of the data protection bill introduced before Parliament as a comprehensive legislation. We have analysed each clause and sub-clause of the 2022 Bill and compared it to the corresponding provisions of the 2019 Bill. We have provided the full text of the provisions (highlighting the differences) as well as a brief summary of the changes under the 2022 Bill. Readers may use the 2022 Bill as the base when looking for changes made to specific provisions of the 2019 Bill.

As public and expert analyses of the 2022 Bill are still developing, we invite comments on any errors or omissions of corresponding provisions in this tracker.

The tracker can be accessed here.

Please note: a detailed list of the removed provisions from the 2019 Bill will be added to the tracker in the next few days.

(Compiled by Tejaswita Kharel and Vignesh Shanmugam)

(Update January 24, 2023: We have included in the tracker a detailed list of the provisions removed from the 2019 Bill, along with our comments on the same)