India’s foray into the vertical regulation of AI technologies

By Nidhi Singh

AI governance has been trending in regulatory and policy circles in recent years. Given the economic potential of AI and the rapid pace of development in the field, there have been many calls to strengthen the regulation of AI applications. In this post, we discuss some of the approaches to AI governance currently emerging in India.

AI Regulation: Vertical vs Horizontal Approach

Globally, there are two broad approaches to AI regulation – the horizontal approach and the vertical approach. The debate between them revolves around the scope and specificity of regulation. A horizontal regulatory framework, exemplified by the European Union’s AI Act, seeks to provide overarching guidelines that apply uniformly across various sectors and applications of AI. The AI Act thus applies to all uses of AI across sectors, from facial recognition technologies and self-driving cars to the use of AI in video games. This approach lays down a basic level of protection for all AI applications used in the EU, and uses a risk-based framework to regulate more strictly those AI systems that have a greater impact on human rights.

In contrast, a vertical approach involves tailoring regulations to specific applications of AI, resulting in targeted governance, such as China’s regulation of recommendation algorithms or its draft rules on Generative AI. Vertical regulation permits more nuanced, sector-specific laws that address the particular concerns likely to arise in specialised fields like healthcare, insurance, or fintech.

Indian scenario – Horizontal approach

India does not currently follow any one specific approach to AI governance. The first concrete foray into AI governance in India can be traced back to NITI Aayog’s National Strategy on Artificial Intelligence, released in 2018. This was followed by a range of other policy documents, including the AI for All principles released in 2020 and 2021, and the Department of Telecommunications’ document on the AI stack. All of these documents took a broad, principles-based approach to AI governance, focusing on the development and application of AI ethics across sectors, to all AI applications in India.

AI for All also discusses the idea of “contextualising AI governance to the prevailing constitutional morality”. This speaks to the broader idea of embedding constitutional principles such as non-discrimination, privacy, and the right to freedom of speech and expression into AI regulation, though the document does not indicate how this would be implemented. The documents also laid down broad principles for responsible AI, such as safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and the protection and reinforcement of positive human values.

This broad, principles-based approach aligns more closely with the horizontal regulatory model: it applies across sectors and does not depend on the specific use cases in which AI is deployed. These principles would thus govern AI-based systems in insurance, employment, and education, as well as their use in smart cities and self-driving cars.

Shifting to the vertical approach

The AI regulatory landscape in India has changed over the last two years. In March 2023, the Indian Council of Medical Research (ICMR) released the “Ethical Guidelines for the Application of Artificial Intelligence in Biomedical Research and Healthcare”, the first set of guidelines to apply to the use of AI in the healthcare sector. The guidelines aim to ensure ethical conduct and provide guiding principles that experts and ethics committees can use while reviewing research proposals involving AI-based technologies.

(Source: Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare, ICMR)

The guidelines recognise the increasing scope for the use of AI in hospitals, research, and healthcare apps, and lay down comprehensive principles for the intersection of AI with medical research and healthcare. They set out an extensive framework, laying down protocols for how current medical ethics guidelines must be adapted to incorporate the use of AI, and how different stakeholder groups would implement this.

In another sector, recognising the potential of AI-based applications in urban planning, the Smart Cities Mission launched the ‘AI Playbook for Cities’ in 2022 as an instrument to aid administrators in adopting and deploying AI solutions.

(Infographic by Nidhi Singh)

The playbook draws on the responsible AI principles released by NITI Aayog in 2020, but goes a step further, providing ways to contextualise those principles for Smart Cities and to manage and mitigate the risks brought about by AI technologies. It states that while the ethical principles lay down a broad framework, they must be supplemented with more specific principles that create enforceable, targeted responsibilities for different types of stakeholders, such as industry, academia, and citizens.

Implications for AI governance

This shift in India’s AI strategy, from the initial horizontal framework of 2018 to a more vertical approach, reflects a growing recognition that nuanced regulation is needed to address the distinct challenges and opportunities presented by different AI applications, and that governance structures must adapt to a diverse and rapidly evolving technological landscape. Overall, we can see an evolution from a purely horizontal approach to a mixed approach that focuses on more sector-specific applications of the AI principles.

Over the last few years, there has been growing recognition of the economic potential of AI. With both States and private entities entering the fray, there has been a drastic increase in the number of AI applications used in India, as well as in the scope of their use. However, there are currently no immediate plans for AI-specific regulation in India akin to the EU AI Act.

While the upcoming Digital India Act may contain some provisions regulating AI, there is a clear lack of formal governance structures at the moment. Given the potential impact of AI on human rights, labour markets, and the economy more broadly, leaving it completely unregulated poses a significant threat to the well-being of individuals and society as a whole. Without proper regulation, there is a heightened risk of AI systems being deployed in ways that infringe upon fundamental rights such as privacy and freedom from discrimination. The unchecked proliferation of AI in labour markets could exacerbate existing inequalities and lead to widespread job displacement without adequate measures to support those affected. Furthermore, any use of AI systems by the State for welfare measures without safeguards could lead to widespread discrimination against vulnerable communities.

Therefore, it is imperative for India to establish comprehensive regulatory frameworks that address the unique challenges posed by AI, ensuring that its benefits are maximised while its risks are mitigated. 

(The opinions expressed in the blog are personal to the author/writer. The University does not subscribe to the views expressed in the article / blog and does not take any responsibility for the same.)

The Right to be Forgotten in India: An Evolution

by Ira Srivastava

(Ira is a 4th year law student at NLU Delhi)

The Right to be Forgotten (“RTBF”) is the right of a data principal (“DP”) to have their personal data removed or erased in certain circumstances, typically where consent for data collection is withdrawn, where the collected data has served its purpose, or where the DP requests that the data be taken down for other reasons. Because erasure is the ultimate remedy for exercising it, the Right is also known as the “right to erasure”. It is codified under Articles 17 and 19 of the European Union’s (“EU”) General Data Protection Regulation (“GDPR”). India’s Digital Personal Data Protection Act, 2023 lays down the “Right to correction and erasure of personal data” under Section 12, thus codifying the Right to be Forgotten in India.

This is part one of a two-part article tracing the evolution of the Right to be Forgotten in India. It focuses on the legislative developments that have led to the Right in its current form, and goes on to suggest how gaps in the Right can be filled and what steps can facilitate smooth implementation.

390 BCE.

One windy afternoon, in the Acropolis of Athens, two men are in conversation:

Kostas: Forget me Socrates, for I have sinned.

Socrates: What is your sin, my child?

Kostas: I have led a corrupt life in my past life and wish to be forgotten.

Socrates: Let us now join our hands and pray to the Pantheon of Gods.

Prayers begin.

There is an explosion. Lethe, a river of the Underworld, appears as a spring on the ground near the two men, and erases the memory of Kostas’ past life, not only from his own mind but also from the memories of all those who knew him.

Fast forward to 2016, when the GDPR was passed, formally introducing the right to erasure, more popularly known as the “Right to be Forgotten”, under its Article 17. While other events complemented its growth, the Right was only formalised through its codification in the GDPR.

The Justice B.N. Srikrishna Committee on “A Free and Fair Digital Economy” was constituted in July 2017, around the time of the Puttaswamy judgment, and submitted its Report in 2018. The Report recommended that the right to be forgotten be adopted based on five criteria:

  • Sensitivity of data
  • Scale of disclosure or degree of accessibility
  • Role of DP in public life
  • Relevance of data to public
  • Nature of disclosure and activities of data fiduciary

There existed a gap in the understanding of the RTBF, stemming from the conflict between two views: on the one hand, the RTBF forms an essential part of privacy; on the other, it had no statutory backing. This called for some form of standardisation, which the Personal Data Protection Bill, 2019 (“PDP Bill”) provided. Clause 20 of the PDP Bill envisaged a “Right to be Forgotten”, empowering the DP to restrict or prevent the continuing disclosure of personal data in certain circumstances, including where the purpose of collection had been served, where consent had been withdrawn, or where the disclosure was not in accordance with the Act. The biggest hurdle was enforcement: Clause 20(2) provided for enforcement only by an order of the Adjudicating Officer after a grievance redressal mechanism had been followed, with no specified timeline. Some guidelines were also listed for the Adjudicating Officer to bear in mind while issuing such an order.

Key concerns flagged by stakeholders included that the nature and scope of the Right must be specified, that enforcement measures must be laid down, and that a timeline should be prescribed for the Privacy Officer to decide on an application.

The PDP Bill was then referred to a Joint Parliamentary Committee. The Committee, in its deliberations, took note of Article 17 of the GDPR. It observed that governing only disclosure narrows the scope of Clause 20, and accordingly recommended changes to bring “processing” within its scope. This drew much critique from stakeholders, who claimed their key concerns had not been addressed.

The Draft Digital Personal Data Protection Bill, 2022 contained a much watered-down version of this Right in Clause 13. It provided that the DP would have the right to correction and erasure of personal data, and enumerated the rights available to the DP, including correction of inaccuracies, completion, updating, and erasure of personal data no longer serving the purpose of processing.

The Digital Personal Data Protection Bill, 2023 – which was passed by both Houses of Parliament – contains the “Right to correction and erasure of personal data” under its Section 12. It, too, lists the rights available to a DP. Additionally, it obliges a data fiduciary (“DF”) to comply with requests for correction, completion, or updating upon receipt of a request from the DP, unless necessary for legal compliance. The assumption seems to be that the DF will comply. However, the vast difference in bargaining power makes fiduciaries extremely powerful and effectively leaves compliance to their discretion.

It is acknowledged that what works for Europe will not necessarily work in India, owing to social, cultural, economic, and other differences. However, borrowing from best practices will help make India a competitive global market. Major reasons for the effective implementation of the GDPR across the European Union include strict enforcement measures, hefty fines, and an efficient dispute-resolution mechanism. One example is the €50 million fine imposed on Google by the French data protection authority, CNIL, for forcing consent by offering only one option: consent in full to non-specific, poorly explained uses of one’s data, or do not proceed at all.

At present, the Digital Personal Data Protection Bill, 2023 has been passed by both Houses of Parliament and has received the President’s assent, becoming the Digital Personal Data Protection Act, 2023 (“DPDP Act”). It awaits notification to come into effect. This intervening period must be leveraged to bridge gaps and address the concerns raised by stakeholders. One way to do so is by ensuring that the Rules governing the modalities of the Act are comprehensive. That will also ensure smooth implementation, which is key to the larger objectives the Act seeks to achieve in making India a competitive global market.

Particularly in the context of the RTBF, the two Rules that can be of use are:

  1. Specificity

The current version of the RTBF is too vague. The five-point criteria in the Srikrishna Committee Report must be adopted as a framework for assessing whether a particular data set should be erased or modified. At the very least, the circumstances listed under the 2019 Bill for when the RTBF could be exercised, such as when the purpose of collection was served or when consent was withdrawn or not in accordance with the Act, must be used as guidelines.

  2. Ensuring DFs’ proactive actions

The DPDP Act puts much of the compliance burden on DFs. This is a potential pitfall, as discussed above. One action to avoid the ill-effects is to prescribe:

  a. A timeline within which the RTBF request must necessarily be processed.

This will provide more certainty to the DPs as well. Responding within the timeline should be made compulsory for DFs.

  b. Hefty fines and penalties for wrongful non-compliance with the request.

A step that can realistically be borrowed from the GDPR is having hefty fines and penalties in place. That will also help bridge the gap of bargaining power between large corporations and individuals.

It has been a long journey from a judgment upholding the Right to Privacy to legislation putting it into force. The passing of the Bill in both Houses shows legislative intent and, with the President’s assent, a start in the right direction. However, its effectiveness will turn on implementation mechanisms yet to be put in place.

In a country with a population of 1.42 billion, of which at least 1.2 billion are mobile phone users, there is a great responsibility to ensure the data privacy of citizens, particularly of personal data. The passing of the DPDP Bill is a welcome first step, but there is a long way to go. How the Right to be Forgotten clause and other clauses will be implemented remains to be seen. Putting an individual’s right to data privacy at the core of policy decisions will be fundamental to effectively securing the Right to be Forgotten.

Metaverse and the Global South: Bridging the Digital Divide

By Nidhi Singh

The Metaverse has become a buzzword over the last year or so, since a popular tech giant announced plans to rebrand itself and focus on bringing the concept of the Metaverse to life. While the buzz has brought the Metaverse into the public eye, it is by no means a novel concept. The idea of a Metaverse, a shared virtual space where people can interact with each other and with virtual objects and experiences, has been around for decades. The term was coined in 1992 in the book “Snow Crash”, which imagined the Metaverse as an all-encompassing digital world existing parallel to the physical world. With recent advances in technology and the proliferation of the internet, however, the Metaverse is closer than ever to becoming a reality. The current buzz is also bolstered by its potential for economic growth: certain projections estimate that the Metaverse could generate up to USD 5 trillion in value by 2030, making it an opportunity too big to miss.

What is the Metaverse? 

In simple terms, the most common conception of the Metaverse today is a 3D model of the internet, envisioned as the next step in the development of online information interaction. In its original conception, it would ideally be accessible through a single gateway and, as it developed, become equivalent to the real world, “the next evolution in social technology”. The idea is still in development, and while it appears that it may include components of Virtual Reality (VR) and Augmented Reality (AR) technologies, it is difficult to say how this definition will evolve over time.

Different companies, however, still have different conceptions of Metaverse technology, ranging from the use of Extended Reality (XR) for a fully immersive experience to simple video games that now host art galleries and concerts. As the Metaverse is currently being built, there is little agreement on what its future iteration will look like. Depending on how the technology evolves, the Metaverse could end up being anything from a few niche applications employing greater use of VR and AR, to a full-scale 3D model of the internet, or anything in between.

How does the Metaverse work?

The basic functions of an immersive online world with a digital economy, where users can create, buy, and sell goods, already exist in certain video games. Games like World of Warcraft allow users to create and sell digital goods inside the game, and Fortnite has previously introduced immersive experiences like concerts and installations within the game, providing a brief look into what the Metaverse could be. The current conception of the Metaverse is expected to be far more expansive, with everyone able to log into a shared online space.

Operationalising the Metaverse

So when can we all expect to be part of this new shared virtual online world? While some experts believe that a large portion of the population will have some access to the Metaverse by 2030, there are some basic challenges which must be addressed before this technology can be operationalised, particularly in Global South countries. 

A very basic problem with the widespread implementation of the Metaverse in India is likely to stem from the cost of entry, including the cost of VR hardware and other technology needed to operate the Metaverse. These technologies would also require far higher computing power than is currently available, amounting to an almost 1,000-fold increase in computational efficiency. While a large portion of the country is now connected to the internet through low-cost mobile data, the technologies required for the Metaverse remain out of reach for the vast majority. Coupled with the lack of infrastructure such as fast internet and high-computing-power systems, this will pose considerable challenges to people’s participation in the virtual world.

Another considerable barrier to access is the design of the Metaverse. The current conversation around the design and implementation of the Metaverse is dominated by the Global North, and it is likely that much of the virtual world which is currently being envisaged will be dominated by English language content and experiences which are designed for the western world. This would make it difficult for audiences from the Global South to fully engage in the new technology.

There are also concerns about how this technology could further deepen the digital divide. There is a risk that the Metaverse will exacerbate existing inequalities by creating a virtual space in which only those with access to technology and the resources to participate can engage. This would widen the digital divide between the Global North and the South, with the technology catering predominantly to those who already have easier access to it.

Finally, the Metaverse also raises questions around data protection and user privacy in the virtual world. In the absence of a cohesive legal and regulatory framework around data collection, use, and protection, users are at risk when they participate in virtual worlds and engage with the Metaverse. This is exacerbated in Global South countries, many of which are still formulating their data protection laws and do not yet have adequate legal and regulatory protections for data governance.

Addressing these challenges would require a collaborative effort between governments, businesses, and communities in the Global South. By working together, it may be possible to ensure that the benefits of the Metaverse are more widely distributed and that everyone has an opportunity to participate. This would require substantial changes to the current conversations around the Metaverse, which lack inclusivity in design and deployment. 

CCG-NLUD’s Statement on International Cooperation to the Fifth Session of the Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communication Technologies for Criminal Purposes

Sukanya Thapliyal

As an accredited stakeholder to the United Nations Ad Hoc Committee tasked with elaborating a comprehensive international convention on countering the use of information and communications technologies (ICTs) for criminal purposes (“the Ad Hoc Committee”), CCG-NLUD recently participated in the Fifth Session of this key process, which is setting the stage for the first universal and legally binding convention on cybercrime.

As we reported earlier, the negotiating process has reached a pivotal stage, wherein the Member Countries are negotiating on the basis of a Consolidated Negotiating Document (CND). The CND is prepared by the Chair of the Ad Hoc Committee and succinctly incorporates various views, proposals, and submissions made by the Member States at previous sessions of the Committee.

The previous sessions of the Ad Hoc Committee witnessed an exchange of Member States’ general views on the scope and objectives of the comprehensive convention, and agreement on its structure. This was followed by intense thematic discussions on provisions relating to criminalisation, procedural measures and law enforcement, international cooperation, technical assistance, and preventive measures, among others.

The Fifth Session of the Ad Hoc Committee aims to discuss the preamble, the provisions on international cooperation, preventive measures, technical assistance, the mechanism of implementation, and the final provisions. Besides the Member Countries, a multistakeholder group consisting of global and regional intergovernmental organisations, civil society organisations, academic institutions, and the private sector is also weighing in with inputs to support and contribute to the process.

CCG-NLUD welcomes the opportunity to submit its comments and inputs on the present text of the “Consolidated negotiating document on the preamble, the provisions on international cooperation, preventive measures, technical assistance and the mechanism of implementation and the final provisions of a comprehensive international convention on countering the use of information and communications technologies for criminal purposes”. CCG-NLUD presented the following statement on the provisions on international cooperation.

The provisions on international cooperation are a crucial aspect of the Convention, as they aim to encourage both formal and informal means of international cooperation for (i) the investigation and prosecution of offences covered under the Convention and (ii) the collection of evidence in electronic form of a criminal offence. The CND draws from common and well-understood principles and standards in the areas of extradition, mutual legal assistance, transfer of criminal proceedings, and other effective measures, while remaining sensitive to the divergent realities of participating member countries.

The CND text lays down general principles of international cooperation, specific provisions on extradition and the transfer of sentenced persons, and detailed provisions on mutual legal assistance among States’ law enforcement agencies. The CND also recognises that the various provisions laid down under the chapter on international cooperation must be aligned with the international human rights regime and ensure adequate protection of human rights and other fundamental freedoms.

The chapter aptly lays down overarching principles for international cooperation: it broadly outlines the scope and objective of such cooperation and recognises that the powers and procedures it sets out are subject to conditions and safeguards pertaining to the protection of human rights. The chapter also includes specific provisions on the protection of personal data transmitted from one State to another, and incorporates other important requirements such as purpose limitation and data minimisation to reduce harm to individuals.

CCG-NLUD is broadly in agreement with the above-mentioned provisions of the chapter on international cooperation. However, we conveyed several reservations and concerns, explained below:

Given that the powers and procedures laid down in the chapter are highly intrusive, the scope of international cooperation should be restricted to a narrow set of cyber-dependent crimes that satisfy the criterion of “dual criminality”. Further, the chapter should expressly mention “applicable human rights instruments” and other necessary safeguards for the protection of human rights and other fundamental freedoms. This will ensure that the powers and procedures laid out in the chapter are subject to adequate restrictions protecting against potential human rights abuses.

The provision on extradition should apply only to “serious crimes”, i.e., offences punishable by a maximum deprivation of liberty of at least four years or a more serious penalty, as defined under the United Nations Convention Against Transnational Organized Crime (UNTOC). The Convention should specify the evidentiary basis required for extradition and should also make specific reference to applicable international legal instruments, such as the International Covenant on Civil and Political Rights (ICCPR) and the UN Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment, to ensure adequate protection of human rights and other fundamental freedoms.

The powers and procedures laid down under the Convention mandate that State Parties develop guidelines on the format and duration of the preservation of digital evidence and information by service providers. We note that such authority should not result in data retention for indefinite periods and should not unnecessarily interfere with service providers’ data minimisation efforts. It is important that such guidelines incorporate ex-ante procedures requiring independent judicial authorisation, provision for adequate and timely notice to users, measures strictly necessary and proportionate to the stated aims, and an efficient mechanism for redressal, appeal, and review.

Readers can learn more about our submission on international cooperation below:

Examining ‘Deemed Consent’ for Credit-Scoring under India’s Draft Data Protection Law

By Shobhit Shukla

On November 22, 2022, the Ministry of Electronics and Information Technology released India’s draft data protection law, the Digital Personal Data Protection Bill, 2022 (‘Bill’).* The Bill sets out certain situations in which seeking an individual’s consent for the processing of their personal data is “impracticable or inadvisable due to pressing concerns”. In such situations, the individual’s consent is assumed, and they are not required to be notified of the processing. One such situation is processing in the ‘public interest’. The Bill illustrates certain public-interest purposes and, notably, includes ‘credit-scoring’ among them, in Clause 8(8)(d). Put simply, the Bill allows an individual’s personal data to be processed non-consensually, and without any notice to them, where such processing is for credit-scoring.

Evolution of credit-scoring in India

Credit-scoring is a process by which a lender (or its agent) assesses an individual’s creditworthiness, i.e., their notional capacity to repay prospective debt, represented by a numerical credit score. Until recently, lenders in India relied largely on credit scores generated by credit information companies (‘CICs’), licensed by the Reserve Bank of India (‘RBI’) under the Credit Information Companies (Regulation) Act, 2005 (‘CIC Act’). CICs collect and process ‘credit information’, as defined under the CIC Act, to generate such scores. For an individual, such information comprises chiefly the details of their outstanding loans and their history of repayment and defaults. However, with the expansion of digital footprints and advances in automated processing, the range of datasets deployed to generate credit scores has expanded significantly. Lenders increasingly use credit scores generated algorithmically by third-party service-providers. These agents aggregate and process a wide variety of alternative datasets relating to an individual alongside credit information; these may include the individual’s employment history, social media activity, and web browsing history. This allows them to build a highly data-intensive credit profile of (and assign a more granular credit score to) the individual, to assist lenders in deciding whether to extend credit. Not only does this enable lenders to make notionally better-informed decisions, but it also allows them to assess and extend credit to individuals with meagre or no prior access to formal credit.
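To make the mechanics concrete, the sketch below shows, in purely illustrative terms, how an algorithmic scorer might fold conventional credit information and “alternative” behavioural data into a single score. The feature names, weights, and score band are hypothetical assumptions for illustration, not any actual provider’s model.

```python
# Illustrative sketch only: a toy logistic-regression-style credit scorer that
# mixes conventional credit information with "alternative" behavioural data.
# All feature names, weights, and inputs are hypothetical assumptions.
import math

def credit_score(features: dict) -> int:
    # Hypothetical learned weights; a real model would be trained on
    # historical repayment data.
    weights = {
        "repayment_ratio": 2.5,      # conventional: share of loans repaid on time
        "outstanding_debt": -1.2,    # conventional: normalised outstanding debt
        "employment_years": 0.4,     # alternative: employment history
        "social_media_signal": 0.3,  # alternative: opaque behavioural proxy
    }
    bias = -0.5
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    p_repay = 1 / (1 + math.exp(-z))   # notional probability of repayment
    return int(300 + p_repay * 600)    # map to a 300-900 style score band

applicant = {
    "repayment_ratio": 0.9,
    "outstanding_debt": 0.2,
    "employment_years": 3.0,
    "social_media_signal": 0.1,
}
print(credit_score(applicant))  # prints a score in the 300-900 band
```

The point of the sketch is that opaque behavioural proxies enter the score on the same footing as repayment history, which is precisely the kind of processing that the Bill’s exemption would leave invisible to the individual.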

While neither the Bill nor its explanatory note explain why credit-scoring constitutes a public-interest ground for non-consensual processing, it may be viewed as an attempt to remove the procedural burden associated with notice-and-consent. In the context of credit-scoring, if lenders (or their agents) are required to provide notice and seek consent at each instance to process the numerous streams of an individual’s personal data, the procedural costs may disincentivise them from accessing certain data-streams. Consequently, with limited data to assess credit-risk, lenders may adopt a risk-averse approach and avoid extending credit to certain sections of individuals. Alternatively, they may decide to extend credit despite the supposed inadequacy of personal data, thereby exposing themselves to higher risk of repayment defaults. While the former approach would be inimical to financial inclusion, the latter could possibly result in accumulation of bad loans on lenders’ balance sheets. Thus, encouraging data-intensive credit-scoring (for better-informed credit-decisions and/or for widening access to credit) may conceivably be viewed as a legitimate public interest.

However, in this post, I contend that even if this were to be accepted, a complete exemption from notice-and-consent for credit-scoring poses a disproportionate risk to individuals’ right to privacy and data protection. The efficacy of notice-and-consent in enhancing informational autonomy remains debatable; however, a complete exemption from the requirement, without any accompanying safeguards, ignores specific concerns associated with credit-scoring.

Deemed consent for credit-scoring: Understanding the risks

First, the provision allows non-consensual processing of all forms of personal data, regardless of any correlation between such data and creditworthiness. In effect, this would encourage lenders to leverage the widest possible range of personal datasets. As research has demonstrated, the deployment of disparate datasets increases the incidence of inaccuracy as well as of spurious connections between data-inputs and outputs. In credit-scoring, the historical data on which the underlying algorithm is trained may conclude, for instance, that borrowers from a certain social background are likelier to default on repayment. Credit scores generated from such fallacious and/or unverifiable conclusions can embed systemic disadvantages into future credit-decisions and deepen the exclusion of vulnerable groups. The exemption from notice-and-consent would only increase the likelihood of such exclusion, since individuals would have no knowledge of the data-inputs used or the algorithm by which they were processed, and consequently no recourse against credit-decisions arrived at via such processing.

Second, the provision allows any entity to non-consensually process personal data for credit-scoring. Notably, CICs are specifically licensed by the RBI to, inter alia, undertake credit-scoring. Additionally, in November 2021, the RBI amended the Credit Information Companies Regulations, 2006, to provide an avenue for entities (other than CICs) to register with any CIC, subject to the fulfilment of certain eligibility criteria, and to consequently access and process credit information for lenders. By allowing any entity to process personal data (including credit information) for credit-scoring, the Bill appears to undercut the RBI’s attempt to limit the processing of credit information to entities under its purview.

Third, the provision allows non-consensual processing of personal data for credit-scoring at any instance. A plain reading suggests that such processing may be undertaken even before the individual has expressed any intention to avail credit. Effectively, this would provide entities a free rein to pre-emptively mine troves of an individual’s personal data. Such data could then be processed for profiling the individual and behaviourally targeting them with customised advertisements for credit products. Clearly, such targeted advertising, without any intimation to the individual and without any opt-out, would militate against the individual’s right to informational self-determination. Further, as an RBI-constituted Working Group has noted, targeted advertising of credit products can promote irresponsible borrowing by individuals, leading them to debt entrapment. At scale, predatory lending enabled by targeted advertisements could perpetuate unsustainable credit and pose concerns to economic stability.

Alternatives for stronger privacy-protection in credit-scoring

The above arguments demonstrate that a complete exemption from notice-and-consent for the processing of personal data for credit-scoring threatens individual rights disproportionately. Moreover, the exemption may undermine precisely the objectives that policymakers may be attempting to fulfil through it. Thus, Clause 8(8)(d) of the Bill requires serious reconsideration.

First, I contend that Clause 8(8)(d) should be deleted before the Bill is enacted into law. In view of the CIC Act, CICs and other entities authorised by the RBI under the CIC Act would, notwithstanding the deletion of the provision, continue to be able to access and process credit information relating to individuals without their consent; such processing would remain subject to the safeguards contained in the CIC Act, including the individual’s right to obtain a copy of such credit information from the lender.

Alternatively, the provision may be suitably modified to limit the exemption from notice-and-consent to certain forms of personal data. Such personal data may be limited to ‘credit information’ (as defined under the CIC Act) or ‘financial data’ (as may be defined in the Bill before its enactment); resultantly, the processing of such data for credit-scoring would not require compliance with notice-and-consent. The non-consensual processing of such forms of data (as opposed to all personal data), which carry logically intuitive correlations with creditworthiness, would arguably correspond more closely to the individual’s reasonable expectations in the context of credit-scoring. An appropriate delineation of this nature would provide transparency in processing and also minimise the scope for fallacious and/or discriminatory correlations between data-inputs and creditworthiness.

Finally, as a third alternative, Clause 8(8)(d) may be modified to empower a specialised regulatory authority to notify credit-scoring as a purpose for non-consensual processing of data, but within certain limitations. Such limitations could relate to the processing of certain forms of personal data (as suggested above) and/or to certain kinds of entities specifically authorised to undertake such processing. This position would resemble proposals under previous versions of India’s draft data protection law, i.e. the Personal Data Protection Bill, 2019 and the Personal Data Protection Bill, 2018 – both draft legislations required any exemption from notice-and-consent to be notified by regulations. Further, such notification was required to be preceded by a consideration of, inter alia, individuals’ reasonable expectations in the context of the processing. In addition to this balancing exercise, the Bill may be modified to require the regulatory authority to consult with the RBI, before notifying any exemption for credit-scoring. Such consultation would facilitate harmonisation between data protection law and sectoral regulation surrounding financial data.

*For our complete comments on the Digital Personal Data Protection Bill, 2022, please click here: https://bit.ly/3WBdzXg

Introduction to AI Bias

By Nidhi Singh, CCG

Note: This article is adapted from an op-ed published in the Hindu Business Line which can be accessed here

A recent report by Nasscom discusses the integrated adoption of artificial intelligence (AI) and a data utilisation strategy, which it estimates could add USD 500 billion to the Indian economy. In June 2022, MeitY published the Draft National Data Governance Framework Policy, which aims to enhance the access, quality, and use of non-personal data “in line with the current emerging technology needs of the decade”. This is another step in the worldwide push by governments to adopt machine learning and AI models, trained on individuals’ data, into the sphere of governance.

While India is still considering the legislative and regulatory safeguards that must accompany the use of this data in AI systems, many countries have already begun deploying such systems, sometimes with serious consequences. In January 2021, for example, the Dutch government resigned en masse in response to a child welfare fraud scandal involving the alleged misuse of benefit schemes.

The Dutch tax authorities used a ‘self-learning’ algorithm to assess benefit claims and classify them according to their potential risk for fraud. The algorithm flagged certain applications as being at higher risk of fraud, and these were then forwarded to an official for manual scrutiny. However, while officials were told that the flagged applications had a higher likelihood of containing false claims, they were not told why the system had flagged them as high-risk.

Following an overly strict interpretation of government policy on identifying fraudulent claims, the AI system used by the tax authorities began to flag every data inconsistency, including actions like failing to sign a page of the form, as an act of fraud. Additionally, the Dutch government’s zero-tolerance policy on tax fraud meant that erroneously flagged families had to return benefits not only from the period in which the fraud was alleged to have been committed but from up to five years before that as well. Finally, the algorithm also learnt to systematically identify claims filed by parents with dual citizenship as high-risk, and these were subsequently marked as potentially fraudulent. As a result, a disproportionately high number of the people labelled as fraudsters by the algorithm had an immigrant background.

What makes the situation more complicated is that it is difficult to narrow down a single factor that caused the ‘self-learning’ algorithm to arrive at its biased output, owing to the ‘black box effect’ and the lack of transparency about how an AI system makes its decisions. The biased output delivered by the AI system is an example of AI bias.

The problems of AI Bias

AI bias is said to occur when there is an anomaly in the output produced by a machine learning algorithm, caused by prejudiced assumptions made during the algorithm’s development or by prejudices in the training data. Concerns about potential AI bias in deployed algorithms are not new: for almost a decade, researchers, journalists, activists, and even tech workers have repeatedly warned about the consequences of bias in AI. The process of creating a machine learning algorithm is based on ‘training’: the computer is exposed to vast amounts of data, which it uses as a sample to learn how to make judgements or predictions. For example, an algorithm designed to judge a beauty contest would be trained on pictures and data from past beauty pageants. AI systems use algorithms made by human researchers, and if they are trained on flawed data sets, they may end up hardcoding bias into the system. In the beauty contest example, the algorithm failed its objective: it eventually chose winners based solely on skin colour, excluding contestants who were not light-skinned.

This brings us to one of the most fundamental problems in AI systems: ‘garbage in, garbage out’. AI systems depend heavily on accurate, clean, and well-labelled training data to learn from and, in turn, produce accurate and functional results. A vast majority of the time in deploying AI systems is spent preparing the data, through processes like data collection, cleaning, preparation, and labelling, some of which are very human-intensive. Additionally, AI systems are usually designed and operationalised by teams that tend to be homogenous in composition; that is to say, they are generally composed of white men.

There are several factors that make AI bias hard to combat. One of the main problems is that the very foundations of these systems are often flawed. Recent research has shown that ten key data sets frequently used for machine learning and data science, including ImageNet (a large dataset of annotated photographs intended for use as training data), are in fact riddled with errors. These errors can be traced to the quality of the data the system was trained on or to biases introduced by the labellers themselves, such as labelling more men as doctors and more women as nurses in pictures.
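As a purely illustrative sketch of how such labelling skew can be surfaced before training, one can simply tabulate how labels co-occur with a sensitive attribute in the annotated data. The records below are hypothetical.

```python
# Illustrative sketch only: auditing how occupation labels co-occur with a
# sensitive attribute in a (hypothetical) annotated training set. Heavy skew
# here suggests the labels, not the world, may teach a model a stereotype.
from collections import Counter

# Hypothetical annotation records: (perceived_gender, occupation_label)
annotations = [
    ("man", "doctor"), ("man", "doctor"), ("man", "doctor"),
    ("woman", "nurse"), ("woman", "nurse"), ("woman", "nurse"),
    ("man", "nurse"), ("woman", "doctor"),
]

counts = Counter(annotations)
for (gender, label), n in sorted(counts.items()):
    print(f"{gender:>5} labelled as {label:>6}: {n}")
# The printout makes any imbalance visible, e.g. far more men labelled
# "doctor" and far more women labelled "nurse" than the data should justify.
```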

How do we fix bias in AI systems?

This is a question that many researchers, technologists, and activists are trying to answer. Some of the more common approaches include inclusivity, both in data collection and in the design of the system. There have also been calls for increased transparency and explainability, which would allow people to understand how AI systems make their decisions. For example, in the case of the Dutch algorithm, officials received an assessment stating that an application was likely to be fraudulent, but no reasons why the algorithm had detected fraud. If the officials in charge of the second round of review had had more visibility into what the system flagged as an error, including missed signatures or dual citizenship, they might have been able to mitigate the damage.

One possible mechanism to address the problem of bias is the ‘blind taste test’. The mechanism checks whether the results produced by an AI system depend on a specific variable such as sex, race, economic status, or sexual orientation. Simply put, it tries to ensure that protected characteristics like gender, skin colour, or race do not play a role in decision-making.

The mechanism involves testing the algorithm twice: the first time with the variable, such as race, and the second time without it. In the first run, the model is trained on all the variables, including race; in the second, on all variables excluding race. If the model returns the same results, the AI system can be said to make predictions that are blind to that factor. But if the predictions change with the inclusion of a variable, such as dual-citizenship status in the case of the Dutch algorithm, or skin colour in the beauty contest, the AI system would have to be investigated for bias. This is just one potential mitigation test; states are also experimenting with other technical interventions, such as the use of synthetic data to create less biased data sets.
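Below is a minimal sketch of this test on synthetic data, assuming a simple logistic-regression model; a real audit would use the system’s actual model and training set, and the “protected” flag here merely stands in for an attribute like dual citizenship.

```python
# Illustrative sketch only of the "blind taste test": train the same model
# twice, with and without a protected attribute, and compare predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(0.0, 1.0, n)      # legitimate feature
protected = rng.integers(0, 2, n)     # stand-in for e.g. a dual-citizenship flag
# In this synthetic world the outcome depends only on income.
y = (income + rng.normal(0.0, 0.5, n) > 0).astype(int)

X_with = np.column_stack([income, protected])   # model sees the attribute
X_without = income.reshape(-1, 1)               # model is "blind" to it

preds_with = LogisticRegression(max_iter=1000).fit(X_with, y).predict(X_with)
preds_without = LogisticRegression(max_iter=1000).fit(X_without, y).predict(X_without)

changed = (preds_with != preds_without).mean()
print(f"Predictions that change when the attribute is included: {changed:.1%}")
# Near 0% suggests the model is blind to the attribute; a material share of
# changed predictions would be a flag for a bias investigation.
```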

Where do we go from here 

The Dutch case is merely one example in a long line of instances that warrant higher transparency and accountability requirements for the deployment of AI systems. Many approaches have been, and are still being, developed to counter bias in AI systems. The crux, however, is that it may be impossible to fully eradicate bias from AI systems, because the biases of the humans who develop and engineer them are bound to be reflected in the technology. The effects of these biases can be devastating, depending on the context and the scale at which they operate.

While new and emerging technical measures can serve as stopgaps, comprehensively dealing with bias in AI systems requires addressing bias in those who design and operationalise them. In the interim, regulators and states must step up to carefully scrutinise, regulate, or in some cases halt the use of AI systems that provide essential services to people. One example of such regulation is the framing and adoption of risk-based assessment frameworks, wherein the regulatory requirements for an AI system depend on the level of risk it poses to individuals. This could include permanently banning the deployment of AI systems in areas where they may pose a threat to people’s safety, livelihood, or rights, such as credit-scoring systems or systems that could manipulate human behaviour. For AI systems assessed as lower-risk, such as chatbots used for customer service, a lower threshold of safeguards may be prescribed. The debate on whether AI systems can ever truly be free from bias may never be settled; what we can say is that the harms these biases cause can be mitigated with proper regulatory and technical measures.

On the Exclusion of Regulatory Sandbox Provisions from Data Protection Law

On November 18, 2022, the Ministry of Electronics & Information Technology (‘MeitY’) released the new Digital Personal Data Protection Bill, 2022 (‘2022 Bill’) as the governing legislation for personal data. Prior to the 2022 Bill, the Personal Data Protection Bill, 2019 (‘2019 Bill’) was the proposed legislation to govern personal data and protect data privacy. The 2019 Bill was withdrawn during the Monsoon session of Parliament in August 2022, after receiving significant amendments and recommendations from the Joint Committee of the Parliament in 2021.

The 2022 Bill has removed several provisions from the 2019 Bill, one of which pertains to the creation of a regulatory sandbox for encouraging innovation in artificial intelligence, machine-learning, or any other emerging technologies (under Clause 40 of the 2019 Bill). While some experts have criticised the 2022 Bill for not retaining this provision, I contend that the removal of the regulatory sandbox provision is a positive aspect of the 2022 Bill. In general, regulatory sandbox provisions should not be incorporated into data protection laws for the following reasons: 

  1. The limited scope and purpose of data protection legislation

Data protection laws are drafted with the specific purpose of protecting personal data of individuals, creating a framework to process personal data, and laying down specific rights and responsibilities for data fiduciaries/processors. Although firms participating in a sandbox may process personal data, the functions of sandboxes are more expansive than regulating personal data processing. The primary purpose of regulatory sandboxes is to create isolated, controlled environments for the live testing, development, and restricted time-bound release of innovations. Sandboxes are also set-up to help regulatory authorities monitor and form adaptive regulations for these innovative technologies, as they are either partially or completely outside the purview of existing legislations.

Since the scope of regulatory sandboxes is broader than that of data protection legislations, it is insufficient for a sandbox provision to be included in a data protection legislation, with limited compliances and exemptions from the provisions of such legislation. A separate legislation is required to be drafted to regulate such emerging technologies. 

The regulatory sandbox framework under the European Union’s Proposed Artificial Intelligence Act, 2021 (‘EU AI Act’), as well as the regulatory sandboxes established by SEBI, RBI, and other authorities in India demonstrate this clearly. These frameworks are established separately from existing legislations, and provide a specific scope and purpose for the sandbox in a clear and detailed manner. 

  2. The limited expertise and conflicting mandate of a data protection authority

Data protection authorities (‘DPAs’) are appointed to protect the rights of data principals. They lack the necessary expertise over emerging technologies to also function as the supervisory authority for a regulatory sandbox. Hence, a regulatory sandbox is required to be monitored and supervised by a separate authority which has expertise over the specific areas for which the sandbox is created.

Moreover, it is not sufficient to merely constitute a separate authority for sandboxes within a data protection law. Since the supervisory authority for sandboxes is required to privilege innovation and the development of technologies over the strict protection of personal data, its functions will directly conflict with those of the DPA. Therefore, the regulatory sandbox framework must be incorporated in a separate legislation altogether.

  3. Sector-specific compliance provisions for regulatory sandboxes

The desire to regulate artificial intelligence and emerging technologies under a data protection legislation is understandable, as these technologies process personal data. However, it is to be noted that AI systems and other emerging technologies also process non-personal data and anonymised data. 

A regulatory sandbox for these technologies is thus not only subject to the principles of data protection law, but is in fact a nexus for information technology law, anti-discrimination law, consumer protection law, e-commerce law, and other applicable laws. Accordingly, the framework for the regulatory sandbox cannot be placed within a data protection legislation or its subordinate rules. It has to be regulated under a separate framework which ensures all the relevant laws are taken into account, and that the safeguards are not limited to personal data safeguards.

Since the exemptions, mitigation of risks, and compliance for the different emerging technologies are to be specifically tailored to those technologies (across various laws), the regulatory mechanism for the same cannot be provided in a data protection legislation. 

Conclusion

The above arguments establish the basis for not incorporating sandbox provisions within a data protection legislation. Regulatory sandboxes, based on their framework alone, do not belong in a data protection legislation. The innovation-centric mandate of the sandbox framework and the functions of the supervisory authority conflict with the core principles of data protection law and the primary functions of DPAs. The limited scope of data protection law, coupled with the lack of expertise of DPAs decisively establish the incongruence between the regulatory sandbox provision and data protection legislations.

Commentators who critique the exclusion of the sandbox provision from the 2022 Bill are right to be concerned about rapid developments in artificial intelligence and other emerging technologies. But it would be far more prudent to recommend that the Central government set up an expert committee to analyse these developments and prepare a separate framework for the sector. Such a framework can comprehensively account for the various mechanisms (beyond data protection) required to govern these emerging technologies.

Working paper release: ‘Tackling the dissemination and redistribution of NCII’

Aishwarya Girdhar & Vasudev Devadasan

Today, the Centre for Communication Governance (CCG) is happy to release a working paper titled ‘Tackling the dissemination and redistribution of NCII’ (accessible here). The dissemination and redistribution of non-consensual intimate images (“NCII”) is an issue that has plagued platforms, courts, and lawmakers in recent years. The difficulty of restricting NCII is particularly acute on ‘rogue’ websites that are unresponsive to user complaints. In India, this has prompted victims to petition courts to block webpages hosting their NCII. However, even when courts do block these webpages, the same NCII content may be re-uploaded at different locations.

The goal of our proposed solution is to: (i) reduce the time, cost, and effort associated with victims having to go to court to have their NCII on ‘rogue’ websites blocked; (ii) ensure victims do not have to re-approach courts for the blocking of redistributed NCII; and (iii) provide administrative, legal, and social support to victims. 

Our working paper proposes the creation of an independent body (“IB”) to: maintain a hash database of known NCII content; liaise with government departments to ensure the blocking of webpages hosting NCII; potentially crawl targeted areas of the web to detect known NCII content; and work with victims to increase the awareness of NCII related harms and provide administrative and legal support. Under our proposed solution, victims would be able to simply submit URLs hosting their NCII to a centralised portal maintained by the IB. The IB would then vet the victim’s complaint, coordinate with government departments to block the URL, and eventually hash and add the content to a database to combat redistribution. 
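As a rough sketch of the hash-and-match step under stated assumptions: the snippet below uses an exact cryptographic hash (SHA-256), which only matches byte-identical copies, whereas an actual deployment would more plausibly rely on a perceptual hash so that re-encoded or resized copies still match. The function names and workflow are illustrative, not a specification of the proposal.

```python
# Illustrative sketch only of the hash-and-match workflow described above.
# SHA-256 catches only byte-identical copies; a real system would likely use
# a perceptual hash to match re-encoded, resized, or cropped copies.
import hashlib

ncii_hashes: set[str] = set()  # the IB's vetted hash database

def register_vetted_content(content: bytes) -> None:
    """Add content to the database after human reviewers confirm it is NCII."""
    ncii_hashes.add(hashlib.sha256(content).hexdigest())

def is_known_ncii(content: bytes) -> bool:
    """Check content found during a targeted crawl against the database."""
    return hashlib.sha256(content).hexdigest() in ncii_hashes

# After a complaint is vetted by human reviewers:
register_vetted_content(b"<image bytes from a vetted complaint>")
# Later, during a targeted crawl of a 'rogue' website:
print(is_known_ncii(b"<image bytes from a vetted complaint>"))  # True
print(is_known_ncii(b"<other image bytes>"))                    # False
```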

This will significantly reduce the time, money, and effort exerted by victims to have their NCII blocked, whether at the stage of dissemination or redistribution. The issue of redistribution can also potentially be tackled through a targeted, proactive crawl of websites by the IB for known NCII, pursuant to a risk impact assessment. Our solution envisages several safeguards to ensure that the database is used only for NCII, and that lawful content is not added to it. Chief amongst these are the use of multiple human reviewers to vet the complaints made by victims and a public interest exemption where free speech and privacy interests may need to be balanced. 
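
The working paper does not prescribe a particular hashing technology, but the basic hash-and-match workflow described above can be illustrated in a few lines of code. The sketch below is purely illustrative and rests on our own assumptions: exact-match SHA-256 hashing and an in-memory SQLite table stand in for what a real deployment would more likely use, namely a perceptual hash (such as PDQ or PhotoDNA, so that re-encoded or resized copies still match) backed by hardened storage and strict access controls.

```python
import hashlib
import sqlite3

# Illustrative sketch only: exact-match SHA-256 digests in an in-memory
# SQLite table. Real systems typically use perceptual hashes so that
# altered copies of known content still match.

def hash_content(data: bytes) -> str:
    """Return a digest identifying one piece of content."""
    return hashlib.sha256(data).hexdigest()

def add_vetted_content(db: sqlite3.Connection, data: bytes) -> None:
    """Add content to the database only after human reviewers have vetted it."""
    db.execute("INSERT OR IGNORE INTO ncii_hashes (digest) VALUES (?)",
               (hash_content(data),))
    db.commit()

def is_known_ncii(db: sqlite3.Connection, data: bytes) -> bool:
    """Check reported or crawled content against the database of known hashes."""
    row = db.execute("SELECT 1 FROM ncii_hashes WHERE digest = ?",
                     (hash_content(data),)).fetchone()
    return row is not None

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ncii_hashes (digest TEXT PRIMARY KEY)")
add_vetted_content(db, b"vetted-content-bytes")    # added after human review
print(is_known_ncii(db, b"vetted-content-bytes"))  # True: re-upload detected
print(is_known_ncii(db, b"other-content-bytes"))   # False: unrelated content
```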

A full summary of our recommendations is as follows:

  • Efforts should be made towards setting up an independently maintained hash database for NCII content. 
  • The hash database should be maintained by the IB, and it must undertake stringent vetting processes to ensure that only NCII content is added to the database.
  • Individuals and vetted technology platforms should be able to submit NCII content for inclusion into the database; NCII content removed pursuant to a court order can also be included in the database.
  • The IB may be provided with a mandate to proactively crawl the web in a targeted manner to detect copies of identified NCII content, pursuant to a risk impact assessment (a toy sketch of such a crawl follows this list). This will help shift the burden of identifying copies of known NCII away from victims. 
  • The IB can supply the Department of Telecommunications (DoT) with URLs hosting known NCII content, and work with victims to alleviate the burden of locating and identifying repeat instances of NCII content. 
  • The IB should be able to work with organisations to provide social, legal, and administrative support to victims of NCII; it would also be able to coordinate with law enforcement and regulatory agencies in facilitating the removal of NCII.
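
On the same illustrative assumptions as the earlier sketch, the targeted proactive crawl contemplated above could reuse such a digest set: the IB fetches files hosted at flagged URLs and checks them against the database of known hashes, shifting the burden of locating re-uploads away from victims. Again, this is a toy sketch rather than the paper's specification.

```python
import hashlib
import urllib.request

# Toy sketch of a targeted crawl against a hash database (our assumption,
# not the working paper's specification). Digests of vetted NCII content
# would be loaded from the IB's database.
known_digests: set[str] = set()

def check_url(url: str) -> bool:
    """Fetch a file and report whether its digest matches known NCII."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        digest = hashlib.sha256(resp.read()).hexdigest()
    return digest in known_digests

def crawl(candidate_urls: list[str]) -> list[str]:
    """Return the flagged URLs whose content matches the hash database."""
    matches = []
    for url in candidate_urls:
        try:
            if check_url(url):
                matches.append(url)
        except OSError:
            # Unreachable 'rogue' sites are skipped, never treated as matches.
            continue
    return matches
```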

Our working paper draws on recent industry efforts to curb NCII, as well as the current multi-stakeholder approach used to combat child sexual abuse material online. However, our regulatory solution is specifically targeted at restricting the dissemination and redistribution of NCII on ‘rogue’ websites that are unresponsive to user complaints. We welcome inputs from all stakeholders as we work towards finalising our proposed solution. Please send comments and suggestions to <ccg@nludelhi.ac.in>.

Link to Working Paper.

AI Law and Policy Diploma Course

The Centre for Communication Governance at National Law University Delhi is excited to announce the first edition of the AI Law and Policy Diploma Course – an eight-month online diploma course curated and delivered by expert academics and researchers at CCG and NLU Delhi. The Course is an exciting opportunity to learn the legal, public policy, socio-political and economic contours of AI systems and their implications for our society and its governance. The course provides students the opportunity to interact with and learn from renowned policy practitioners and experienced professionals in the domain of technology law and policy. The course will commence in October 2022 and end in May 2023. Registration for the course is now open and will close on 3rd October 2022 at 11:59 PM IST. 

About the Centre 

The Centre for Communication Governance at National Law University Delhi (CCG) was established in 2013 to ensure that Indian legal education establishments engage more meaningfully with information technology law and policy, and to contribute to improved governance and policy making. CCG is the only academic research centre dedicated to undertaking rigorous academic research on information law and policy in India, and in a short span of time has become a leading institution in Asia. 

CCG has built an extensive network and works with a range of international academic institutions and policy organisations. These include the United Nations Development Programme, Law Commission of India, NITI Aayog, various Indian government ministries and regulators, International Telecommunication Union, UNGA WSIS, Paris Call, Berkman Klein Center for Internet and Society at Harvard University, the Center for Internet and Society at Stanford University, Columbia University’s Global Freedom of Expression and Information Jurisprudence Project, the Hans Bredow Institute at the University of Hamburg, the Programme in Comparative Media Law and Policy at the University of Oxford, the Annenberg School for Communication at the University of Pennsylvania, and the Singapore Management University’s Centre for AI and Data Governance.

About the Course 

The Course is designed to ensure that students engage in a nuanced manner with the legal, public policy, socio-political and economic contours of AI systems and their implications for our society and its governance. 

The course will engage with key themes in the interaction of artificial intelligence with law and policy, including the implications of AI for our society, emerging use cases of AI and related opportunities and challenges, domestic and global approaches to AI governance, ethics in AI, the application of data protection principles to AI systems, and AI discrimination and bias. Students will be exposed to proposed legislation and policy frameworks on artificial intelligence in India and globally, international policy developments, current uses of AI technology, and emerging challenges.

This course will equip students with the understanding and knowledge required to effectively navigate the rapidly evolving space of AI law and policy and to assess contemporary developments.

Course objectives and learning outcomes 

The course’s objectives and learning outcomes are as follows:

  1. Students will be introduced to AI technology and will become cognisant of its opportunities and challenges, and its potential impact on society, individuals and the law.
  2. Students will receive an overview of the interactions between AI and law, and delve into the current domestic and international frameworks which seek to govern AI technology.
  3. Students will be equipped to navigate the interaction between AI and ethics, and consider the ethical principles within which the use of AI technologies is being situated. They will be provided with a breakdown of the ethical principles which have emerged surrounding the use of AI.  
  4. Students will become familiar with the regional and international policy processes which surround AI technology and the role of intergovernmental organisations in AI governance.
  5. Students will be equipped with knowledge of data protection principles and their interaction with AI systems. 
  6. Students will delve into problems surrounding AI discrimination, explore how bias creeps into AI systems at various stages, and consider the implications this may have for individuals and our society. 
  7. Students will become conversant with global practices, and with governance and regulatory frameworks around AI, focusing on multilateral processes currently underway as well as specific domestic approaches. 
  8. A specialised module on AI in India will focus on the regulatory and governance framework around the deployment of AI systems.
  9. Students will become familiar with novel uses of AI in India, including the use of AI systems for facial recognition technology (FRT) as well as in judicial systems.
  10. Students will explore emerging applications and use cases of AI technologies, such as facial recognition, emotion recognition, predictive policing, AI use in workplaces, and AI use in healthcare, and consider how these may impact individuals and society. 

For the detailed course outline, please visit here.

Eligibility 

  • Lawyers/advocates; professionals involved in information technology; professionals in the corporate, industry, government, media, and civil society sectors; technology policy professionals; academicians and research scholars interested in the field of technology and information technology law and policy; and undergraduates from any discipline are well positioned to apply for the course.
  • Candidates having a 10+2 degree from any recognized board of education, with a minimum of 55% marks, are eligible to apply for this course.
  • There shall be no restriction as to age, nationality, gender, or employment status in the admission process.

Time Commitment

We recommend students set aside an average of 4-8 hours per week for attending the scheduled monthly live online sessions on weekends and for completing the mandatory coursework (including viewing recorded lectures and any assessment exercises) and prescribed readings.

Seats Available 

A total of 50 seats are available for the course. 

Registration 

Interested candidates may register for the course through the online link provided here.

Deadline

Last date to apply: 3rd October 2022 (11:59pm IST)

Course Fee 

INR 90,000/- (all inclusive and non-refundable) to be paid at the time of registration. 

Contact us: For inquiries, please contact us at ccgcourse@nludelhi.ac.in with the subject line ‘CCG NLUD Diploma Course on AI Law and Policy’. Emails sent without this subject line may go unnoticed.

Call for Applications for the Positions of (i) Community and Engagement Associates, (ii) Community and Engagement Officers, (iii) Strategic Development and Partnerships Associates, and (iv) Strategic Development and Partnerships Officers

The National Law University Delhi (‘University’), through its Centre for Communication Governance (‘CCG’/‘Centre’) is inviting applications for the posts of (i) Community and Engagement Associates and Community and Engagement Officers and (ii) Strategic Development and Partnership Associates and Strategic Development and Partnership Officers, to work at the Centre. 

About the Centre for Communication Governance

The Centre for Communication Governance at National Law University Delhi was established in 2013 to ensure that Indian legal education establishments engage more meaningfully with information technology law and policy, and to contribute to improved governance and policy making. CCG is the only academic research centre dedicated to working on information technology law and policy in India, and in a short span of time has become a leading institution in the sector. 

Through its Technology and Society team, CCG seeks to embed constitutional values and good governance within information technology law and policy and examine the evolution of existing rights frameworks to accommodate new media and emerging technology. It seeks to support the development of the right to freedom of speech, right to dignity and equality, and the right to privacy in the digital age, through rigorous academic research, policy intervention, and capacity building. The team’s ongoing work is on subjects such as privacy and data governance/protection, regulation of emerging technologies like artificial intelligence, blockchain, 5G and IoT, platform regulation, misinformation, intermediary liability, and digital access and inclusion.

This complements the work of the Technology and National Security team at CCG, which focuses on issues that arise at the intersection of technology and national security law, including cyber security, information warfare, and the interplay of international legal norms with domestic regulation. The team’s work aims to build a better understanding of national security issues and to identify legal and policy solutions that balance legitimate security interests and national security choices with constitutional rights and the rule of law, in the context of technology law and policy. The team undertakes analysis of international law as well as domestic laws and policies that have implications for national security. Our goal is to develop detail-oriented, principled and pragmatic recommendations for policy makers on national security issues faced by India, with an emphasis on cyber security and cyber conflict. 

The work at CCG is designed to build competence and raise the quality of discourse in research and policy around issues concerning constitutional rights and rule of law in the digital age, cybersecurity and global internet governance. The academic research and policy output is intended to catalyse effective research-led policy making and informed public debate around issues in technology, internet governance and information technology law and policy.

Role

CCG is a young, continuously evolving organisation and the members of the Centre are expected to be active participants in building a collaborative, merit-led institution and a lasting community of highly motivated young professionals. If selected, you will contribute to the institution’s growth and development by playing a key role in advancing our community engagement/strategic development and partnerships. You will be part of a dynamic team of young researchers, policy analysts and lawyers. Please note that our interview panel has the discretion to determine which role would be most suitable for each applicant based on their qualifications and experience. 

We are inviting applications for the following roles:

(i) Community and Engagement Associates (2 positions)

(ii) Community and Engagement Officers (2 positions)

(iii) Strategic Development and Partnership Associates (2 positions)

(iv) Strategic Development and Partnership Officers (2 positions)

i. Community and Engagement Associates and Community and Engagement Officers

Some of the key roles and responsibilities of the Community & Engagement Associates and Community & Engagement Officers may include:

  • Developing the community and engagement strategy and supporting the team in implementing it. The candidate will have to work both independently and collaboratively with the team leadership, researchers and various other members of the team.
  • Building engagement with key stakeholders and community members of the Digital Society ecosystem at the domestic and international level.
  • Conceptualising and implementing events, workshops, roundtables, etc. to engage with stakeholders in the ecosystem.
  • Creating relevant content in the form of posters, social media posts, and other allied material for the various events conducted by CCG. 
  • Strategising and creating visual and written content for newsletters, email communications and other modes of engagement.
  • Strategising and creating internal and external communication material including relevant posts, images and posters, and other allied content for social media dissemination, including Twitter, Instagram, LinkedIn, and Facebook.
  • Strategising and creating visual representations, infographics and other graphical representations to make research and analysis available in an accessible manner.
  • Managing social media accounts and maintaining a social media calendar and database of disseminated content; working on social media campaigns using tools like Hootsuite, OneUp, etc.; and overseeing and managing websites and blogs.
  • Editorial design and layout for reports, presentations, and other written outputs.
  • Aiding in conceptualising, recording and editing audio, podcasts, and/or video material. 
  • Engaging with CCG’s media networks and other key stakeholders.
  • Identifying opportunities for media engagement for the dissemination of CCG’s work.
  • Maintaining records of media and social media coverage and collecting data for analytics and metrics.
  • Strategising, editing, developing, managing and implementing content for the CCG website, CCG Blog, etc. 

This is an indicative list of some of the responsibilities the person will be involved in and does not cover all the activities one might be engaged with. We welcome applications from candidates with an interest in any of the areas that CCG broadly works in.

ii. Strategic Development and Partnership Associates and Strategic Development and Partnership Officers

Some of the key roles and responsibilities of the Strategic Development and Partnership Associates and Strategic Development and Partnership Officers may include:

  • Identifying potential funders and partners (domestic and international) to develop CCG’s work and engaging with them.
  • Developing funding opportunities and networks for CCG programs and research.
  • Drafting grant proposals, presentations and applications in coordination with CCG leadership and researchers and spearheading all phases of the grant process (pre-award, award and post-award phase).
  • Ensuring timely funder reporting, project completion reports, and preparation of project narratives.
  • Proactively managing, building and developing new and existing partnerships (domestic and international) portfolios in consultation with senior leadership at CCG.
  • Building engagement with key stakeholders and community members of the Digital Society ecosystem at the domestic and international level across academia, media, civil society, industry, regulatory bodies, other experts, members of parliament, senior government officers, judges, senior lawyers, scholars, and journalists. We are looking for someone who is very constructive and is not only able to help our community get the most out of CCG’s work but is also able to connect people with each other, playing an enabling, generative role that encourages and supports the ecosystem.
  • Identifying opportunities for CCG to present and highlight its programs and research and working towards applying for and implementing these opportunities.
  • Making use of effective programme/project management tools within the team (leadership, research, admin and community and engagement) to ensure strategic development of CCG’s goals.
  • Identifying opportunities for capacity building for the CCG team and organising and implementing relevant activities.
  • Conceptualising and implementing events, workshops, roundtables, etc. to engage with stakeholders in the ecosystem.
  • Strategising, developing, co-ordinating, organising and implementing events, fellowships, moots and courses such as Summer School, Courses (Certificate Course, etc.), Workshops, DIGITAL Fellowship, Oxford Price Media South Asia Rounds, and Capacity Building events.
  • Strategising, editing, developing, managing and implementing content for the CCG website, CCG Blog, etc.
  • Strategising and supporting the development of engagement and outreach modes such as social media, podcasts, newsletters, events, meetings, etc.
  • Developing the community and engagement strategy and supporting the team in implementing it. 
  • Engaging with CCG’s media networks and other key stakeholders and identifying opportunities for media engagement for the dissemination of CCG’s work.
  • Maintaining records of media coverage and collecting data for analytics and metrics.
  • Developing and implementing CCG’s DEI initiatives and programs.

This is an indicative list of some of the responsibilities the person will be involved in and does not cover all the activities one might be engaged with. We welcome applications from candidates with an interest in any of the areas that CCG broadly works in.

Qualifications for the Roles

  • The Centre welcomes applications from candidates with degrees in design, media and communication, law, public policy, development studies, BBA, journalism, English and the social sciences, or other relevant/applicable fields.
  • For the Associate role, preference may be given to candidates with an advanced degree in related fields or 2+ years of post-qualification experience (PQE) and previous experience of working on related issues.
  • For the Officer role, preference may be given to candidates with an advanced degree in related fields or 4+ years of PQE and previous experience of working on related issues.
  • Candidates must have a demonstrable capacity for high-quality, independent work.
  • Strong communication, digital and writing/presentation skills are important.
  • Interest and previous experience in information technology law and policy is preferred. 
  • A Master’s degree from a highly regarded programme might count towards work experience.

However, the length of your resume is less important than the other qualities we are looking for. As a young, rapidly-expanding organisation, CCG anticipates that all members of the Centre will have to manage large burdens of substantive as well as institutional work. We are looking for highly motivated candidates with a deep commitment to building policies that support and enable constitutional values and democratic discourse. We are looking for people who see good research and policy designs as a way to build a better and more equitable world. At CCG, we aim high, and we demand a lot from each other in the workplace.

We look for individuals with work-style traits that include the ability to work both collaboratively and independently in a fast-paced environment, while being empathetic towards colleagues. We aim to create high-quality research outputs, so it is vital that you be a good team player who is kind and respectful to colleagues. At the same time, you should be self-motivated, proactive and creative, and capable of independently driving your work when required. We maintain the highest ethical standards in our work and workplace, and look for people who manage all of this while being as kind and generous as possible to colleagues, collaborators and everyone else within our networks. A sense of humour will be most welcome. Even if you do not necessarily fit the requirements outlined but bring to us the other qualities we look for, we will be glad to hear from you. 

Remuneration and Location

The remuneration will be competitive, and will be commensurate with qualifications and experience. Where the candidate demonstrates exceptional competence in the opinion of the selection panel, there is a possibility for greater remuneration. These are full time positions based out of Delhi. 

Application Process

Interested candidates may fill in the application form provided by 05:00 pm IST on June 20, 2022. Please note that applications will only be accepted via the Google Form. In case of any doubts, please contact us at ccg@nludelhi.ac.in with the subject line “Application for Community and Engagement/Strategic Development and Partnerships”. We encourage applicants to apply at the earliest.

A complete application form will require the following: 

  • A signed and completed Application Form, available here.
  • The form requires a Statement of Motivation of a maximum of 800 words. The Statement of Motivation should ideally engage with the following aspects: 

(i) Why do you wish to work with CCG? 

(ii) For those applying for the role of Community and Engagement Associate/Officer: What will be your likely contribution to our work? How would you develop CCG’s community and engagement with stakeholders, the ecosystem and use CCG’s work to add value to the public discourse? 

Or

For those applying for the role of Strategic Development and Partnership Associate/Officer: What will be your likely contribution to our work? How would you undertake strategic development of CCG’s work, fundraising for CCG’s research and programs and build partnerships? 

(iii) What past experiences and skills optimally position you to do so? 

(iv) How does working with CCG connect with your plans for the future?

  • A sample or portfolio of your previous work, or a writing sample, as relevant. This step is optional if the candidate does not have a relevant sample; however, we encourage candidates to submit any relevant samples they may have of their work. If the 100 MB limit for the upload of the sample is insufficient, please upload an illustrative sample on the Google Form, and a more detailed version can be shared at ccg@nludelhi.ac.in with the subject line “Call for Strategic Communication and Engagement/ Development and Partnership Associates/Officers – Portfolio”.
  • Please combine your documents into a single PDF file labelled “Your name – CCG” and upload it via the link provided in the application form. The single PDF file should contain: (1) a Curriculum Vitae (maximum two pages); (2) a sample or portfolio of your previous work or a writing sample, as relevant; and (3) the Statement of Motivation.
  • Applicants should note that they cannot save their work on the application form and return to it later, so they may find it advisable to prepare their Statement of Motivation and merge relevant documents into a PDF document beforehand.
  • Names and contact details of two referees who can be contacted for an oral or a short written reference (to be filled in the form).

Since we require applicants to upload their CV and writing sample, accessing the form requires a Google (Gmail) login. Applicants who do not have a Google (Gmail) account are encouraged to create one, following the quick and simple steps here.

Note

  • National Law University Delhi is an equal opportunity employer.
  • National Law University Delhi reserves the right to conduct telephonic or video interviews. National Law University Delhi is unable to cover the costs of travel, accommodation, etc. for any interviews. 
  • National Law University Delhi reserves the right not to fill these positions.
  • Our selection panel has the discretion to determine which profile/role would be most suitable for each applicant based on their experience, domain understanding and qualifications.
  • The roles, responsibilities and activities enumerated here are indicative and may encompass additional duties related to these.
  • These are contractual positions and shall be paid under the grants received by the Centre for Communication Governance at National Law University Delhi.
  • We will contact only shortlisted candidates.