Examining ‘Deemed Consent’ for Credit-Scoring under India’s Draft Data Protection Law

By Shobhit Shukla

On November 22, 2022, the Ministry of Electronics and Information Technology released India’s draft data protection law, the Digital Personal Data Protection Bill, 2022 (‘Bill’).* The Bill sets out certain situations in which seeking an individual’s consent for processing of their personal data is “impracticable or inadvisable due to pressing concerns”. In such situations, the individual’s consent is assumed; further, they are not required to be notified of such processing. One such situation is processing in the ‘public interest’. The Bill also illustrates certain public-interest purposes and, notably, includes ‘credit-scoring’ as a purpose in Clause 8(8)(d). Put simply, the Bill allows an individual’s personal data to be processed non-consensually and without any notice to them, where such processing is for credit-scoring.

Evolution of credit-scoring in India

Credit-scoring is a process by which a lender (or its agent) assesses an individual’s creditworthiness, i.e., their notional capacity to repay their prospective debt, as represented by a numerical credit score. Until recently, lenders in India relied largely on credit scores generated by credit information companies (‘CICs’), licensed by the Reserve Bank of India (‘RBI’) under the Credit Information Companies (Regulation) Act, 2005 (‘CIC Act’). CICs collect and process ‘credit information’, as defined under the CIC Act, to generate such scores. Such information, for an individual, chiefly comprises the details of their outstanding loans and history of repayment/defaults. However, with the expansion of digital footprints and advancements in automated processing, the range of datasets deployed to generate credit scores has expanded significantly. Lenders are increasingly using credit scores generated algorithmically by third-party service-providers. Such agents aggregate and process a wide variety of alternative datasets relating to an individual, alongside credit information – these may include the individual’s employment history, social media activity, and web browsing history. This allows them to build a highly data-intensive credit profile of (and assign a more granular credit score to) the individual, to assist lenders in deciding whether to extend credit. Not only does this enable lenders to make notionally better-informed decisions, but also to assess and extend credit to individuals with meagre or no prior access to formal credit.

While neither the Bill nor its explanatory note explain why credit-scoring constitutes a public-interest ground for non-consensual processing, it may be viewed as an attempt to remove the procedural burden associated with notice-and-consent. In the context of credit-scoring, if lenders (or their agents) are required to provide notice and seek consent at each instance to process the numerous streams of an individual’s personal data, the procedural costs may disincentivise them from accessing certain data-streams. Consequently, with limited data to assess credit-risk, lenders may adopt a risk-averse approach and avoid extending credit to certain sections of individuals. Alternatively, they may decide to extend credit despite the supposed inadequacy of personal data, thereby exposing themselves to higher risk of repayment defaults. While the former approach would be inimical to financial inclusion, the latter could possibly result in accumulation of bad loans on lenders’ balance sheets. Thus, encouraging data-intensive credit-scoring (for better-informed credit-decisions and/or for widening access to credit) may conceivably be viewed as a legitimate public interest.

However, in this post, I contend that even if this were to be accepted, a complete exemption from notice-and-consent for credit-scoring poses a disproportionate risk to individuals’ right to privacy and data protection. The efficacy of notice-and-consent in enhancing informational autonomy remains debatable; however, a complete exemption from the requirement, without any accompanying safeguards, ignores specific concerns associated with credit-scoring.

Deemed consent for credit-scoring: Understanding the risks

First, the provision allows non-consensual processing of all forms of personal data, regardless of any correlation of such data with creditworthiness. In effect, this would encourage lenders to leverage the widest possible range of personal datasets. As research has demonstrated, the deployment of disparate datasets increases incidences of inaccuracy as well as of spurious connections between the data-input and the output. In credit-scoring, the historical data on which the underlying algorithm is trained may lead it to conclude, for instance, that borrowers from a certain social background are likelier to default on repayment. Credit scores generated from such fallacious and/or unverifiable conclusions can embed systemic disadvantages into future credit-decisions and deepen the exclusion of vulnerable groups. The exemption from notice-and-consent would only increase the likelihood of such exclusion, because individuals would have no knowledge of the data-inputs used, or of the algorithm by which such data-inputs were processed, and consequently no recourse against any credit-decisions arrived at via such processing.

Second, the provision allows any entity to non-consensually process personal data for credit-scoring. Notably, CICs are specifically licensed by the RBI to, inter alia, undertake credit-scoring. Additionally, in November 2021, the RBI amended the Credit Information Companies Regulations, 2006, to provide an avenue for entities (other than CICs) to register with any CIC, subject to the fulfilment of certain eligibility criteria, and to consequently access and process credit information for lenders. By allowing any entity to process personal data (including credit information) for credit-scoring, the Bill appears to undercut the RBI’s attempt to limit the processing of credit information to entities under its purview.

Third, the provision allows non-consensual processing of personal data for credit-scoring at any instance. A plain reading suggests that such processing may be undertaken even before the individual has expressed any intention to avail credit. Effectively, this would provide entities a free rein to pre-emptively mine troves of an individual’s personal data. Such data could then be processed for profiling the individual and behaviourally targeting them with customised advertisements for credit products. Clearly, such targeted advertising, without any intimation to the individual and without any opt-out, would militate against the individual’s right to informational self-determination. Further, as an RBI-constituted Working Group has noted, targeted advertising of credit products can promote irresponsible borrowing by individuals, leading them to debt entrapment. At scale, predatory lending enabled by targeted advertisements could perpetuate unsustainable credit and pose concerns to economic stability.

Alternatives for stronger privacy-protection in credit-scoring

The above arguments demonstrate that the complete exemption from notice-and-consent for processing of personal data for credit-scoring threatens individual rights disproportionately. Moreover, the exemption may undermine the very objectives that policymakers seek to fulfil through it. Thus, Clause 8(8)(d) of the Bill requires serious reconsideration.

First, I contend that Clause 8(8)(d) may simply be deleted before the Bill is enacted into law. In view of the CIC Act, CICs and other entities authorised by the RBI under the CIC Act shall, notwithstanding the deletion of the provision, continue to be able to access and process credit information relating to individuals without their consent – such processing shall remain subject to the safeguards contained in the CIC Act, including the right of the individual to obtain a copy of such credit information from the lender.

Alternatively, the provision may be suitably modified to limit the exemption from notice-and-consent to certain forms of personal data. Such personal data may be limited to ‘credit information’ (as defined under the CIC Act) or ‘financial data’ (as may be defined in the Bill before its enactment); as a result, only the processing of such data for credit-scoring would be exempt from notice-and-consent. The non-consensual processing of such forms of data (as opposed to all personal data), which carry logically intuitive correlations with creditworthiness, would arguably correspond more closely to the individual’s reasonable expectations in the context of credit-scoring. An appropriate delineation of this nature would provide transparency in processing and also minimise the scope for fallacious and/or discriminatory correlations between data-inputs and creditworthiness.

Finally, as a third alternative, Clause 8(8)(d) may be modified to empower a specialised regulatory authority to notify credit-scoring as a purpose for non-consensual processing of data, but within certain limitations. Such limitations could relate to the processing of certain forms of personal data (as suggested above) and/or to certain kinds of entities specifically authorised to undertake such processing. This position would resemble proposals under previous versions of India’s draft data protection law, i.e. the Personal Data Protection Bill, 2019 and the Personal Data Protection Bill, 2018 – both draft legislations required any exemption from notice-and-consent to be notified by regulations. Further, such notification was required to be preceded by a consideration of, inter alia, individuals’ reasonable expectations in the context of the processing. In addition to this balancing exercise, the Bill may be modified to require the regulatory authority to consult with the RBI, before notifying any exemption for credit-scoring. Such consultation would facilitate harmonisation between data protection law and sectoral regulation surrounding financial data.

*For our complete comments on the Digital Personal Data Protection Bill, 2022, please click here – https://bit.ly/3WBdzXg

Introduction to AI Bias

By Nidhi Singh, CCG

Note: This article is adapted from an op-ed published in the Hindu Business Line which can be accessed here

A recent report by Nasscom estimates that the integrated adoption of artificial intelligence (AI) and a data utilisation strategy could add USD 500 billion to the Indian economy. In June 2022, MeitY published the Draft National Data Governance Framework Policy, which aims to enhance the access, quality, and use of non-personal data ‘in line with the current emerging technology needs of the decade’. This is another step in the worldwide push by governments to adopt machine learning and AI models, trained on individuals’ data, into the sphere of governance.

While India is still considering the legislative and regulatory safeguards that must govern the use of this data in AI systems, many countries have already begun deploying such systems, sometimes with serious consequences. For example, in January 2021, the Dutch government resigned en masse in response to a child welfare fraud scandal that involved the alleged misuse of benefit schemes.

The Dutch tax authorities used a ‘self-learning’ algorithm to assess benefit claims and classify them according to the potential risk for fraud. The algorithm flagged certain applications as being at a higher risk for fraud, and these applications were then forwarded to an official for manual scrutiny. While the officials would receive applications from the system stating that they had a higher likelihood of containing false claims, they were not told why the system flagged these applications as being high-risk. 

Following the adoption of an overly strict interpretation of the government policy on identifying fraudulent claims, the AI system used by the tax authorities began to flag every data inconsistency, including actions like failure to sign a page of the form, as an act of fraud. Additionally, the Dutch government’s zero-tolerance policy on tax fraud meant that erroneously flagged families had to return benefits not only from the period in which the fraud was alleged to have been committed, but from up to 5 years before that as well. Finally, the algorithm also learnt to systematically identify claims filed by parents with dual citizenship as high-risk; these were subsequently marked as potentially fraudulent. As a result, a disproportionately high number of the people labelled as fraudsters by the algorithm had an immigrant background.

What makes the situation more complicated is that it is difficult to pinpoint a single factor that caused the ‘self-learning’ algorithm to arrive at the biased output, owing to the ‘black box’ effect and the lack of transparency about how an AI system makes its decisions. This biased output delivered by the AI system is an example of AI bias.

The problems of AI Bias

AI bias is said to occur when there is an anomaly in the output produced by a machine learning algorithm. This may be caused by prejudiced assumptions made during the algorithm’s development process or by prejudices in the training data. The concerns surrounding potential AI bias in the deployment of algorithms are not new. For almost a decade, researchers, journalists, activists, and even tech workers have repeatedly warned about the consequences of bias in AI. The process of creating a machine learning algorithm is based upon the concept of ‘training’. In a machine learning process, the computer is exposed to vast amounts of data, which it uses as a sample to learn how to make judgements or predictions. For example, an algorithm designed to judge a beauty contest would be trained on pictures and data from past beauty pageants. AI systems use algorithms made by human researchers, and if they are trained on flawed data sets, they may end up hardcoding bias into the system. In the example of the beauty contest, the algorithm failed its desired objective as it eventually made its choice of winners based solely on skin colour, thereby excluding contestants who were not light-skinned.

This brings us to one of the most fundamental problems in AI systems: ‘garbage in, garbage out’. AI systems are heavily dependent on accurate, clean, and well-labelled training data to learn from, which will, in turn, produce accurate and functional results. A vast majority of the time in the deployment of AI systems is spent preparing the data through processes like data collection, cleaning, preparation, and labelling, some of which tend to be very human-intensive. Additionally, AI systems are usually designed and operationalised by teams that tend to be homogeneous in their composition, that is to say, they are generally composed of white men.

There are several factors that make AI bias hard to counter. One of the main problems of AI systems is that the very foundations of these systems are often riddled with errors. Recent research has shown that ten key data sets often used for machine learning and data science, including ImageNet (a large dataset of annotated photographs intended to be used as training data), are in fact riddled with errors. These errors can be traced to the quality of the data the system was trained on or, for instance, to biases introduced by the labellers themselves, such as labelling more men as doctors and more women as nurses in pictures.

How do we fix bias in AI systems?

This is a question that many researchers, technologists, and activists are trying to answer. Some of the more common approaches include inclusivity, both in data collection and in the design of the system. There have also been calls for increased transparency and explainability, which would allow people to understand how AI systems make their decisions. For example, in the case of the Dutch algorithm, while the officials received an assessment from the algorithm stating that an application was likely to be fraudulent, it did not provide any reasons as to why the algorithm detected fraud. If the officials in charge of the second round of review had more transparency about what the system would flag as an error, including missed signatures or dual citizenship, it is possible that they may have been able to mitigate the damage.

One possible mechanism to address the problem of bias is the ‘blind taste test’. This mechanism checks whether the results produced by an AI system depend on a specific variable, such as sex, race, economic status, or sexual orientation. Simply put, it tries to ensure that protected characteristics like gender, skin colour, or race do not play a role in decision-making processes.

The mechanism involves training the model twice: first on all the variables, including race, and then on all the variables excluding race. If the model returns the same results, the AI system can be said to make predictions that are blind to that factor. If, however, the predictions change with the inclusion of the variable (such as dual citizenship status in the case of the Dutch algorithm, or skin colour in the beauty contest), the AI system would have to be investigated for bias. This is just one of the potential mitigation tests. States are also experimenting with other technical interventions, such as the use of synthetic data to create less biased data sets.
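The blind taste test described above can be sketched in a few lines of code. This is a toy illustration only: the applicant records, feature names, and the scoring rule are all invented for the example, and a real audit would retrain an actual machine learning model on each feature subset rather than use a hand-written rule.

```python
# Toy sketch of the "blind taste test": score the same records twice,
# once with the protected attribute included and once without, then
# compare the two sets of predictions.

def train(features):
    # Stand-in "model": approve if the mean of the selected feature
    # values exceeds 0.6. (Hypothetical rule, for illustration only.)
    def predict(record):
        values = [record[f] for f in features]
        return sum(values) / len(values) > 0.6
    return predict

# Invented applicant data; values are normalised to [0, 1].
applicants = [
    {"income": 0.9, "repayment_history": 0.8, "dual_citizenship": 0.0},
    {"income": 0.9, "repayment_history": 0.8, "dual_citizenship": 1.0},
]

with_attr = train(["income", "repayment_history", "dual_citizenship"])
without_attr = train(["income", "repayment_history"])

# An applicant whose prediction flips when the protected attribute is
# included signals that the attribute influences the outcome.
divergent = [with_attr(a) != without_attr(a) for a in applicants]
```

If `divergent` contains any `True`, the predictions are not blind to the protected attribute and the system would warrant investigation for bias; if every entry is `False`, the attribute made no difference for the tested records.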

Where do we go from here?

The Dutch case is merely one of the examples in a long line of instances that warrant higher transparency and accountability requirements for the deployment of AI systems. There are many approaches that have been, and are still being developed and considered to counter bias in AI systems. However, the crux remains that it may be impossible to fully eradicate bias from AI systems due to the biased nature of human developers and engineers, which is bound to be reflected within technological systems. The effects of these biases can be devastating depending upon the context and the scale at which they are implemented. 

While new and emerging technical measures can be used as stopgaps, in order to comprehensively deal with bias in AI systems, we must address the issues of bias in those who design and operationalise the systems. In the interim, regulators and states must step up to carefully scrutinise, regulate, or in some cases halt the use of AI systems that are being used to provide essential services to people. An example of such regulation could be the framing and adoption of risk-based assessment frameworks for the adoption of AI systems, wherein the regulatory requirements for an AI system depend upon the level of risk it poses to individuals. This could include permanently banning the deployment of AI systems in areas where they may pose a threat to people’s safety, livelihood, or rights, such as credit-scoring systems or other systems that could manipulate human behaviour. For AI systems assessed as lower-risk, such as AI chatbots used for customer service, there may be a lower threshold for the prescribed safeguards. The question of whether AI systems can ever truly be free from bias may never be fully settled; however, we can say that the harms these biases cause can be mitigated with proper regulatory and technical measures.

On the Exclusion of Regulatory Sandbox Provisions from Data Protection Law

On November 18, 2022, the Ministry of Electronics & Information Technology (‘MeitY’) released the new Digital Personal Data Protection Bill, 2022 (‘2022 Bill’), its proposed legislation to govern personal data. Prior to the 2022 Bill, the Personal Data Protection Bill, 2019 (‘2019 Bill’) was the proposed legislation to govern personal data and protect data privacy. The 2019 Bill was withdrawn during the Monsoon session of Parliament in August 2022, after receiving significant amendments and recommendations from the Joint Committee of Parliament in 2021.

The 2022 Bill has removed several provisions from the 2019 Bill, one of which pertains to the creation of a regulatory sandbox for encouraging innovation in artificial intelligence, machine-learning, or any other emerging technologies (under Clause 40 of the 2019 Bill). While some experts have criticised the 2022 Bill for not retaining this provision, I contend that the removal of the regulatory sandbox provision is a positive aspect of the 2022 Bill. In general, regulatory sandbox provisions should not be incorporated into data protection laws for the following reasons: 

  1. The limited scope and purpose of data protection legislation

Data protection laws are drafted with the specific purpose of protecting the personal data of individuals, creating a framework to process personal data, and laying down specific rights and responsibilities for data fiduciaries/processors. Although firms participating in a sandbox may process personal data, the functions of sandboxes are more expansive than regulating personal data processing. The primary purpose of regulatory sandboxes is to create isolated, controlled environments for the live testing, development, and restricted, time-bound release of innovations. Sandboxes are also set up to help regulatory authorities monitor and form adaptive regulations for these innovative technologies, as they are either partially or completely outside the purview of existing legislations.

Since the scope of regulatory sandboxes is broader than that of data protection legislations, it is insufficient to include a sandbox provision in a data protection legislation, with limited compliances and exemptions from the provisions of such legislation. A separate legislation is required to regulate such emerging technologies.

The regulatory sandbox framework under the European Union’s Proposed Artificial Intelligence Act, 2021 (‘EU AI Act’), as well as the regulatory sandboxes established by SEBI, RBI, and other authorities in India demonstrate this clearly. These frameworks are established separately from existing legislations, and provide a specific scope and purpose for the sandbox in a clear and detailed manner. 

  2. The limited expertise and conflicting mandate of a data protection authority

Data protection authorities (‘DPAs’) are appointed to protect the rights of data principals. They lack the necessary expertise over emerging technologies to also function as the supervisory authority for a regulatory sandbox. Hence, a regulatory sandbox is required to be monitored and supervised by a separate authority which has expertise over the specific areas for which the sandbox is created.

Moreover, it is not sufficient to merely constitute a separate authority for sandboxes within a data protection law. Since the supervisory authority for sandboxes is required to privilege innovation and development of technologies over the strict protection of personal data, the functions of this authority will be directly conflicting with those of the DPA. Therefore, the regulatory sandbox framework is required to be incorporated in a separate legislation altogether.

  3. Sector-specific compliance provisions for regulatory sandboxes

The desire to regulate artificial intelligence and emerging technologies under a data protection legislation is understandable, as these technologies process personal data. However, it is to be noted that AI systems and other emerging technologies also process non-personal data and anonymised data. 

The regulatory sandbox for these technologies is thus not only subject to the principles of data protection law, but is in fact a nexus for information technology law, anti-discrimination law, consumer protection law, e-commerce law, and other applicable laws. Accordingly, the framework for the regulatory sandbox cannot be placed within a data protection legislation or subordinate rules to such a legislation. It has to be regulated under a separate framework which ensures all the relevant laws are taken into account, and the safeguards are not just limited to personal data safeguards.

Since the exemptions, mitigation of risks, and compliance for the different emerging technologies are to be specifically tailored to those technologies (across various laws), the regulatory mechanism for the same cannot be provided in a data protection legislation. 

Conclusion

The above arguments establish the basis for not incorporating sandbox provisions within a data protection legislation. Regulatory sandboxes, based on their framework alone, do not belong in a data protection legislation. The innovation-centric mandate of the sandbox framework and the functions of the supervisory authority conflict with the core principles of data protection law and the primary functions of DPAs. The limited scope of data protection law, coupled with the lack of expertise of DPAs decisively establish the incongruence between the regulatory sandbox provision and data protection legislations.

Commentators who critique the exclusion of the sandbox provision from the 2022 Bill are right to be concerned about rapid developments in artificial intelligence and other emerging technologies. But it is far more prudent for them to recommend that the Central government set up an expert committee to analyse these developments and prepare a separate framework for the sector. Such a framework can comprehensively account for the various mechanisms (beyond data protection) required to govern these emerging technologies.

Working paper release: ‘Tackling the dissemination and redistribution of NCII’

Aishwarya Girdhar & Vasudev Devadasan

Today, the Centre for Communication Governance (CCG) is happy to release a working paper titled ‘Tackling the dissemination and redistribution of NCII’ (accessible here). The dissemination and redistribution of non-consensual intimate images (“NCII”) is an issue that has plagued platforms, courts, and lawmakers in recent years. The difficulty of restricting NCII is particularly acute on ‘rogue’ websites that are unresponsive to user complaints. In India, this has prompted victims to petition courts to block webpages hosting their NCII. However, even when courts do block these webpages, the same NCII content may be re-uploaded at different locations.

The goal of our proposed solution is to: (i) reduce the time, cost, and effort associated with victims having to go to court to have their NCII on ‘rogue’ websites blocked; (ii) ensure victims do not have to re-approach courts for the blocking of redistributed NCII; and (iii) provide administrative, legal, and social support to victims. 

Our working paper proposes the creation of an independent body (“IB”) to: maintain a hash database of known NCII content; liaise with government departments to ensure the blocking of webpages hosting NCII; potentially crawl targeted areas of the web to detect known NCII content; and work with victims to increase the awareness of NCII related harms and provide administrative and legal support. Under our proposed solution, victims would be able to simply submit URLs hosting their NCII to a centralised portal maintained by the IB. The IB would then vet the victim’s complaint, coordinate with government departments to block the URL, and eventually hash and add the content to a database to combat redistribution. 

This will significantly reduce the time, money, and effort exerted by victims to have their NCII blocked, whether at the stage of dissemination or redistribution. The issue of redistribution can also potentially be tackled through a targeted, proactive crawl of websites by the IB for known NCII pursuant to a risk impact assessment. Our solution envisages several safeguards to ensure that the database is only used for NCII, and that lawful content is not added to the database. Chief amongst these is the use of multiple human reviewers to vet the complaints made by victims and a public interest exemption where free speech and privacy interests may need to be balanced. 
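The hash-and-match step in the workflow above can be sketched as follows. This is a simplified illustration using exact cryptographic (SHA-256) hashes and invented placeholder content; production systems for this purpose typically rely on perceptual hashes (such as PDQ or PhotoDNA) that also match content which has been re-encoded or resized, which a plain cryptographic digest cannot do.

```python
import hashlib

class HashDatabase:
    """Minimal exact-match hash database (illustration only).

    Stores only digests of vetted content, never the content itself.
    """

    def __init__(self):
        self._known = set()

    def add(self, content: bytes) -> str:
        # Hash content that has passed human vetting and store the digest.
        digest = hashlib.sha256(content).hexdigest()
        self._known.add(digest)
        return digest

    def is_known(self, content: bytes) -> bool:
        # A byte-identical re-upload produces the same digest.
        return hashlib.sha256(content).hexdigest() in self._known

db = HashDatabase()
db.add(b"vetted-content-bytes")            # added after human review
hit = db.is_known(b"vetted-content-bytes")  # identical re-upload detected
miss = db.is_known(b"unrelated-content")    # lawful content is untouched
```

Storing digests rather than the underlying images also reflects a safeguard discussed in the paper: the database operator never needs to retain the NCII itself to detect redistribution.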

A full summary of our recommendations is as follows:

  • Efforts should be made towards setting up an independently maintained hash database for NCII content. 
  • The hash database should be maintained by the IB, and it must undertake stringent vetting processes to ensure that only NCII content is added to the database.
  • Individuals and vetted technology platforms should be able to submit NCII content for inclusion into the database; NCII content removed pursuant to a court order can also be included in the database.
  • The IB may be provided with a mandate to proactively crawl the web in a targeted manner to detect copies of identified NCII content pursuant to a risk impact assessment. This will help shift the burden of identifying copies of known NCII away from victims. 
  • The IB can supply the DoT with URLs hosting known NCII content, and work with victims to alleviate the burdens of locating and identifying repeat instances of NCII content. 
  • The IB should be able to work with organisations to provide social, legal, and administrative support to victims of NCII; it would also be able to coordinate with law enforcement and regulatory agencies in facilitating the removal of NCII.

Our working paper draws on recent industry efforts to curb NCII, as well as the current multi-stakeholder approach used to combat child sexual abuse material online. However, our regulatory solution is specifically targeted at restricting the dissemination and redistribution of NCII on ‘rogue’ websites that are unresponsive to user complaints. We welcome inputs from all stakeholders as we work towards finalising our proposed solution. Please send comments and suggestions to <ccg@nludelhi.ac.in>.

Link to Working Paper.

AI Law and Policy Diploma Course

The Centre for Communication Governance at the National Law University, Delhi is excited to announce the first edition of the AI Law and Policy Diploma Course – an 8-month online diploma course curated and delivered by expert academics and researchers at CCG and NLU Delhi. The Course is an exciting opportunity to learn the legal, public policy, socio-political and economic contours of AI systems and their implications for our society and its governance. The course provides students the opportunity to interact with and learn from renowned policy practitioners and experienced professionals in the domain of technology law and policy. The course will commence in October 2022 and end in May 2023. Registration for the course is now open and will close by 3rd October 2022, 11:59 PM IST.

About the Centre 

The Centre for Communication Governance at National Law University Delhi (CCG) was established in 2013 to ensure that Indian legal education establishments engage more meaningfully with information technology law and policy and contribute to improved governance and policy making. CCG is the only academic research centre dedicated to undertaking rigorous academic research on information law and policy in India and, in a short span of time, it has become a leading institution in Asia.

CCG has built an extensive network and works with a range of international academic institutions and policy organisations. These include the United Nations Development Programme, Law Commission of India, NITI Aayog, various Indian government ministries and regulators, International Telecommunication Union, UNGA WSIS, Paris Call, Berkman Klein Center for Internet and Society at Harvard University, the Center for Internet and Society at Stanford University, Columbia University’s Global Freedom of Expression and Information Jurisprudence Project, the Hans Bredow Institute at the University of Hamburg, the Programme in Comparative Media Law and Policy at the University of Oxford, the Annenberg School for Communication at the University of Pennsylvania, and the Singapore Management University’s Centre for AI and Data Governance.

About the Course 

The Course is designed to ensure that students engage in a nuanced manner with the legal, public policy, socio-political and economic contours of AI systems and their implications for our society and its governance. 

The course will engage with key themes in the interaction of artificial intelligence with law and policy including implications of AI on our society, emerging use cases of AI and related opportunities and challenges, domestic and global approaches to AI governance, ethics in AI, the application of data protection principles to AI systems, and AI discrimination and bias. Students will be exposed to proposed legislation and policy frameworks on artificial intelligence in India and globally, international policy developments, current uses of AI technology and emerging challenges.

This course will equip students with the necessary understanding and knowledge required to effectively navigate the rapidly evolving space of AI law and policy, and assess the contemporary developments.

Course objectives and learning outcomes 

The course aims to ensure the following:

  1. The students of the course will be introduced to AI technology and will become cognisant of its opportunities and challenges, and its potential impacts on society, individuals and the law.
  2. The course will provide an overview of the interactions between AI and Law and delve into the current domestic and international frameworks which seek to govern AI technology.
  3. The students will be equipped to navigate the interaction between AI and ethics, and consider the ethical principles within which the use of AI technologies are being situated. They will be provided with a breakdown of the ethical principles which have emerged surrounding the use of AI.  
  4. Students will become familiar with the regional and international policy processes which surround  AI technology and the role of intergovernmental organisations in AI governance.
  5. Students will be equipped with knowledge of data protection principles and their interaction with AI systems. 
  6. Students will delve into problems surrounding AI discrimination and explore how bias creeps into AI systems at various stages, and the implications that this may have upon individuals and our society. 
  7. The students will become conversant with global practices, and governance and regulatory frameworks around AI, focusing on multilateral processes which are currently underway as well as specific domestic approaches. 
  8. The course also has a specialised module on AI in India, focusing upon the regulatory and governance framework around the deployment of AI systems.
  9. Students will also become familiar with novel uses of AI in India, including the use of AI systems for facial recognition technology (FRT) as well as their use in judicial systems.
  10. The students will explore the emerging application and use cases of AI technologies. Students will familiarise themselves with the new uses of AI technologies such as facial recognition, emotional recognition, predictive policing, AI use in workplaces, AI use in healthcare, etc. and consider how this may impact individuals and society. 

For the detailed course outline, please visit here.

Eligibility 

  • Lawyers/advocates, professionals involved in information technology, professionals in the corporate, industry, government, media, and civil society sector, technology policy professionals, academicians, and research scholars interested in the field of technology and information technology law and policy and under graduates from any discipline are well positioned to apply for the course.
  • Candidates having a 10+2 degree from any recognized board of education, with a minimum of 55% marks, are eligible to apply for this course.
  • There shall be no restriction as to age, nationality, gender, or employment status in the admission process.

Time Commitment

We recommend students set aside an average of 4-8 hours per week for attending the scheduled monthly live online sessions on weekends and for completing the mandatory coursework (including viewing recorded lectures, any assessment exercises) and prescribed readings.

Seats Available 

A total of 50 seats are available for the course. 

Registration 

Interested candidates may register for the course through the online link provided here

Deadline

Last date to apply: 3rd October 2022 (11:59pm IST)

Course Fee 

INR 90,000/- (all inclusive and non-refundable) to be paid at the time of registration. 

Contact us: For inquiries, please contact us at ccgcourse@nludelhi.ac.in with the subject line ‘CCG NLUD Diploma Course on AI Law and Policy’. Emails sent without this subject line may go unnoticed.

Call for Applications for the Positions of (i) Community and Engagement Associates, (ii) Community and Engagement Officers, (iii) Strategic Development and Partnerships Associates, and (iv) Strategic Development and Partnerships Officers

The National Law University Delhi (‘University’), through its Centre for Communication Governance (‘CCG’/‘Centre’) is inviting applications for the posts of (i) Community and Engagement Associates and Community and Engagement Officers and (ii) Strategic Development and Partnership Associates and Strategic Development and Partnership Officers, to work at the Centre. 

About the Centre for Communication Governance

The Centre for Communication Governance at National Law University Delhi was established in 2013 to ensure that Indian legal education establishments engage more meaningfully with information technology law and policy, and to contribute to improved governance and policy making. CCG is the only academic research centre dedicated to working on information technology law and policy in India, and in a short span of time has become a leading institution in the sector. 

Through its Technology and Society team, CCG seeks to embed constitutional values and good governance within information technology law and policy and examine the evolution of existing rights frameworks to accommodate new media and emerging technology. It seeks to support the development of the right to freedom of speech, right to dignity and equality, and the right to privacy in the digital age, through rigorous academic research, policy intervention, and capacity building. The team’s ongoing work is on subjects such as —privacy and data governance/protection, regulation of emerging technologies like artificial intelligence, blockchain, 5G and IoT, platform regulation, misinformation, intermediary liability and digital access and inclusion.

This complements the work of the Technology and National Security team at CCG that focuses on issues that arise at the intersection of technology and national security law, including cyber security, information warfare, and the interplay of international legal norms with domestic regulation. The team’s work aims to build a better understanding of national security issues in a manner that identifies legal and policy solutions that balance the legitimate security interests and national security choices with constitutional rights and the rule of law, in the context of technology law and policy. The team undertakes analysis of international law as well as domestic laws and policies that have implications for national security. Our goal is to develop detail-oriented, principled and pragmatic recommendations for policy makers on national security issues faced by India, with an emphasis on cyber security and cyber conflict. 

The work at CCG is designed to build competence and raise the quality of discourse in research and policy around issues concerning constitutional rights and rule of law in the digital age, cybersecurity and global internet governance. The academic research and policy output is intended to catalyse effective research-led policy making and informed public debate around issues in technology, internet governance and information technology law and policy.

Role

CCG is a young, continuously evolving organisation and the members of the Centre are expected to be active participants in building a collaborative, merit-led institution and a lasting community of highly motivated young professionals. If selected, you will contribute to the institution’s growth and development by playing a key role in advancing our community engagement / strategic development and partnerships. You will be part of a dynamic team of young researchers, policy analysts and lawyers. Please note that our interview panel has the discretion to determine which role would be most suitable for each applicant based on their qualifications and experience. 

We are inviting applications for the following roles-

(i) Community and Engagement Associates (2 positions)

(ii) Community and Engagement Officers (2 positions)

(iii) Strategic Development and Partnership Associates (2 positions)

(iv) Strategic Development and Partnership Officers (2 positions)

i. Community and Engagement Associates and Community and Engagement Officers

Some of the key roles and responsibilities of the Community & Engagement Associates and Community & Engagement Officers may include:

  • Developing and supporting the team in community and engagement strategy. The candidate will have to work both independently and collaboratively with the team leadership, researchers and various other members of the team.
  • Building engagement with key stakeholders and community members of the Digital Society ecosystem at the domestic and international level.
  • Conceptualising and implementing events, workshops, roundtables, etc. to engage with stakeholders in the ecosystem.
  • Creating relevant content in the form of posters, social media posts, and other allied material for the various events conducted by CCG. 
  • Strategising and creating visual and written content for newsletters, email communications and other modes of engagement.
  • Strategising and creating internal and external communication material including relevant posts, images and posters, and other allied content for social media dissemination, including Twitter, Instagram, LinkedIn, and Facebook.
  • Strategising and creating visual representations, infographics and other graphical representations to make research and analysis available in an accessible manner.
  • Managing social media accounts and maintaining a social media calendar and database of disseminated content. Working with social media on campaigns using tools like hootsuite, oneup, etc., and oversight and management of websites and blogs.
  • Editorial design and layout for reports, presentations, and other written outputs.
  • Aiding in conceptualising, recording and editing audio, podcasts, and/or video material. 
  • Engaging with CCG’s media networks and other key stakeholders.
  • Identifying opportunities for media engagement for the dissemination of CCG’s work.
  • Maintaining records of media and social media coverage and collecting data for analytics and metrics.
  • Strategising, editing, developing, managing and implementing content for the CCG website, CCG Blog, etc. 

This is an indicative list of some of the responsibilities the person will be involved in and is not inclusive of all activities one might be engaged with. We welcome applicants with an interest in any of the areas that CCG broadly works in to apply.

ii. Strategic Development and Partnership Associates and Strategic Development and Partnership Officers

Some of the key roles and responsibilities of the Strategic Development and Partnership Associates and Strategic Development and Partnership Officers may include:

  • Identifying potential funders and partners (domestic and international) to develop CCG’s work and engaging with them.
  • Developing funding opportunities and networks for CCG programs and research.
  • Drafting grant proposals, presentations and applications in coordination with CCG leadership and researchers and spearheading all phases of the grant process (pre-award, award and post-award phase).
  • Ensuring timely funder reporting, project completion reports, and preparation of project narratives.
  • Proactively managing, building and developing new and existing partnerships (domestic and international) portfolios in consultation with senior leadership at CCG.
  • Building engagement with key stakeholders and community members of the Digital Society ecosystem at the domestic and international level across academia, media, civil society, industry, regulatory bodies, other experts, members of parliament, senior government officers, judges, senior lawyers, scholars, and journalists. We are looking for someone who is very constructive and is not only able to help our community get the most out of CCG’s work but is also able to connect people with each other, playing an enabling, generative role that encourages and supports the ecosystem.
  • Identifying opportunities for CCG to present and highlight its programs and research and working towards applying for and implementing these opportunities.
  • Making use of effective programme/project management tools within the team (leadership, research, admin and community and engagement) to ensure strategic development of CCG’s goals.
  • Identifying opportunities for capacity building for the CCG team and organising and implementing relevant activities.
  • Conceptualising and implementing events, workshops, roundtables, etc. to engage with stakeholders in the ecosystem.
  • Strategising, developing, co-ordinating, organising and implementing events, fellowships, moots and courses such as Summer School, Courses (Certificate Course, etc.), Workshops, DIGITAL Fellowship, Oxford Price Media South Asia Rounds, and Capacity Building events.
  • Strategising, editing, developing, managing and implementing content for the CCG website, CCG Blog, etc.
  • Strategising and supporting the development of engagement and outreach modes such as social media, podcasts, newsletters, events, meetings, etc.
  • Developing and supporting the team in a community and engagement strategy. 
  • Engaging with CCG’s media networks and other key stakeholders and identifying opportunities for media engagement for the dissemination of CCG’s work.
  • Maintaining records of media coverage and collecting data for analytics and metrics.
  • Developing and implementing CCG’s DEI initiatives and programs.

This is an indicative list of some of the responsibilities the person will be involved in and is not inclusive of all activities one might be engaged with. We welcome applicants with an interest in any of the areas that CCG broadly works in to apply.

Qualifications for the Roles

  • The Centre welcomes applications from candidates with degrees in design, media and communication, law, public policy, development studies, BBA, journalism, English and the social sciences, or other relevant/applicable fields.
  • For the Associate role, preference may be given to candidates with an advanced degree in related fields or 2+ years of PQE and previous experience of working on related issues.
  • For the Officer role, preference may be given to candidates with an advanced degree in related fields or 4+ years of PQE and previous experience of working on related issues.
  • Candidates must have a demonstrable capacity for high-quality, independent work.
  • Strong communication, digital and writing/presentation skills are important.
  • Interest and previous experience in information technology law and policy is preferred. 
  • A Master’s degree from a highly regarded programme might count towards work experience.

However, the length of your resume is less important than the other qualities we are looking for. As a young, rapidly-expanding organisation, CCG anticipates that all members of the Centre will have to manage large burdens of substantive as well as institutional work. We are looking for highly motivated candidates with a deep commitment to building policies that support and enable constitutional values and democratic discourse. We are looking for people who see good research and policy designs as a way to build a better and more equitable world. At CCG, we aim high, and we demand a lot from each other in the workplace.

We look for individuals with work-style traits that include the ability to work both collaboratively and independently in a fast-paced environment, while being empathetic towards colleagues. We aim to create high-quality research outputs. It is therefore vital that you be a good team player, as well as be kind and respectful to colleagues. At the same time, you should also be self-motivated, proactive, creative as well as be capable of independently driving your work when required. We like to maintain the highest ethical standards in our work and workplace, and look for people who manage all of this while being as kind and generous as possible to colleagues, collaborators and everyone else within our networks. A sense of humour will be most welcome. Even if you do not necessarily fit the requirements outlined but bring to us the other qualities we look for, we will be glad to hear from you. 

Remuneration and Location

The remuneration will be competitive, and will be commensurate with qualifications and experience. Where the candidate demonstrates exceptional competence in the opinion of the selection panel, there is a possibility for greater remuneration. These are full time positions based out of Delhi. 

Application Process

Interested candidates may fill in the application form provided here by 05:00 pm IST on June 20, 2022. Please note that applications will only be accepted via the Google Form. In case of any doubts, please contact us at ccg@nludelhi.ac.in with the subject line “Application for Community and Engagement/Strategic Development and Partnerships”. We encourage applicants to apply at the earliest.

 A complete application form will require the following: 

  • A signed and completed Application Form, available here.
  • The form requires a Statement of Motivation which applicants have to answer in a maximum of 800 words. The Statement of Motivation should ideally engage with the following aspects: 

(i) Why do you wish to work with CCG? 

(ii) For those applying for the role of Community and Engagement Associate/Officer: What will be your likely contribution to our work? How would you develop CCG’s community and engagement with stakeholders, the ecosystem and use CCG’s work to add value to the public discourse? 

Or

For those applying for the role of Strategic Development and Partnership Associate/Officer: What will be your likely contribution to our work? How would you undertake strategic development of CCG’s work, fundraising for CCG’s research and programs and build partnerships? 

(iii) What past experiences and skills optimally position you to do so? 

(iv) How does working with CCG connect with your plans for the future?

  • A sample or portfolio of your previous work or writing sample, as relevant. If the candidate does not have anything relevant, this step is optional. However, we encourage candidates to submit any relevant samples of their work. If the 100 MB limit for the upload of the sample is insufficient, please upload an illustrative sample on the Google Form; the candidate can share a more detailed version of their sample at ccg@nludelhi.ac.in with the subject line “Call for Strategic Communication and Engagement/ Development and Partnership Associates/Officers – Portfolio”.
  • Please combine the CV, sample of your previous work and statement of motivation in a single PDF file labelled as “Your name – CCG”. The PDF should be uploaded on the link provided in the application form. The single PDF file should contain: (1) a Curriculum Vitae (maximum two pages) (2) a sample or portfolio of your previous work or writing sample as relevant, and (3) Statement of Motivation, to be uploaded in the application form.
  • Applicants should note that they cannot save their work on the application form and return to it later, so they may find it advisable to prepare their Statement of Motivation and merge relevant documents into a PDF document beforehand.
  • Names and contact details of two referees who can be contacted for an oral or a short written reference (to be filled in the form).

Since we require applicants to upload their CV and writing sample, accessing the form requires a Google (Gmail) login. For applicants not having a Google (Gmail) account, we encourage them to create an account, following the quick and simple steps here.

Note

  • National Law University Delhi is an equal opportunity employer.
  • National Law University Delhi reserves the right to conduct telephonic or video interviews. National Law University Delhi is unable to cover the costs of travel, accommodation, etc. for any interviews. 
  • National Law University Delhi reserves the right not to fill these positions.
  • Our selection panel has the discretion to determine which profile/role would be most suitable for each applicant based on their experience, domain understanding and qualifications.
  • The roles, responsibilities and activities enumerated here are indicative and may encompass additional duties related to these.
  • The position is a contractual position and shall be paid under the grants received by the Centre for Communication Governance at National Law University Delhi.
  • We will contact only shortlisted candidates. 

Understanding CERT-In’s Cybersecurity Directions, 2022

Sukanya Thapliyal

“Cyber Specialists” by Khahn Tran is licensed under CC BY 4.0

INTRODUCTION

The Indian Government is set to bring into force a widely discussed cybersecurity regulation later this month. On April 28, 2022, India’s national agency for computer incident response, the Indian Computer Emergency Response Team (CERT-In), released Directions relating to information security practices, procedure, prevention, response, and reporting of cyber incidents for Safe & Trusted Internet. These Directions were issued under section 70B(6) of India’s Information Technology Act, 2000 (IT Act). This provision allows CERT-In to call for information and issue Directions to carry out its obligations relating to:
1. facilitating the collection, analysis and dissemination of information related to cyber incidents,
2. releasing forecasts and alerts, and
3. taking emergency measures.

Under the IT Act, the new Directions are mandatory in nature, and non-compliance attracts criminal penalties, including imprisonment of up to one year. The notification states that the Directions will become effective 60 days from the date of issuance, i.e. on June 28, 2022. The Directions were later followed by a separate Frequently Asked Questions (FAQ) document, released in response to stakeholder queries and concerns.

These Directions have been introduced in response to increasing instances of cyber security incidents that undermine national security, public order, essential government functions, and economic development, as well as security threats against individuals operating in cyberspace. Further, recognizing that the private sector is a crucial component of the digital ecosystem, the Directions push for closer cooperation between private organisations and government enforcement agencies. Consequently, the Directions identify the sharing of information for analysis, investigation, and coordination of cyber security incidents as one of their prime objectives.

POLICY SIGNIFICANCE OF DIRECTIONS

Presently, Indian cybersecurity policy lacks a definite form. The National Cyber Security Policy (NCSP), released in 2013, serves as an “umbrella framework for defining and guiding the actions related to security of cyberspace”. However, the policy has seen very limited implementation and has been mired in a multi-year reform process that awaits completion. The new cybersecurity strategy is still in the works, and there is no single agency to oversee all relevant entities and hold them accountable.

Cybersecurity policymaking and governance are progressing through different government departments at the national and state levels, in silos and in a piecemeal manner. Several cybersecurity experts have also identified the lack of adequate technical skills and resource constraints as significant challenges for government bodies. The Indian cybersecurity policy landscape needs to address these existing and emerging threats and challenges by instilling appropriate security standards, efficiently implementing modern technologies, framing effective laws and security policies, and adopting multi-stakeholder approaches to cybersecurity governance.

Industry associations and lobby groups such as the US Chamber of Commerce (USCC), the US-India Business Council (USIBC), The Software Alliance (BSA), and the Information Technology Industry Council (ITI) have responded to the Directions with criticism. These organisations have stated that the Directions, in their present form, would negatively impact Indian and global enterprises and undermine cybersecurity. Moreover, the Directions were released without any public consultation and therefore lack necessary stakeholder inputs from industry, civil society, academia and technologists.

The new CERT-In Directions require covered entities (service providers, intermediaries, data centres, body corporates and governmental organisations) to comply with prescriptive requirements that include time synchronisation of ICT system clocks, excessive data retention, and a six-hour window for reporting cyber incidents, among others. The next section critically evaluates the salient features of the Directions.

SALIENT FEATURES OF THE DIRECTIONS

Time Synchronisation: Clause (i) of the Directions mandates service providers, intermediaries, data centres, body corporates and governmental organisations to connect to the Network Time Protocol (NTP) servers of the National Informatics Centre (NIC) or the National Physical Laboratory (NPL), or to NTP servers traceable to these servers, for synchronisation of all their ICT system clocks. For organisations whose operations span multiple jurisdictions, the Directions allow a relaxation: such organisations may use alternative servers, provided the time source of those servers is the same as that of NPL or NIC. Several experts have criticised this requirement as extremely cumbersome, resource-intensive, and not in conformity with industry best practices. In established practice, companies often choose NTP servers based on practicability (lower latency) and technical efficiency. Experts have also raised concerns over the technical and resource constraints of the NIC and NPL servers in managing traffic volumes, questioning the practical viability of the provision.
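For illustration, complying with the clause on a typical Linux system would mostly involve a small change to the time daemon’s configuration. The sketch below uses chrony and the NIC server hostnames publicised alongside the Directions (samay1.nic.in and samay2.nic.in); these hostnames and the file path are assumptions for illustration, and organisations should verify the current official sources before relying on them.

```
# /etc/chrony/chrony.conf — minimal sketch, not an official template.
# Assumes the NIC NTP hostnames publicised with the Directions; verify before use.
server samay1.nic.in iburst
server samay2.nic.in iburst

# After editing, restart the daemon and confirm synchronisation with:
#   systemctl restart chronyd
#   chronyc tracking
```

Organisations spanning multiple jurisdictions could instead point at their own NTP servers, so long as those servers are ultimately traceable to the NIC/NPL time source, as the clause requires.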

Six-hour Reporting Requirement: Clause (ii) requires covered entities to mandatorily report cyber incidents within six hours of noticing such incidents or being notified about them. This imposes a stricter requirement than the Information Technology (The Indian Computer Emergency Response Team and Manner of Performing Functions and Duties) Rules, 2013 (CERT-In Rules), which allow covered entities to report a reportable cyber incident within “a reasonable time of occurrence or noticing the incident to have scope for timely action”. The six-hour requirement is also stricter than the established norms in other jurisdictions, including the USA, EU, UK, and Australia, where reporting windows normally range from 24 to 72 hours depending on the affected sector, type of cyber intrusion, and attack severity. The CERT-In Directions make no such distinctions in their reporting requirement. Further, Annexure I features an expanded list of reportable cyber security incidents (compared to those mentioned in the CERT-In Rules). These incidents are defined very broadly and range from unauthorised access to systems, identity theft, spoofing and phishing attacks to data breaches and data theft. Considering that an average business entity with a digital presence engages in multiple digital activities, and that there is no segregation on the basis of the scale or severity of an incident, the Direction may be impractical to comply with and may create operational/compliance challenges for many smaller business entities covered under the Directions. Government agencies often require business entities to comply with incident/breach reporting requirements in order to understand macro cybersecurity trends, cross-cutting issues, and sectoral weaknesses. Governments must therefore design cyber incident reporting requirements tailored to sector, severity, risk and scale of impact. Not making these distinctions can render the reporting exercise resource-intensive and futile for both affected entities and government enforcement agencies.

Maintenance of logs for 180 days for all ICT systems within India: Clause (iv) mandates covered entities to maintain logs of all their ICT systems for a period of 180 days and to store them within Indian jurisdiction. Such details may be provided to CERT-In while reporting a cyber incident, or otherwise when directed. Several experts have raised concerns over the lack of clarity regarding the scope of the provision. The term “all ICT systems” in its present form could cover a huge trove of log information, potentially extending up to one terabyte a day, and the 180-day retention period far exceeds current industry practice (around 30 days). This Direction is not in line with the purpose limitation and data minimisation principles recognized widely in other frameworks, including the EU’s General Data Protection Regulation (GDPR), and does not provide adequate safeguards against indiscriminate data collection that may negatively impact end users. Further, many experts have pointed out that the Direction lacks transparency and is detrimental to the privacy of users. As log information often carries personally identifiable information (PII), the provision may conflict with users’ informational privacy rights. CERT-In’s Directions are not sufficiently clear on the safeguard measures needed to balance law enforcement objectives with fundamental rights.
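As a rough illustration of what 180-day retention looks like operationally, a standard logrotate policy can be set to rotate daily and keep 180 compressed archives. The path and file name below are placeholders chosen for illustration, not taken from the Directions; the Directions themselves prescribe no particular tooling.

```
# /etc/logrotate.d/app-logs — illustrative sketch of 180-day retention.
# Path and service name are hypothetical placeholders.
/var/log/app/*.log {
    daily
    rotate 180      # keep 180 daily archives, matching the 180-day mandate
    compress        # reduce the storage footprint of retained logs
    missingok       # do not error if a log file is absent
    notifempty      # skip rotation when the log is empty
}
```

Even with compression, scaling such a policy across “all ICT systems” is what drives the storage estimates experts have raised, and any PII in the retained logs inherits the privacy concerns discussed above.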

Strict data retention requirements for VPN and Cloud Service Providers: Clause (v) requires “Data Centres, Virtual Private Server (VPS) providers, Cloud Service providers, and Virtual Private Network Service (VPN Service) providers” to register accurate and detailed information regarding subscribers or customers hiring the services for a period of 5 years or longer after any cancellation or withdrawal of the registration. Such information shall include the name, address, and contact details of subscribers/ customers hiring the services, their ownership pattern, the period of hire of such services, and e-mail ID, IP address, and time stamp used at the time of registration. Clause (vi) directs virtual asset service providers, virtual asset exchange providers, and custodian wallet providers to maintain all KYC records and details of all financial transactions for a five year period. These Directions are resource-intensive and would substantially increase the compliance cost for many companies. It is also important to note that bulk data retention for a longer time period also creates greater vulnerabilities and attack surfaces of private/sensitive/commercial ICT use. As India is still to enact its data protection law, and the Directions are silent on fundamental rights safeguards, it has also led to serious privacy concerns. Further, some entities covered under this direction, including VPS or VPN providers, are privacy and security advancing services that operate on a strict no-log policy. VPN services provide a secure channel for storing and sharing information by individuals and businesses. VPNs are readily used by the business and individuals to protect themselves on unsecured, public Wifi networks, prevent website tracking, protect themselves from malicious websites, against government surveillance, and for transferring sensitive and confidential information. 
While VPNs have come under fire for being used by cybercriminals and other malicious actors, a blanket requirement to maintain logs and an excessive data retention mandate go against the very nature of the service and may render it pointless (and even insecure) for many users. The Frequently Asked Questions (FAQs) released following the CERT-In Directions have exempted Enterprise/Corporate VPNs from the said requirement. However, the Directions still apply to VPN Service providers that provide “Internet proxy like services” to general Internet subscribers/users. As a result, some of the largest VPN service providers, including NordVPN and PureVPN, have indicated the possibility of pulling their servers out of India and quitting their Indian operations.

In a separate provision [Clause (iii)], CERT-In has also directed service providers, intermediaries, data centres, body corporates, and government organisations to designate a point of contact to interface with CERT-In. The Directions also require the covered entities to provide information or any other assistance that CERT-In may require as part of cyber security mitigation actions and enhanced cyber security situational awareness.

CONCLUSION

Our ever-growing dependence on digital technology and its benefits has exposed us to several vulnerabilities. The State therefore plays a vital role in intervening through concrete and suitable policies, institutions, and digital infrastructures to protect against future cyber threats and attacks. However, the task is too vast to be handled by governments alone and requires active participation by the private sector, civil society, and academia. While the government has a broader perspective of potential threats through law enforcement and intelligence organisations and perceives cybersecurity concerns through a national security lens, the commercial and fundamental rights dimensions of cybersecurity would benefit from inputs from the wider stakeholder community across the cybersecurity ecosystem.

Although India has, in recent years, shown some inclination towards embracing multi-stakeholder governance in cybersecurity policymaking, the CERT-In Directions point in the opposite direction. Several of the Directions, such as the six-hour reporting requirement, the excessive data retention requirements, and the synchronisation of ICT clocks, indicate that the government appears to be adopting a “command and control” approach, which may not be the most beneficial way of approaching cybersecurity issues. Further, the Directions fail to address the core issues of capacity constraints, the shortage of skilled specialists, and the lack of awareness, which could be tackled through a more collaborative approach: partnering with the private sector, civil society, and academia to achieve the shared goal of cybersecurity. Multi-stakeholder approaches to policymaking have stood the test of time and have been successfully applied in a range of policy spaces including climate change, health, food security, and sustainable economic development. In cybersecurity too, effective cross-stakeholder collaboration is now recognised as key to solving difficult and challenging policy issues and producing credible, workable solutions. The government therefore needs to establish institutions and policies that fully recognise the need for, and advantages of, multi-stakeholder approaches, without compromising accountability systems that give due consideration to security threats and safeguard citizens’ rights.

Cybersecurity and Trade: Understanding Linkages for the Global South

Sukanya Thapliyal*

  1. BACKGROUND: 

Cybersecurity concerns are increasingly creeping into the international trade arena. Emerging technologies such as Big Data, Artificial Intelligence (AI), and the Internet of Things (IoT) have driven the digitalisation of the economy and society and have transformed our day-to-day lives. The COVID-19 pandemic has further accelerated this digitalisation. As a result, countries, businesses, and individuals worldwide are embracing this shift and becoming increasingly reliant on digital technologies. The digital economy has significantly contributed to the increase in services trade, reduced trade costs, and increased the participation of micro, small and medium enterprises (MSMEs) in international trade. The shift towards the digital economy has also empowered enterprises to amass and analyse massive amounts of data, helping businesses and organisations improve their operations and develop better products and services for existing and prospective consumers.

However, the ensuing interconnectivity and reliance on digital technologies expose societies and economies to several risks. These include cyberattacks such as ransomware, political espionage, economic espionage, identity theft, and intellectual property theft. These threats affect national defence authorities, critical infrastructure, commercial enterprises, and enforcement agencies alike, and can emerge from both State and non-State actors. However, countries vary greatly in their ability to understand and address these challenges. A recent study by Kaspersky Labs identified Asia-Pacific (APAC) countries as among the most prominent targets of cyberattacks, owing to their rapidly increasing use of digital technologies coupled with a lack of awareness regarding cybersecurity and limited resources deployed towards mitigation. India features among the top five countries most prone to cyberattacks, along with China and Pakistan.

This piece seeks to map the dominant discourse on cybersecurity and international trade. First, it examines the current World Trade Organization (WTO) framework and select Free Trade Agreements (FTAs) to show how cybersecurity concerns are presently understood only in relation to national security or as potential non-tariff barriers (NTBs). Given that cybersecurity is inextricably linked to a Member State’s technical capacity to identify vulnerabilities, it then argues that there is an urgent need to reframe cybersecurity as an issue within capacity building and technology transfer discussions.

Image by geralt. Licensed via CC0.
  2. CYBERSECURITY ISSUES UNDER WORLD TRADE ORGANIZATION (WTO)

Despite rising cybersecurity concerns, international trade rules engage minimally with this area. Prominent international trade organisations (such as the WTO) and other legal instruments like Free Trade Agreements (FTAs) have primarily focused on setting rules for digital commerce and have addressed cybersecurity as an incidental and secondary issue. Within the WTO’s existing framework, cybersecurity issues do not fall within a single set of rules.1 Depending on the context and subject of a dispute, several WTO Agreements, including the General Agreement on Tariffs and Trade (GATT), the General Agreement on Trade in Services (GATS), the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS), and the WTO Agreement on Technical Barriers to Trade (TBT Agreement), can have some bearing on its outcome. As a result, emerging cybersecurity issues can only be understood and interpreted on a case-by-case basis.2

Currently, countries impose cybersecurity measures ranging from complete prohibitions on trade in goods or services, to tariff and non-tariff barriers, certification requirements, and the imposition of domestic standards. Although none of these cybersecurity measures has been challenged before the WTO’s Dispute Settlement System so far, concerns were raised by the European Union, USA, Canada, Japan, and Australia in 2017 against China’s imposition of cybersecurity measures on ICT products and services. In another instance, China raised concerns over Australia banning Chinese companies from supplying equipment for a 5G mobile network on grounds of national security.

Propelled by such developments, where Member States imposed different types of cybersecurity measures (prohibitions on trade in technology goods, certification requirements, and domestic standards), the discourse on cybersecurity and trade has primarily treated cybersecurity measures as potential non-tariff barriers. As the WTO primarily focuses on strengthening economic cooperation and reducing or eliminating trade barriers (tariff and non-tariff), the discourse has centred on these concerns. Numerous studies have identified the need to distinguish genuine domestic cybersecurity policy measures taken by Member States from those that are merely disguised protectionism or purely political in nature.

Scholars have also highlighted that Member States might justify such actions on the basis of the national security exceptions articulated under the GATT (Article XXI), the GATS (Article XIV bis), TRIPS (Article 73), and other WTO Agreements. The national security exception, as broadly understood, allows Member States to take measures they consider necessary for the protection of their essential security interests. This is problematic from several perspectives.

The security exception was long touted as a self-judging provision outside the purview of judicial review by the Dispute Settlement Body (DSB). This understanding was substantially modified, in the context of GATT’s security exception, by the WTO Panel Report in Russia – Traffic in Transit (2019). The Panel opined that Article XXI(b) is not entirely self-judging and that the term “essential security interests” is restricted to specific scenarios related to military facilities, nuclear facilities, and measures taken in time of “war” or “other emergency in international relations”. The Panel also emphasised that such a measure must be invoked in “good faith”. While the Russia – Traffic in Transit Panel Report does provide a straightforward interpretation of the scope of the provision, several scholars, including Sarah Alturki and Neha Mishra, have found the security exceptions laid down under the GATT and GATS problematic for addressing cybersecurity measures. They maintain that the existing security exceptions under the WTO framework are dated and were not conceived to cover cyber conflicts. Although the DSB may undertake to read such provisions in an evolutionary manner, the ambiguous nature of cyber threats, coupled with the lack of international consensus on cybersecurity governance, makes it extremely challenging to resolve cybersecurity-related disputes.

  1. CYBERSECURITY PROVISIONS UNDER FREE TRADE AGREEMENTS (FTAs)

Besides the security exceptions under the WTO framework, some Free Trade Agreements have, in their digital trade/e-commerce chapters, dedicated provisions concerning inter-State cooperation on cybersecurity. For instance, Article 14.16 of the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) recognises the importance of capacity building and collaboration mechanisms to identify and mitigate malicious intrusions or the dissemination of malicious code affecting the electronic networks of countries that are Party to the Agreement. Article 12.13 of the Regional Comprehensive Economic Partnership (RCEP) features an identical provision. Further, Article 19.15 of the United States-Mexico-Canada Agreement (USMCA) features an expanded version of this provision, obliging the Parties to share information and best practices and to employ risk-based approaches that rely on consensus-based standards to detect, respond to, and recover from cybersecurity events.

To contain the misuse of cybersecurity measures that can harm free trade and economic cooperation among participating countries, several FTAs include provisions to deter such behaviour. These include prohibitions on requiring the disclosure of source code3, prohibitions on requiring the location of computing facilities in a specific jurisdiction4, and provisions mandating the cross-border transfer of information by electronic means5. Measures relating to source code disclosure, the location of computing facilities, and the like often find themselves in the crossfire of concerns emanating from economic development, transparency, and cybersecurity.

It is also important to note that these provisions target policies restraining the free flow of cross-border data (data localisation policies) prevalent in a number of countries, including India, China, and Vietnam.

  4. OTHER POSSIBLE FRONTIERS FOR CYBERSECURITY AND INTERNATIONAL TRADE FOR THE GLOBAL SOUTH

Beyond the above-mentioned concerns, cybersecurity is also a question of the technical competence and resources available to many developing and least-developed countries. Several studies and reports, including the recent Kaspersky projections for 2022, indicate a wide gap in countries’ abilities to detect, assess, and effectively respond to cyberattacks. The adoption of digital tools has risen steeply, often outpacing the establishment of the necessary state institutions, legal regulations, and capacity to manage new challenges. Digital solutions are seen as the gateway to economic growth and social development, but these developments should not be viewed in isolation from cybersecurity capacity building. The unbridled adoption of digital solutions without adequate security can have far-reaching implications for the economy and can lead to poor infrastructure and hollow digital development for countries in the global south.

As mentioned above, the current provisions under FTAs and the discussions at the WTO on cybersecurity concerns in international trade extend only to sharing information and best practices. Such glaring vulnerabilities can only be addressed through development assistance that includes technology transfers and cybersecurity capacity building, and this requires active cooperation from developed countries. Discussions around digital development must be embedded in digital security. Developing countries, including India, should leverage their positions in economic forums and constructively channel the discussions around technology transfer and technology facilitation mechanisms (TFM) towards cybersecurity, as they have done in the past in the contexts of drug development and climate change. The existing tools for developing and least-developed countries under Articles 66 and 67 of the TRIPS Agreement are insufficient, have seen weak implementation, and are unlikely to bridge this gap. As India assumes the G20 presidency on December 1, 2022, it can lead the way to such momentous changes and offer the global south perspective the world needs.


*The author is grateful for the comments and contributions by Ms Garima Prakash, Deputy Manager, NASSCOM.

References:

  1. It is important to note that the WTO Agreements, dating back to 1994, did not treat cyber issues specifically, but their rules nevertheless apply to cyber-related policies. See: Kathleen Claussen, ‘Economic Cybersecurity Law’ in Routledge Handbook of International Cybersecurity, pp. 341-353 (Routledge, 2020). See also: Dongchul Kwak, ‘No More Strategical Neutrality on Technological Neutrality: Technological Neutrality as a Bridge Between the Analogue Trading Regime and Digital Trade’, World Trade Review (2021), 1–15.
  2. Post-2017, around 70 WTO Member States, spearheaded by the USA and other developed countries, have initiated “exploratory work together towards future WTO negotiations on trade-related aspects of electronic commerce.” India and South Africa are not part of this initiative. Nevertheless, the outcome of these discussions will have some bearing on the future of cybersecurity and trade.
  3.  Article 19.16 of USMCA (Similar provisions are incorporated under other trade agreements including CPTPP and RCEP).
  4. Article 19.12 of USMCA. (Similar provisions are incorporated under other trade agreements including CPTPP and RCEP).
  5. Article 19.11 of USMCA. (Similar provisions are incorporated under other trade agreements including CPTPP and RCEP).

Technology & National Security Reflection Series Paper 13: Flipping the Narrative on Data Localisation and National Security

Romit Kohli*

About the Author: The author is a fifth year student of the B.A. LL.B. (Hons.) programme at the National Law University, Delhi.

Editor’s Note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law. This post was written in Summer, 2021. Therefore, it does not reflect recent policy developments in the field of data governance and data protection such as the December 2021 publication of the Joint Parliamentary Committee Report and its proposed Data Protection Bill, 2021.

I. Introduction

Countries all over the world are seeking to preserve and strengthen their cyber-sovereignty in various ways. One popular mechanism is labelled with the nebulous phrase ‘data localisation’, which refers to requirements imposed by countries that necessitate the physical storage of data within their own national boundaries. However, the degree of data localisation varies across jurisdictions. At one end of the spectrum lies ‘controlled localisation’, which favours the free flow of data across borders subject to only mild restrictions; a prominent example is the European Union’s (EU) General Data Protection Regulation (GDPR). At the other end, jurisdictions like China impose much stricter localisation requirements on businesses operating within their national boundaries.

In India, data localisation has become a significant policy issue over the last few years. Various government documents have urged lawmakers to introduce a robust framework for data localisation in India. The seminal policy document in this regard is the Justice B.N. Srikrishna Committee report, which provided the basis for the Personal Data Protection Bill, 2019. This bill proposed a framework that would result in a significant economy-wide shift in India’s data localisation practices. At the same time, various government departments have sought to implement sector-specific data localisation requirements, with varying levels of success.

This blog post argues that, far from facilitating national security, data localisation measures may in their implementation present new threats to it. We seek to establish this in three steps. First, we analyse the link between India’s national security concerns and the associated objectives of data localisation, demonstrating that the mainstream narrative regarding the link between national security and data localisation is inherently flawed. Thereafter, we discuss the impact of data localisation on the economic growth objective, arguing that India’s localisation mandate fails to consider certain unintended consequences that restrict the growth of the Indian economy. Lastly, we argue that this adverse impact on economic growth itself poses a threat to India’s national security, which requires us to adopt a more holistic outlook on what constitutes national security.

Image by World Bank Photo Collection’s Photostream. Copyrighted under CC BY 2.0.

II. The Mainstream Narrative

The Srikrishna Committee report underscores national security concerns as a basis for two distinct policy objectives supporting the introduction of data localisation measures. First, the report refers to the need for law enforcement agencies to have access to data which is held and controlled by data fiduciaries, stating that such access is essential for ‘… effectively [securing] national security and public safety…’ since it facilitates the detection of crime and the process of evidence gathering in general (Emphasis Added). However, experts argue that such an approach is ‘… unlikely to help India achieve objectives that actually require access to data’. Instead, the government’s objectives would be better-served by resorting to light-touch localisation requirements, such as mandating the storage of local copies of data in India while still allowing the data to be processed globally. They propose complementing these domestic measures with negotiations towards bilateral and multilateral frameworks for cross-border access to data.

Second, the report states that the prevention of foreign surveillance is ‘critical to India’s national security interests’ due to the lack of democratic oversight that can be exercised over such a process (Emphasis Added). However, we believe that data localisation fails as an effective policy measure to address this problem: notwithstanding the requirements imposed by data localisation policies, foreign governments can access locally stored data through extra-territorial means, including the use of malware and the assistance of domestic entities. What is required is a more nuanced and well-thought-out solution which leverages the power of sophisticated data security tools.

The above analysis demonstrates that the objectives linked to national security in India’s data localisation policy can be better served through other means. Accordingly, the mainstream narrative which seeks to paint data localisation as a method of preserving national security in the sense of cyber or data security is flawed. 

III. The (Unintended) Impact on the Indian Economy

The Srikrishna Committee Report ostensibly refers to the ‘… positive impact of server localisation on creation of digital infrastructure and digital industry’. Although there is no disputing the impact of the digital economy on the growth of various industries generally, the report ignores the fact that such growth has been fuelled by the free flow of cross-border data. Further, the Srikrishna Committee Report fails to consider the costs imposed by mandatory data localisation requirements on businesses which will be forced to forgo the liberty of storing their data in the most cost-effective way possible. These costs will be shifted onto unsuspecting Indian consumers. 

The results of three seminal studies help illustrate the potential impact of data localisation on the Indian economy. The first study, which aimed at quantifying the loss that data localisation might cause to the economy, found that mandatory localisation requirements would reduce India’s GDP by almost 1% and that ‘… any gains stemming from data localisation are too small to outweigh losses in terms of welfare and output in the general economy’. A second study examined the impact of data localisation on individual businesses and found that due to a lack of data centres in India, such requirements would impose a 30-60% increase in operating costs on such businesses, who would be forced to store their data on local servers. The last study analysed the sector-specific impact of localisation, quantifying the loss in total factor productivity at approximately 1.35% for the communications sector, 0.5% for the business services sector, and 0.2% for the financial sector. More recent articles have also examined the prejudicial impact of data localisation on Indian start-ups, the Indian IT sector, the cyber vulnerability of small and medium enterprises, and India’s Ease of Doing Business ranking. 

At this point, it also becomes important to address a common argument relied upon by proponents of data localisation: that localisation boosts local employment, particularly in the computer hardware and software industries. Although prima facie attractive, this argument has been rebutted by researchers on two grounds. First, while localisation might lead to the creation of more data centres in India, the majority of the capital goods needed for their creation will nonetheless be imported from foreign suppliers. Second, while the construction of these centres might generate employment for construction workers at a preliminary stage, their actual operation will fail to generate substantial employment owing to the skilled nature of the work involved.

The primary lesson to be drawn from this analysis is that data localisation will adversely impact the growth of the Indian economy—a lesson that seems to have been ignored by the Srikrishna Committee report. Further, when discussing the impact of data localisation on economic growth in India, the report makes no reference to national security. We believe that this compartmentalisation of economic growth and national security as unrelated notions reflects an inherently myopic view of the latter. 

IV. Towards a Novel Narrative

National security is a relative concept—it means different things to different people in different jurisdictions and socio-economic contexts. At the same time, a noticeable trend vis-à-vis this relative concept is that various countries have started incorporating the non-traditional factor of economic growth in their conceptions of national security. This is because the economy and national security are inextricably linked, with several interconnections and feedback loops. 

Although the Indian government has made no explicit declaration in this regard, academic commentary has sought to characterise India’s economic slowdown as a national security concern. We believe this characterisation is accurate: India is a relatively low-income country, and its national security strategy will therefore necessarily depend upon the state of its economy. Further, although objections have been raised over India’s dismal defence-to-GDP ratio, these objections arguably rest on ‘trivial arithmetic’. The more appropriate way of remedying the current situation is to concentrate policy efforts on increasing India’s GDP and accelerating economic growth, rather than lamenting low defence spending.

This goal, however, requires an upgradation of India’s national security architecture. While the nuances of this reform fall outside the precise scope of this blog post, any comprehensive reform will necessarily require a change in how Indian policymakers view the notion of national security. These policymakers must realise that economic growth underpins our national security concerns and consequently, it is a factor which must not be neglected.

This notion of national security must be used by Indian policymakers to examine the economic viability of introducing any new law, including the localisation mandate. When seen through this broader lens, it becomes clear that the adverse economic impact of data localisation policies will harm India’s national security by inter alia increasing the costs of doing business in India, reducing the GDP, and prejudicing the interests of Indian start-ups and the booming Indian IT sector. 

V. Conclusion

This blog post has attempted to present the link between data localisation and national security in a different light. This has been done by bringing the oft-ignored consequences of data localisation on the Indian economy to the forefront of academic debate. At the center of the article’s analysis lies an appeal to Indian policymakers to examine the notion of national security through a wider lens and consequently rethink their flawed approach of addressing national security concerns through a localisation mandate. This, in turn, will ensure sustained economic growth and provide India with the technological advantage it necessarily requires for preserving its national interests.  


*Views expressed in the blog are personal and should not be attributed to the institution.

Technology and National Security Law Reflection Series Paper 12 (B): Contours of Access to Internet as a Fundamental Right

Shreyasi Tripathi*

About the Author: The author is a 2021 graduate of National Law University, Delhi. She is currently working as a Research Associate with the Digital Media Content Regulatory Council.

Editor’s Note: This post is part of the Reflection Series showcasing exceptional student essays from CCG-NLUD’s Seminar Course on Technology & National Security Law. Along with a companion piece by Tejaswita Kharel, the two essays bring to life a fascinating debate by offering competing responses to the following question:

Do you agree with the Supreme Court’s pronouncement in Anuradha Bhasin that access to the internet is an enabler of other rights, but not a fundamental right in and of itself? Why/why not? Assuming for the sake of argument, that access to the internet is a fundamental right (as held by the Kerala High Court in Faheema Shirin), would the test of reasonableness of restrictions be applied differently, i.e. would this reasoning lead to a different outcome on the constitutionality (or legality) of internet shutdowns?

Both pieces were developed in the spring semester, 2020 and do not reflect an updated knowledge of subsequent factual developments vis-a-vis COVID-19 or the ensuing pandemic.

  1. INTRODUCTION 

Although it did little to hold the government accountable for its actions in Kashmir, it would be incorrect to say that the judgment in Anuradha Bhasin v. Union of India is a complete failure. This reflection paper evaluates the lessons learnt from Anuradha Bhasin and argues in favour of access to the internet as a fundamental right, especially in light of the COVID-19 pandemic.

Image by Khaase. Licensed under Pixabay License.
  2. EXAMINING INDIA’S LEGAL POSITION ON THE RIGHT TO INTERNET

Perhaps the greatest achievement of the Anuradha Bhasin judgement is that the Government is no longer allowed to pass confidential orders to shut down the internet in a region. Moreover, the reasons behind internet shutdown orders must not only be available for public scrutiny but must also be reviewed by a Committee, which must scrutinise the reasons for the shutdown and benchmark them against the proportionality test. This includes evaluating the pursuit of a legitimate aim, the exploration of suitable alternatives, and the adoption of the least restrictive measure, while also making the order available for judicial review. The nature of the restriction and its territorial and temporal scope are relevant factors in determining whether it is proportionate to the aim sought to be achieved. The Court also extended fundamental rights, with the same protections, to the virtual space, and in this regard made certain important pronouncements on the right to freedom of speech and expression. These elements will not be discussed here as they fall outside the scope of this paper.

A few months prior, in 2019, the Kerala High Court had recognised access to the internet as a fundamental right. In its judgement in Faheema Shirin v. State of Kerala, the High Court addressed a host of issues that arise with a life lived online. Specifically, it recognised how the internet extends individual liberty by giving people the choice to access content of their choosing, free from government control. The High Court relied on a United Nations General Assembly resolution to note that the internet “… facilitates vast opportunities for affordable and inclusive education globally, thereby being an important tool to facilitate the promotion of the right to education…” – a fact that has only grown in significance during the pandemic. The Kerala High Court held that since the right to education is an integral part of the right to life and liberty enshrined under Article 21 of the Constitution, access to the internet becomes an inalienable right in and of itself. The High Court also recognised the value of the internet to the freedom of speech and expression, holding that access to the internet is protected under Art. 19(1)(a) of the Constitution and can be restricted only on grounds consistent with Art. 19(2).

  1. ARGUING IN FAVOUR OF RIGHT TO INTERNET  

In the pandemic, a major reason why some of us retain any semblance of freedom and normalcy in our lives is the internet. At a time when many aspects of our day-to-day lives have moved online – education, healthcare, shopping for essential services, and more – the fundamental importance of the internet should not even be up for debate. The Government also uses the internet to disseminate essential information. In 2020, it launched a contact-tracing app (Aarogya Setu) that relied on the internet for its functioning. It also operates a WhatsApp chatbot to provide accurate information about the pandemic, and it launched the E-Vidya Programme to enable schools to go digital. In times like this, the internet is not merely one of the means of accessing constitutionally guaranteed services; it is the only way.

In this context, the right of access to the internet should be read as part of the Right to Life and Liberty under Art. 21. Therefore, internet access should be subject to restrictions only on the basis of procedures established by law. To better understand what shape such restrictions could take, lawmakers and practitioners can seek guidance from another recent addition to the list of rights promised under Art. 21: the right to privacy. The proportionality test was laid down in the Puttaswamy I judgement and reiterated in Puttaswamy II (the ‘Aadhaar Judgement’). In the Aadhaar Judgement, when describing proportionality in the context of reasonable restrictions, the Supreme Court stated –

“…a measure restricting a right must, first, serve a legitimate goal (legitimate goal stage); it must, secondly, be a suitable means of furthering this goal (suitability or rational connection stage); thirdly, there must not be any less restrictive but equally effective alternative (necessity stage); and fourthly, the measure must not have a disproportionate impact on the right-holder (balancing stage).”

This excerpt from Puttaswamy II provides a defined view of the proportionality test upheld by the Court in Anuradha Bhasin. It means that before passing an order to shut down the internet, the appropriate authority must assess whether the order pursues a goal of sufficient importance to override a constitutionally protected right; more specifically, the goal must fall within the reasonable restrictions provided for in the Constitution. Next, there must be a rational connection between this goal and the means of achieving it. The appropriate authority must ensure that no alternative method can achieve the goal with equal effectiveness, and that the method being employed is the least restrictive one. Lastly, the internet shutdown must not have a disproportionate impact on the right-holder, i.e. the citizen whose right to freedom of expression or right to health is affected by the shutdown. These reasons must be recorded in writing and be subject to judicial review.

Based on the judgement in Faheema Sharin, an argument can be made that the pandemic has further highlighted the importance of access to the internet, not created it. The Government’s growing reliance on e-governance and digital payment platforms shows an intention to usher the country into a world with a greater online presence than ever before. 

  2. CONCLUSION 

People who are without access to the internet right now* – people in Kashmir, who have access only to 2G internet on mobile phones, or those who lack the socio-economic and educational means to access the internet – are suffering. Not only are they being denied access to education; the lack of access to updated information about a disease we are still learning about could prove fatal. Given the importance of the internet in this time of crisis, and in the approaching future, where people will want to avoid crowded classrooms, marketplaces, or hospitals, access to the internet should be regarded as a fundamental right.

This is not to say that the Court’s recognition of this right can, by itself, herald India into a new world. Recognition of the right to access the internet will only be a welcome first step towards bringing the country into the digital era. The right to access the internet should also be made a socio-economic right, which, if implemented robustly, would have far-reaching consequences such as easier social mobility, increased innovation, and the fostering of greater creativity.


*Views expressed in the blog are personal and should not be attributed to the institution.