Today, the Centre for Communication Governance (CCG) is happy to release a working paper titled ‘Tackling the dissemination and redistribution of NCII’ (accessible here). The dissemination and redistribution of non-consensual intimate images (“NCII”) is an issue that has plagued platforms, courts, and lawmakers in recent years. The difficulty of restricting NCII is particularly acute on ‘rogue’ websites that are unresponsive to user complaints. In India, this has prompted victims to petition courts to block webpages hosting their NCII. However, even when courts do block these webpages, the same NCII content may be re-uploaded at different locations.
The goal of our proposed solution is to: (i) reduce the time, cost, and effort associated with victims having to go to court to have their NCII on ‘rogue’ websites blocked; (ii) ensure victims do not have to re-approach courts for the blocking of redistributed NCII; and (iii) provide administrative, legal, and social support to victims.
Our working paper proposes the creation of an independent body (“IB”) to: maintain a hash database of known NCII content; liaise with government departments to ensure the blocking of webpages hosting NCII; potentially crawl targeted areas of the web to detect known NCII content; and work with victims to raise awareness of NCII-related harms and provide administrative and legal support. Under our proposed solution, victims would simply submit URLs hosting their NCII to a centralised portal maintained by the IB. The IB would then vet the victim’s complaint, coordinate with government departments to block the URL, and eventually hash the content and add it to a database to combat redistribution.
This will significantly reduce the time, money, and effort exerted by victims to have their NCII blocked, whether at the stage of dissemination or redistribution. The issue of redistribution can also potentially be tackled through a targeted, proactive crawl of websites by the IB for known NCII pursuant to a risk impact assessment. Our solution envisages several safeguards to ensure that the database is only used for NCII, and that lawful content is not added to the database. Chief amongst these is the use of multiple human reviewers to vet the complaints made by victims and a public interest exemption where free speech and privacy interests may need to be balanced.
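The database workflow described above can be sketched in a few lines. The sketch below is purely illustrative and is not drawn from the working paper: it uses a cryptographic hash (SHA-256), which only matches byte-identical copies, whereas real deployments use perceptual hashes (e.g. PhotoDNA or PDQ) so that re-encoded or resized copies still match. The class and method names are our own invention.

```python
import hashlib


class NCIIHashDatabase:
    """Illustrative hash database: stores digests of vetted NCII content,
    never the imagery itself."""

    def __init__(self):
        self._hashes = set()

    def add(self, content: bytes) -> str:
        # Hash the vetted content; only the digest is retained.
        digest = hashlib.sha256(content).hexdigest()
        self._hashes.add(digest)
        return digest

    def is_known(self, content: bytes) -> bool:
        # Exact-match check against previously vetted content.
        return hashlib.sha256(content).hexdigest() in self._hashes
```

Because only digests are stored, the database itself never redistributes the abusive imagery; the trade-off is that matching quality depends entirely on the hashing scheme chosen.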
A full summary of our recommendations is as follows:
Efforts should be made towards setting up an independently maintained hash database for NCII content.
The hash database should be maintained by the IB, and it must undertake stringent vetting processes to ensure that only NCII content is added to the database.
Individuals and vetted technology platforms should be able to submit NCII content for inclusion into the database; NCII content removed pursuant to a court order can also be included in the database.
The IB may be provided with a mandate to proactively crawl the web in a targeted manner to detect copies of identified NCII content pursuant to a risk impact assessment. This will help shift the burden of identifying copies of known NCII away from victims.
The IB can supply the Department of Telecommunications (DoT) with URLs hosting known NCII content, and work with victims to alleviate the burdens of locating and identifying repeat instances of NCII content.
The IB should be able to work with organisations to provide social, legal, and administrative support to victims of NCII; it would also be able to coordinate with law enforcement and regulatory agencies in facilitating the removal of NCII.
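The proactive, targeted crawl recommended above reduces, at its core, to a matching loop over candidate URLs. The sketch below is an assumption-laden illustration, not the IB’s actual design: `fetch` stands in for the crawler’s downloader, the set of known hashes would come from the vetted database, and a production system would use perceptual rather than cryptographic hashing.

```python
import hashlib


def scan_urls(urls, known_hashes, fetch):
    """Flag URLs whose fetched content matches a previously vetted hash.

    `fetch` is a caller-supplied downloader returning bytes, or None on failure.
    """
    flagged = []
    for url in urls:
        content = fetch(url)
        if content is None:
            continue  # unreachable or non-downloadable resource
        digest = hashlib.sha256(content).hexdigest()
        if digest in known_hashes:
            flagged.append(url)
    return flagged
```

The flagged URLs would then feed into the same vetting and blocking pipeline as victim-submitted complaints, shifting the burden of locating copies away from victims.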
Our working paper draws on recent industry efforts to curb NCII, as well as the current multi-stakeholder approach used to combat child sexual abuse material online. However, our regulatory solution is specifically targeted at restricting the dissemination and redistribution of NCII on ‘rogue’ websites that are unresponsive to user complaints. We welcome inputs from all stakeholders as we work towards finalising our proposed solution. Please send comments and suggestions to <firstname.lastname@example.org>.
Our everyday lives are increasingly mediated by technology. Social networks shape our interpersonal communications, algorithms drive our decisions and behaviours, and smart devices modulate our home and workplace environments. Haraway captured this deepening penetration of technology into practically every aspect of life through the image of “cyborg bodies”, entangled in technology’s discourses and effects to the point where “who makes and who is made in the interaction between human and machine” is impossible to decipher. As our everyday lives grow ever more dependent on technology, the internet directly interpellates subjects while also engaging with other social discourses that contribute to subject formation. And as technology becomes fundamental in shaping not only our everyday lives but also our subjectivities, the question of security in cyberspace becomes increasingly personal.
Although cybersecurity has been recognized as a global concern, there is no agreement on how it should be conceptualized. The question of “who or what is to be protected” lies at the heart of these debates. A growing body of literature moves beyond the protection of “cyberspace and the underlying ICT infrastructure” and defines cybersecurity as the protection of those “who function in the cyberspace, i.e. individuals, organisations, and nations”. In practice, however, the sovereignty of the state is treated as the dominant objective of cybersecurity, and powerful actors such as states, militaries, and corporations drive the discourse, at the risk of invisibilizing the ordinary user.
A feminist approach to cybersecurity must place humans at its centre. It must also recognize that our experiences in the online world are shaped by our identity and by the power structures prevalent in society. Consequently, cybersecurity threats are perceived and experienced differently by women, non-binary people, and minorities, who are also routinely absent from or underrepresented in such discussions. This blog argues that women’s experiences, particularly those at the margins, must be at the centre of how cybersecurity is conceptualized in technical design and legislation. The piece begins by examining questions of representation and their implications. It then probes how gender-blind design and the underlying assumptions of the public/private dichotomy lead to gendered threats, such as technology-facilitated intimate partner violence, being excluded from or trivialised in cybersecurity discussions. Finally, it looks at the case of intimate image abuse and examines the framework of bodily integrity as a key tool to centre women’s experiences in cybersecurity.
Women in Cybersecurity
Only 25% of the global cybersecurity workforce identify as women. The work culture of incident response teams, which are predominantly staffed by men, reinforces the association of technical expertise with masculinity. Feminist theory not only advocates for greater representation of women in cybersecurity design, defence, and response but also questions how epistemic authority is allocated. At the heart of a feminist approach to cybersecurity lies the question: “Who is considered the bearer of knowledge?” Since, in both technology and law, technocratic expertise is the primary epistemic authority, the experiences of ordinary citizens are invisibilized and often treated as problems to be solved by experts, through behavioural change or legislation from the top. It is this top-down approach to cybersecurity that feminist standpoint epistemology challenges, taking the subjective experience of those at the margins as a key source of knowledge.
Another important aspect of feminist research is the centrality of political action and the dismantling of the separation between theory and practice. A feminist approach to cybersecurity will therefore actively engage ordinary users, especially those marginalised along multiple axes of oppression, in building knowledge, understanding threats, and bringing about change through political action and solidarity. The Oxford Internet Institute’s Reconfigure Network, a group of feminist cybersecurity practitioners and researchers, is a step in this direction. Under this project, ordinary citizens engage, through a series of community workshops, in defining threats based on their own understanding and experiences.
The public/private binary
A gender-blind approach to cybersecurity does not take into account how threats are experienced differently by people depending on their social positions. This is because, contrary to popular belief, technical deliberation is not objective and value-neutral: the design, construction, and regulation of technology are embedded with socio-political values. Gendered threats faced by women and by individuals of marginalised gender and sexual identities are often overlooked or trivialized in design considerations. A common example is systems that use personal-information questions as backups to passwords, e.g. the name of your first pet or the middle name of a parent. This assumes that the “bad actor” will always be a stranger and never an intimate partner or former partner. Similarly, Slupska has shown how the threat modelling of major smart home systems does not take intimate partner violence (IPV) into account: in no use case is the owner of the device treated as a security threat to its other users. This can be attributed to the public/private binary, in which the home is constructed as a safe place despite rising cases of gender-based violence facilitated by smart home devices.
Feminist scholars have long critiqued the public/private binary, which relegates gendered violence to the domain of the private. Technology-facilitated sexual violence such as intimate image abuse (commonly referred to as “revenge porn”) is often constructed as a concern of individual privacy rather than of cybersecurity. Even the expectations placed on users are gendered: women are expected to maintain complete control of their digital footprint and to activate privacy settings on social networks to protect themselves. Any failure to do so results in victim-blaming, shifting the onus of ensuring cybersecurity entirely onto individual victims.
This is also evident in the language of “revenge porn”, which narrows the perceived scope and severity of the crime by invoking narratives of relationship feuds and disgruntled partners. These issues have traditionally been placed in the domain of the private and emotional, which is constructed as inferior and less serious than the public domain of rational security. This framing can also limit legislative reform when it makes an “intent to harm or harass” the victim a necessary element of the crime. Not only does this narrow conception fail to account for the economy surrounding the distribution of such imagery, it also makes proving intent to harass challenging.
Centering Women’s experiences & Bodily Integrity in a digitally mediated world
Consequently, it is argued that “revenge pornography” should be seen as part of a “continuum of image-based sexual abuse”. This draws on Kelly’s seminal work on the continuum of sexual violence, which challenges the “legal-analytical categorization” of sexual offences, an approach that often fails to focus on women’s experiences and produces a hierarchy of sexual offences. A range of abusive practices, such as revenge porn, sextortion, upskirting, voyeurism, and deepfake pornography, fall under the umbrella of image-based sexual abuse.
Franks has advocated that the violation of privacy be the fundamental harm criminalised in such legislation, under the rubric of non-consensual pornography. However, scholars have argued for going beyond models that treat intimate image abuse as merely a content or information privacy violation, towards a framework of bodily integrity understood in terms of self-determination and inviolability. When intimate images are circulated non-consensually, the victim’s right to self-determination is curtailed. This centering of women’s embodied experiences is captured in Durham’s essay:
“Although virtual worlds offer a putative escape from the constraints of the corporeal, bodies still haunt the mediascape, and the experiential connections between symbolized and real world bodies must be acknowledged as central to feminism’s liberatory goals.”
Since the body is the site where gender is inscribed, bodily integrity provides a framework for understanding what values and protections society attributes to different bodies. It is thus essential to note how the bodies of trans persons, Dalits, Bahujans, Adivasis, and minorities are most vulnerable, as they are seen as sites on which power is exerted.
It is also important to understand that online images of the body are not mere representations; they act as digital prostheses embodying our subjectivity. That is to say, today we experience the world and our beingness through “an assemblage of organic body, conventional prostheses and digital prostheses”. This is fundamental to understanding the continuity of experience between the offline and online worlds, and it can keep us from discounting the severity of intimate image abuse and its impact on victims’ lives as a whole. Many victims describe the feeling of violation through unintended exposure as akin to sexual assault and rape. This framework can also prevent a narrow definition of online intimate image abuse that excludes images not traditionally classified as “intimate”. Thus, the repeated instances of non-consensually sourced images of Muslim women being put up for “auction” on apps should be recognized as targeted sexual harassment and intimate image abuse, in addition to being a hate crime. Likewise, deepfake nudes, which are not actual representations of the body but nonetheless affect an individual’s online subjectivity, can be recognized as an important emergent form of intimate image abuse.
Bodily integrity thus provides a framework through which women’s diverse experiences can be placed at the centre of understanding and responding to cybersecurity threats. Approaches like these can pave the way for centering the safety and well-being of human beings, especially those who have been historically marginalised, in cybersecurity debates and discussions. They can prevent us from replicating the same power hierarchies and patterns of exploitation in this new world of augmented subjectivity, where technology is ubiquitous.
Kelly L, ‘The Continuum of Sexual Violence’, Women, violence and social control (Springer 1987)
Maschmeyer L, Deibert RJ and Lindsay JR, ‘A Tale of Two Cybers – How Threat Reporting by Cybersecurity Firms Systematically Underrepresents Threats to Civil Society’ (2021) 18 Journal of Information Technology & Politics 1 <https://doi.org/10.1080/19331681.2020.1776658> accessed 12 February 2022
McGlynn C, Rackley E and Houghton R, ‘Beyond “Revenge Porn”: The Continuum of Image-Based Sexual Abuse’ (2017) 25 Feminist Legal Studies 25 <https://doi.org/10.1007/s10691-017-9343-2> accessed 10 February 2022
Patella-Rey P, ‘Beyond Privacy: Bodily Integrity as an Alternative Framework for Understanding Non-Consensual Pornography’ (2018) 21 Information, Communication & Society 786 <https://doi.org/10.1080/1369118X.2018.1428653> accessed 7 February 2022
Rey PJ and Boesel WE, ‘The Web, Digital Prostheses, and Augmented Subjectivity’ in Routledge Handbook of Science, Technology, and Society (Routledge) 173