Will Fake News Decide the World’s Largest Election?

Varsha Rao


Social media platforms like Facebook and Twitter have been hauled up by authorities, including the United States Senate and India's Parliamentary Standing Committee on Information Technology, for their role as conduits in the dissemination of 'fake news'.

In the run-up to the Lok Sabha Elections, social media and messaging platforms have put in place strategies to check attempts at spreading misinformation.

To prevent voter suppression (acts aimed at reducing voter turnout), Facebook bans misrepresentations about the voting process, such as claims that people can vote online. The launch of political advertisement archives in India by Facebook, Google and Twitter has also contributed to the fight against misinformation campaigns.

Fact-checkers are the second line of defence after artificial intelligence and machine learning have had their turn at identifying potential pieces of false news. The judgement and research ability of the fact-checker determine the rating that the flagged content receives, as well as its priority on platforms, such as on Facebook's News Feed.

However, entities involved in misinformation campaigns remain undeterred.



Cutting Off a Hydra’s Head: Challenges Befalling Social Media Intermediaries

The existence of multiple forms of social media and competing platforms allows malicious actors to engage in ‘platform shopping’ and utilize methods which throw up fewer obstacles.

An example of this is podcasts, or series of audio files. Scrutinizing the content of podcasts for hate speech and misinformation is much more difficult than identifying buzzwords in articles. When platforms disseminating podcasts lack transparent policies on taking down content, there is no guarantee that flagging a podcast for problematic content will contain its reach. Furthermore, since podcasts are at a nascent stage of popularity, platforms may not have the resources or funding to engage in extensive fact-checking or to hire third-party fact-checkers.

The emergence of ‘deepfakes’, or artfully doctored photos and videos, has also contributed to the flow of misinformation on social media. Lack of awareness of the existence and popularity of ‘deepfakes’, along with the difficulty of spotting manipulations in the footage, exacerbates their ability to influence the target audience.

Social media companies are well aware that they are going up against determined actors with the capacity to generate creative workarounds on the fly. One such example was observed by an American reporter on the chat platform Discord. When a user complained about Twitter's proactive deletion of accounts connected to voter suppression attempts, another user suggested applying Snapchat filters to photos found online before using them on a fake account, so as to evade reverse image searches.

It does not help that certain challenges faced by social media companies have no immediate solutions. In its press releases, Facebook has highlighted the scarcity of professional fact-checkers worldwide, the time it takes for a complex news item to be scrutinized and the lack of meaningful data in local dialects to aid machine learning.

Furthermore, while social media companies have implemented solutions in good faith, those solutions have fallen short of tackling the problem as a whole. A reporter for The Atlantic drew attention to a loophole in Facebook's Ad Library authentication process, an otherwise effective dragnet in a sea of insidious advertising: by setting up a limited liability company to act as the publisher of an ad, special interest groups can obscure their identity and continue to sponsor ads on Facebook. The inability to predict users' behavioural tendencies may also undermine a solution. On WhatsApp, for instance, labelling a message as forwarded may not prompt the recipient to question its legitimacy if the recipient has faith in the credibility of the sender.

While scrutinizing the strategies offered by social media platforms and other intermediaries, it is important to keep in mind that the problem of ‘fake news’ is not a new phenomenon. The introduction of the printing press in the 15th century also unleashed a wave of ‘fake news’ regarding witches and religious fanaticism which would be printed alongside scientific discoveries. Thus, while social media may have amplified its reach – much like a microphone does in the hands of a speaker – it is ultimately the individual spewing vitriol that is the true culprit. The burden of generating solutions cannot be solely borne by the intermediaries.


Is the Government’s Heart in the Right Place? Misplaced Solutions for an Insidious Problem

Unfortunately, the Government has made scarce headway in its contribution to curbing the dissemination of ‘fake news’. In April 2018, the Ministry of Information and Broadcasting issued a directive stating that the accreditation of journalists found to have generated or circulated ‘fake news’ would be suspended for a period determined by the frequency of violations, and cancelled upon a third violation. The guidelines were immediately withdrawn on the direction of the Prime Minister’s Office after journalists and media bodies heavily criticized the Government for attempting to muzzle the free press.

In December 2018, the Ministry of Electronics and Information Technology published the draft Information Technology [Intermediaries Guidelines (Amendment) Rules], 2018, which require – inter alia – that intermediaries enable tracing of the origin of information and deploy automated tools to proactively identify and remove unlawful content. WhatsApp, a platform with end-to-end encryption, took a stand against breaking encryption, pointing to privacy and free speech concerns to justify its position.

As countries attempt to block the dissemination of ‘fake news’ on the internet and regulate the flow of information on social media platforms, it is imperative to ensure that overbroad definitions and strategies do not end up promoting political censorship.

China’s crackdown on ‘online rumours’ since 2013 is an example of the State controlling information flow. Not only must ‘rumours’ – including content undermining morality and the socialist system – be removed by social network operators, but their publication could also result in a jail term of three years for the creator. The licences social media networks require to operate in China may be held hostage if the networks’ interpretation of ‘rumours’ does not align with that of the Chinese authorities. This incentivizes overly cautious intermediaries to block or report content that seems ‘fake’ by the Government’s standards, leading to collateral censorship.


‘What Is’ versus ‘What Could Have Been’: The Pitfalls of Election Campaigning  

The lack of significant engagement and progress on the ‘fake news’ and misinformation front is certainly a cause for concern as it points to a lack of political will.

The misuse of social media and messaging platforms by the ruling party as well as the Opposition has been widely reported by news outlets. The BJP President allegedly told the party’s IT cell volunteers that its 32 lakh-strong WhatsApp groups allow the BJP to deliver any message to the public, even if it is fake. Last month, Facebook took down pages connected to the Congress IT cell, as well as an IT firm behind the NaMo app, for coordinated inauthentic behaviour and spam. WhatsApp’s head of communications has also engaged with political parties to emphasize that WhatsApp is not a broadcast platform and that accounts engaging in bulk messaging will be banned.

For political parties, there is much to gain from manipulating public opinion in a country where elections are tightly contested along narrow margins, and election results have a long-lasting impact on the intricate fabric of national identity. Back in 2013, the Internet and Mobile Association of India (IAMAI) gathered from a social media survey conducted in 35 Indian cities that the votes of only 3-4% of social media users could be swung. Of course, this was before the 2016 U.S. Presidential Election, which saw social media disinformation campaigns executed with renewed vigour.

As a starting point, political parties could have agreed to refrain from executing misinformation campaigns and instead opted to influence the electorate by encouraging healthy debate based on verifiable facts. Mud-slinging and propaganda campaigns may well win elections, but political candidates cannot carry on as if it is business as usual when ‘fake news’ has become a life-and-death issue in India.

In the run-up to Germany’s federal elections in 2017, the country’s major political parties entered into a ‘gentleman’s agreement’ to disregard, rather than exploit, information leaked as a result of cyberattacks. An agreement by Indian political parties on the ethics that ought to govern social media use would have underscored the same spirit.

Instead of attempting to increase the burden on intermediaries, the Government could also have undertaken extensive digital literacy campaigns to build resilience against attempts at manipulation, be it domestic or foreign. The campaigns could have been structured to highlight the techniques by which false information is propagated to manipulate the psychology of the voter.

Social media platforms, political parties and the Election Commission form a trinity that shares the responsibility of protecting the authenticity of the content informing a voter’s choice. While the degree of responsibility may differ, without collaboration the goal will remain unachievable. The shortcomings of the political parties do not absolve the social media intermediaries of their responsibility. It took Twitter until half of the polling phases had been completed to launch an anti-voter-suppression feature on the microblogging platform. There have also been multiple instances of ‘fake news’ being taken down on other social media platforms while remaining in circulation on Twitter.

The impact of misinformation campaigns on the Lok Sabha elections will be uncovered only once the elections come to an end. The best-case scenario is that they have a negligible impact on the election result. The worst-case scenario? The influence is so pervasive that we will follow in the footsteps of the U.S. and take a minimum of two years to uncover its reach.

Regardless of what ultimately happens, perhaps there is one thing we can all agree on – not enough has been done to protect this “festival of democracy” from being manipulated.


(Varsha is a researcher with the Centre for Communication Governance at National Law University Delhi.)
