Grisly video footage of police personnel in riot gear coercing five bruised, bloodied men to sing the national anthem emerged on social media during the communal violence in Delhi earlier this year. One of the men, Faizan, later died of his injuries.
Shocking as it was, the video was one among many that helped shed light on police brutality, forcing the police to initiate an inquiry into the matter. Along similar lines, videos of the January 2020 attack on Jawaharlal Nehru University, uploaded to social media, helped unveil the identity of the assailants.
Social media is used both by the public and by human rights and humanitarian organisations to gather, store, analyse and disseminate information as part of early warning, prevention and mitigation systems during localised conflicts, riots and other forms of mass violence. Such information, whether pictures or videos, has immense evidentiary potential in relation to atrocity crimes. But preserving this online content is difficult in an atmosphere of social media censorship, both by the platforms themselves and by governments.
Online content as evidence of atrocity crimes
Judicial mechanisms have often failed to hold perpetrators of atrocity crimes accountable because of insufficient evidence. In 2007, constrained by a lack of evidence, the International Court of Justice was unable to hold Serbia responsible for the Bosnian genocide. Similarly, in the case of Gbagbo and Ble Goude – charged with crimes against humanity committed during post-electoral violence in the Ivory Coast – the ICC observed that the prosecutor had failed to submit “sufficient evidence.” In India, criminal courts acquitted dozens of accused persons in the 2002 Gujarat riots for lack of evidence.
But in today’s digital age, social media remains relatively inexpensive and accessible to civilians, even in conflict-ridden areas or areas at high risk of mass violence. Platforms like Facebook, Twitter and YouTube enable victims or even bystanders to take photos and videos and upload them at a moment’s notice, informing the world about ongoing atrocities or mass violence at their location. This can serve several purposes: as warnings to others, as beacons to direct relief efforts, or even as evidence that can later be used to prosecute the perpetrators.
For instance, video footage taken by an activist group on cell phones and uploaded to social media was crucial in indicting high-level officers for police atrocities against civilians in Rio de Janeiro’s favelas. Horrific videos of executions uploaded to platforms like Facebook led the ICC in 2017 to issue an arrest warrant against Mahmoud Al-Werfalli, a commanding officer of the Libyan National Army – the first such warrant to rely on social media evidence. Similarly, thousands of high-resolution images captured by Syrian defector “Caesar”, depicting torture in Syrian government facilities, were submitted to the German Federal Prosecutor’s Office to consolidate the evidentiary record against Syrian intelligence services and military forces in a universal jurisdiction case. Eventually, the German Federal Court of Justice issued an arrest warrant based, among other things, on the “Caesar images.”
Don’t throw the baby out with the bathwater
Easy access to social media may be a blessing in some ways for human rights activists, but its potential as an evidence repository is underutilised and faces several obstacles. Popular platforms including Facebook, Twitter and YouTube all use a mix of artificial intelligence and human moderators to filter, assess and regulate content. Machine learning algorithms track and filter out content that may violate their community rules; content the algorithm cannot confidently classify is sent to human moderators, who then decide whether it stays up, broadly as the sketch below illustrates.
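A minimal sketch of such a triage pipeline follows. The thresholds, function names and routing logic are illustrative assumptions for explanation only, not any platform’s actual moderation system.

```python
# Illustrative moderation triage: assumes a hypothetical classifier that scores
# how likely a post is to violate platform rules. Not any real platform's system.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str


def violation_score(post: Post) -> float:
    """Placeholder for a trained machine-learning model; returns a score in [0, 1]."""
    return 0.0  # a real system would run the model here


def triage(post: Post, auto_remove_at: float = 0.95, review_at: float = 0.5) -> str:
    """Auto-remove clear violations, queue uncertain content for human review."""
    score = violation_score(post)
    if score >= auto_remove_at:
        return "removed_automatically"
    if score >= review_at:
        return "sent_to_human_moderator"
    return "left_up"
```

The point of the sketch is that everything hinges on how well the model understands context, which is precisely where documentation of atrocities is most at risk of being misclassified as prohibited violence.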
But these companies are increasingly coming under pressure from governments to combat false and inflammatory content on their platforms. For instance, incidents such as New Zealand’s Christchurch shooting, which was live-streamed on Facebook and remained on the site for hours before being taken down, have led tech companies and governments, including India, to endorse the Christchurch Call to eliminate terrorist and violent extremist content online.
As a result, these companies are ramping up efforts to police their platforms, resorting to technology to take down content deemed harmful or illegal. Facebook asserted in 2018 that 99.5% of terrorism-related content was taken down by algorithms before anyone could see it.
The problem is that this ends up removing many posts that document atrocities as well, throwing the baby out with the bathwater. At their current stage, artificial intelligence-enabled algorithms cannot properly understand the context in which content is posted – a highly subjective task that even human moderators struggle with. YouTube, for instance, has taken down hundreds of videos that were evidence of government-led attacks in Syria because they were flagged as violent. The Facebook video showing Werfalli ordering the execution of a group of individuals, which kickstarted his prosecution by the ICC, has also been deleted. (An archived copy exists, however.)
Companies also do not disclose how these machine-learning algorithms work, making it harder for civil society and other organisations to understand them and work with them.
Apart from content monitoring by platforms, censorship by the government can also pose obstacles. In India, Section 69A of the Information Technology Act, 2000, read with the rules framed under it – the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 (Blocking Rules, 2009) – gives the Central government the power to issue takedown orders that platforms have to comply with. The procedure appears to have checks and balances, but in actuality involves extensive executive discretion and lacks transparency.
No member of the judiciary is involved at any stage of this process. Even the Review Committee, which sits periodically to review the validity of blocking orders, has no judicial member. If it finds that an order is not valid, it can direct that the particular post be unblocked, but this rarely happens in practice. These orders are, of course, open to challenge in court, but Rule 16 of the Blocking Rules keeps them confidential, making any such challenge difficult.
The Supreme Court’s judgement in Shreya Singhal vs Union of India upheld the validity of these rules, ostensibly on the ground that these apparent safeguards are sufficient. The judgement is laudable for striking down the notorious Section 66A of the Information Technology Act, 2000, and for making intermediaries’ lives easier by clarifying that they are only required to take down illegal content when directed to by a court order or a competent government authority. But the challenge to the Blocking Rules, 2009, ought to have been considered more seriously.
Such unchecked power in the hands of the government when it comes to social media regulation is problematic. Takedown orders issued to these platforms under Section 69A are usually shrouded in secrecy on account of the confidentiality obligation under the Blocking Rules, 2009. In 2019, news reports emerged of alleged requests made to Twitter by the Indian government to take down accounts for spreading “anti-India propaganda” in Jammu and Kashmir. Instances where platforms have pushed back strongly against unreasonable blocking orders are few.
All of this gains significance in the context of the Delhi violence, during which several pictures and videos were uploaded to social media platforms such as Twitter. Among these were videos documenting police inaction while crowds attacked individuals, and police personnel even breaking surveillance cameras. Such videographic evidence can be critical: it can assist in investigations and, subject to proof of authenticity, potentially be used as evidence in court. Two FIRs have already been filed in respect of Faizan’s death.
Context is everything
Governments the world over are clamping down on violent and extremist content online. The European Union is considering a regulation that would require online platforms to remove access to terrorism-related content within just one hour of receiving a removal order, failing which they could be fined up to 4% of their global turnover in the previous business year. Australia passed legislation last year under which executives of social media companies can be imprisoned if “abhorrent violent material” is not removed quickly.
But context is everything. Every online video, photograph or other documentary record – even one that graphically depicts violence – is part of a larger story. Often, such online content can help consolidate the evidentiary record for mass atrocities. Failing to acknowledge this, as is often the case at present, will continue to hamper atrocity documentation. One way platforms can help is by retaining the data they take down, as Facebook did when it confirmed that it was “preserving data” from pages removed for inciting violence against the Rohingya.
At the same time, the authenticity of such pictorial and videographic evidence may be disputed. Deepfakes have made it possible to falsify images in ways that can escape detection even by algorithms. But solutions are being developed, such as the eyeWitness to Atrocities app built by the International Bar Association, which adds a time-stamp and a GPS-fixed location to recordings that can then be encrypted and uploaded to data banks from anywhere. In Syria’s “Caesar” case, the metadata underlying the images was also submitted and used to verify their authenticity, further enhancing their evidentiary value.
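To make the idea concrete, here is a minimal sketch of how capture-time metadata and a cryptographic hash can support later authenticity checks. This is not the eyeWitness app’s actual implementation; the field names and verification flow are assumptions for illustration only.

```python
# Illustrative only: bundle a recording's hash with capture-time metadata so the
# file can later be checked against what was recorded at capture. Not the
# eyeWitness to Atrocities app's real design.
import hashlib
from datetime import datetime, timezone


def seal_recording(video_bytes: bytes, latitude: float, longitude: float) -> dict:
    """Record a SHA-256 hash of the file along with capture time and GPS coordinates."""
    return {
        "sha256": hashlib.sha256(video_bytes).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "gps": {"lat": latitude, "lon": longitude},
    }


def verify_recording(video_bytes: bytes, sealed_record: dict) -> bool:
    """Check that the file still matches the hash recorded at capture time."""
    return hashlib.sha256(video_bytes).hexdigest() == sealed_record["sha256"]
```

In practice, the sealed record would itself need to be signed or transmitted over a trusted channel, since a hash alone only shows that the file has not changed since the record was made.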
Social media can be a powerful force for good. With the government’s much-awaited amendment to the Information Technology (Intermediary Guidelines) Rules, 2011, around the corner, any measure dealing with these tough issues should ensure that it stays that way.
Sharngan Aravindakshan is a Programme Officer in the Centre for Communication Governance at National Law University, Delhi. Radhika Kapoor is a Harvard Kaufman Fellow at the Public International Law and Policy Group, Washington DC. The views and opinions expressed in this article are those of the authors and do not necessarily reflect those of their respective organisations.