New Delhi: Social media platforms cite ‘free speech’ to absolve themselves of their role in spreading disinformation. They have framed the discourse around disinformation and its resolution as a content-moderation problem. In reality, pervasive disinformation spreads more through social media’s amplification of disinformation-laden content than through its failure to remove it. It is only at the removal stage that the question of free speech arises.
These and other findings appear in the Future of India Foundation’s ‘Politics of Disinformation’ report, which describes itself as an attempt to cut through the crosstalk and obfuscation on the issue of disinformation.
The report frames disinformation (the deliberate use of misinformation) as a political problem and finds that its solution does not lie solely in laws enacted by a government and their enforcement.
When it comes to disinformation, the report notes, social media platforms have a more central role to play than they claim.
As long as amplification is driven by engagement instead of the quality of content or the trustworthiness of the content’s sources, current moderation efforts by social media platforms are likely to fall short, finds the report.
Neutrality
However, platforms have neatly bypassed the discussion around amplified distribution and have found it convenient to exclusively frame measures to reduce misinformation as being in “tension” with freedom of expression – an issue which can arise only in the case of outright removal, the report says.
Twitter, YouTube, and Facebook are all on record stating their aversion to being ‘arbiters of truth’ and insisting that platforms should be a marketplace of ideas. However, the report says that their emphasis on free speech endures because it is a grand business model.
Since platforms are private companies, the issue in the case of outright removal of content is not freedom of speech but the political neutrality of the platform, the report says, citing instances where Facebook declined to restrict a post by former US president Donald Trump while Twitter did restrict it. Both companies invoked public interest to justify their decisions.
Although the current report did not consider ads, the question of the political neutrality of social media platforms came up recently when an analysis of ad spending conducted by the Reporters’ Collective and ad.watch revealed that Facebook’s internal algorithms offered cheaper advertisement deals to the Bharatiya Janata Party (BJP) than to opposition political parties.
In 2020, an explosive report by the Wall Street Journal suggested that key Facebook employees in India were acting in conflict with the company’s pledge to remain neutral in elections around the world.
Ankhi Das – then the head of Facebook’s public policy in India – wrote posts for internal consumption each time the BJP, particularly prime minister Narendra Modi, benefitted electorally.
Beyond ideology
The report urges accountability from platforms: they must either stop amplification and revert to a chronological feed, or take ownership of their distribution choices.
A third option, says the report – a via media of sorts – is to amplify only those content providers who have gone through a vetting process, ensuring that amplified content has undergone some due process for integrity and quality of messaging, irrespective of ideological affiliation.
This is especially urgent now: discussions with young people from eight Indian states led the report’s writers to conclude that anti-minority hate has been so mainstreamed and legitimised that it is now difficult to establish a shared foundation of truth at all.
The discussions focused on ascertaining the following:
1. how young people get and consume information;
2. how they determine which information is trustworthy;
3. how they sift between competing narratives on the same event/issue;
4. whether they care to ascertain if a piece of information is accurate;
5. the purpose and use of information;
6. awareness of and reliance on fact-checking sites; and
7. the impact of online misinformation.
The report’s key takeaway from the focus group discussions is that social media platforms have not only disrupted the information ecosystem in India but have also allowed themselves to be weaponised by vested interests in ways that are leading to real-world harm.
No meaningful safeguards
The report calls social media platforms’ efforts to combat disinformation “anaemic” and notes that fact-checking as an approach to combat false information applies only to a tiny subset of content actually selected for fact-checking by platforms or independent fact-checking organisations.
Content moderation, which social media sites posit as a bulwark against disinformation, “is more a public relations exercise for platforms instead of being geared to stop the spread of misinformation,” the report finds.
Since advertising revenue is directly proportional to the amount of time users spend on the platforms, platforms often boost user engagement without regard for the impact of false information.
The report says that a meaningful framework to combat disinformation at scale must be built on the understanding that this is a political problem. There are bad actors, and content moderation – as well as content distribution – therefore has to be an intervention in the political process.
Among those who appear more aware of this than social media sites themselves are political parties.
The report thus argues for a comprehensive transparency law “to enforce relevant disclosures by social media platforms”. It adds that “content moderation and its allied functions such as standard setting, fact-checking and de-platforming must be embedded in the sovereign bipartisan political process for democratic legitimacy”, while noting that the process should not devolve into censorship at the behest of the government.
Note (November 4, 2022): A reference to The Wire’s Tek Fog findings has been edited out as the stories have now been removed from public view pending the outcome of an internal review, as one of its authors was part of the technical team involved in our now retracted Meta coverage. More details about the Meta stories may be seen here.