Can ASEAN’s Cybersecurity Push Protect People and Economies?

Taking the lead on cybersecurity, both through the Norms Implementation Checklist and the ASEAN Regional Computer Emergency Response Team, is crucial to the security of people and economies in Southeast Asia.

As ransomware attacks and cyber-scams surge across Southeast Asia, the Association of Southeast Asian Nations (ASEAN) is stepping up to create a more secure regional cyberspace.

With cyber criminals targeting the region’s critical infrastructure, including data centres, and young and old users at risk of falling victim to digital scams, ASEAN’s efforts are not only about digital security — they’re also aimed at protecting economic and social stability.

In October 2024, ASEAN members launched two major initiatives.

First, the ASEAN Regional Computer Emergency Response Team (CERT) opened its Singapore headquarters to boost collaboration on cybersecurity incident response, with Malaysia leading as the first overall coordinator.

This response team focuses on critical areas including information-sharing and strengthening public-private partnerships to bolster defences across the region.

In the same month, the Cyber Security Agency of Singapore and Malaysia’s National Cyber Security Agency introduced the Norms Implementation Checklist.

This list of action points aims to guide ASEAN nations in promoting responsible behaviour in cyberspace, based on United Nations (UN) cybersecurity norms.

Responding to a surge in cyberattacks

This year, the region has experienced a spate of major ransomware attacks. For example, a major incident occurred in June, when the Brain Cipher ransomware group disrupted the data centre operations of more than 200 government agencies in Indonesia.

Critical information infrastructure supports government and other essential services, so any disruption can cause severe socio-economic impacts that undermine public trust in government.

The threat of disruption from cybersecurity incidents extends to the private sector where, for example, in Singapore, three out of five companies polled had paid a ransom during cyberattacks in 2023.

In addition, cyber scams are a major crime concern: they often impact vulnerable groups and are now so common they have become a regional security threat.

The rapid pace of digitalisation in Southeast Asia, coupled with low digital literacy and the ease of conducting online financial transactions, has facilitated a sharp increase in cyber scams such as phishing and social media scams.

Tackling cyber scams at the source is challenging. Transnational organised crime groups thrive in Southeast Asian countries with limited cybersecurity and insufficient law enforcement capabilities.

They often collude with local power structures: for example, they operate in conflict areas near Myanmar’s borders, where they work with militant groups.

Given these increasing threats, the launch of the ASEAN Regional Computer Emergency Response Team is a promising effort to enhance cooperation among Southeast Asian countries.

The eight functions of the response team — which include information-sharing, training and exercises, as well as developing partnerships with academic institutions and industry — aim to strengthen regional coordination on cyber incident response.

Incident response is a critical part of the region’s attempts to mitigate the impact of malicious cyber activities such as ransomware and the epidemic of cyber scams.

Strengthening ASEAN’s strategic position in cyberspace

In 2018, ASEAN agreed to subscribe in principle to the 11 UN norms of responsible state behaviour in cyberspace.

While their full potential has not yet been realised, these 11 norms, which the Norms Implementation Checklist translates into actionable steps, could play a crucial role in helping ASEAN member states progress from ‘in principle’ to ‘in practice’ in the cybersecurity space. These norms aim to guide countries’ national cyber policies to align with the rules-based international order set out by the UN.

Adherence to these cyber norms (such as fostering inter-state cooperation on security, preventing misuse of information and communications technologies, and cooperating to stop crime and terrorism) could, ideally, complement the work of the ASEAN Regional Computer Emergency Response Team in responding to malicious cyber activities and fighting cyber scams.

Regional implementation of these norms could contribute to an environment of trust and confidence among ASEAN countries, to create stability in Southeast Asia’s cyberspace.

There are strategic reasons for creating regional cyberspace stability. As UN Secretary-General António Guterres has warned, cyberspace is increasingly being exploited as a weapon in conflicts — by criminals, non-state actors, and even governments. This trend is inimical to ASEAN’s regional ambitions, strengthening the argument for nations in the region to proactively adopt a cyber rules-based order.

What’s more, ASEAN aims to be a zone of peace, freedom and neutrality. This goal emphasises keeping the region free from interference by external powers that could create insecurity.

As ASEAN established this goal in 1971 during the analogue era and Cold War, it is only appropriate that the organisation develop new initiatives to adapt to the digital era and Cold War 2.0.

ASEAN should also promote the Norms Implementation Checklist as a guide for other countries that are its dialogue partners but are embroiled in geopolitical and cyber rivalry (such as China and the United States).

Observers warn that the inability of the regional group to address the Myanmar civil war and rising tensions in the South China Sea, both of which involve cyber activities, is eroding its relevance.

These crises consequently shape how some ASEAN members and external powers view ASEAN centrality. They are also among the reasons why non-ASEAN security arrangements — such as the Quad, NATO’s Indo-Pacific Four and the Japan-Philippines-US alliance — are establishing cooperative efforts, including on cybersecurity, in the Indo-Pacific.

Taking the lead on cybersecurity, both through the Norms Implementation Checklist and the ASEAN Regional Computer Emergency Response Team, is therefore crucial to the security of people and economies in Southeast Asia.

It could also prevent ASEAN’s centrality in regional security matters from eroding further. But this is contingent on ASEAN nations providing sufficient resources, policy thinking and political will to make these two initiatives deliver results.

Muhammad Faizal Abdul Rahman is a research fellow (Regional Security Architecture Programme) with the Institute of Defence and Strategic Studies at the S. Rajaratnam School of International Studies, Nanyang Technological University, Singapore.

Originally published under Creative Commons by 360info™.

Maharashtra Polls: BJP Shares Fake AI Audio Clips of MVA Leaders Supriya Sule, Nana Patole

The BJP alleged these recordings were “proof” of Opposition leaders misappropriating bitcoins from a 2018 cryptocurrency fraud case to fund the ongoing state elections. 

Hours before Maharashtra went to polls on November 20, 2024, the Bharatiya Janata Party (BJP) posted at least three fake AI-generated audio clips on its official X (formerly Twitter) handle. These clips, shared late on November 19, claimed to be recorded conversations involving Opposition Maha Vikas Aghadi (MVA) leaders Supriya Sule (NCP), Nana Patole (Congress), IPS officer Amitabh Gupta, and an employee of an audit firm, Gaurav Mehta.

The BJP alleged these recordings were “proof” of Opposition leaders Supriya Sule (NCP Sharad Pawar) and Nana Patole (Congress) misappropriating bitcoins from a 2018 cryptocurrency fraud case to fund the ongoing state elections.

The other two individuals whose voices the BJP alleges feature in the audio clips are Amitabh Gupta, currently Inspector General, Indo-Tibetan Border Police, and Gaurav Mehta, an employee of the audit firm Sarathi Associates.

What are the allegations against Supriya Sule and Nana Patole? 

Former Pune police officer Ravindranath Patil alleged that Sule, Patole, IPS Amitabh Gupta (then Pune police commissioner), audit firm employee Gaurav Mehta, and IPS officer Bhagyashree Navtake were involved in misappropriating bitcoins. News agency ANI reported Patil claimed he had been sent voice notes of conversations between the named people. 

The BJP soon latched on to these allegations and posted the four alleged voice notes.

In the fake recordings, voices purporting to be those of Sule and Patole can be heard asking for cash in exchange for bitcoins stored in four crypto wallets, and promising that there would be no investigation into the matter.

Sambit Patra, national spokesperson of the BJP, also held a press conference on Wednesday morning, further amplifying the AI-generated voice notes.

An archive of these posts can be seen here, here, here and here.

Fact Check

BOOM found that the voice notes posted by the BJP are fake and were generated using generative AI technology. We tested the audio clips using TrueMedia.org’s deepfake detection tool, which is available to journalists and researchers.

Three of the four audio clips showed substantial evidence of being AI-generated. One of the voice notes, only five seconds long, showed little evidence of being manipulated, most likely due to its short duration.

We also listened to the voice notes and compared the voices of Supriya Sule, Nana Patole and IPS officer Amitabh Gupta with publicly available interviews on YouTube. None of the voices in the audio posted by the BJP match the original voices of the three.

Voice note one: Audit firm employee Gaurav Mehta speaking to IPS Amitabh Gupta

In this voice note, a man identified as Gaurav Mehta, an employee of the audit firm Sarathi Associates, is heard speaking to IPS officer Amitabh Gupta, who was then Pune police commissioner.

We tested the audio clip using TrueMedia’s AI deepfake detection tool which showed that there was substantial evidence of manipulation. View the results here.

Also read: Maharashtra: In the Battle of Alliances, It’s the Regions Which Are Crucial

Voice note two: NCP (Sharad Pawar) leader Supriya Sule speaking to Gaurav Mehta

In this voice note, Sule is heard asking Mehta for cash in exchange for bitcoins without worrying about an investigation. 

We compared Sule’s voice in the voice note to her interview on Samdish Bhatia’s video podcast, Unfiltered By Samdish, in 2023. The two voices sound different from each other. Sule’s original voice from an interview can be heard here.

The results from TrueMedia’s AI deepfake detection tool confirmed that there was substantial evidence of manipulation. View the results here.

Voice note three: Congress leader Nana Patole speaking to IPS Amitabh Gupta 

In this voice note, Patole is allegedly heard threatening IPS officer Amitabh Gupta while demanding that the bitcoins be converted into cash.

TrueMedia.org’s tool determined there was little evidence of manipulation. View the analysis here. However, the recording is only five seconds long, which is too short for current detection tools to give an accurate result.

We also compared Patole’s voice in the audio recording to his interview on the YouTube channel JistNews, published on November 19, 2024. The voice in the audio note did not match Patole’s original voice. His original voice can be heard here.

Voice note four: Conversation between IPS Amitabh Gupta and Gaurav Mehta

This is a voice note allegedly of IPS officer Gupta, the then Pune police commissioner, speaking to Mehta and assuring compliance with the cash-for-bitcoins exchange demanded by Sule and Patole.

We found several interviews of IPS officer Gupta on YouTube, none of which match the voice heard in the audio recording posted by the BJP. His original voice from a March 15, 2024 interview can be heard here.

Also read: No Aadhaar or Voter ID: Here’s Where Chief Election Commissioner’s Claim on Inclusion of Vulnerable Tribes Doesn’t Check Out

TrueMedia’s tool confirmed that there was substantial evidence of manipulation by AI. The analysis can be found here.

Additionally, the Misinformation Combat Alliance’s Deepfake Analysis Unit (DAU), of which BOOM is a part, analysed the viral audio clips using the deepfake detection tools Hive, Hiya, TrueMedia and Deepfake-o-meter. It found three of the audio clips were AI-generated, while one showed little evidence of manipulation.

The state of Maharashtra, which is voting on Wednesday (November 20), saw aggressive campaigning by parties with similar names and little else in common. Currently, the state is led by the Mahayuti alliance, comprising the Eknath Shinde-led Shiv Sena, the BJP and the Ajit Pawar faction of the NCP. In Opposition is the MVA, an alliance between the Uddhav Thackeray-led Shiv Sena (UBT), the Congress and the Sharad Pawar faction of the NCP.

This article is republished from BOOM under a Creative Commons license. Read the original article here.

Bengal’s ‘Tab Scam’ Exposes Significant Vulnerabilities in Data Storage

Money that was supposed to be sent to school students was siphoned off to other accounts through sophisticated techniques.

Kolkata: West Bengal’s ambitious ‘Taruner Swapna’ scheme, which translates to ‘dreams of the youth,’ was designed to provide Rs 10,000 to Class 10 and 12 students for digital devices. Now, it has been marred by a significant cyber fraud.

Over 2,000 students have been affected by an effort by cybercriminals to exploit system vulnerabilities and siphon off funds. These actors, many from outside the state, have used sophisticated techniques to alter information on various data records, ensuring that the money meant for the students went to other accounts instead. In the process, students’ records were also accessed.

Bratya Basu, the state’s education minister, told The Wire that the National Informatics Centre has also been asked to investigate the matter thoroughly. “The government has taken note of the issue where a section of higher secondary students across the state have not received their tablet funds,” he added.

While the full extent of the crime is still being learnt, West Bengal Police have already registered 93 first information reports and arrested 11 individuals in connection with the fraud. The scam has impacted students across multiple districts, particularly in South 24 Parganas, where the highest number of cases have been reported.

‘Inadequate’ security measures

Chief Minister Mamata Banerjee has promised a “refund” to students who have lost money. Addressing reporters last week, she stated, “The group responsible for this scam has been identified. Our administration is rough and tough. A Special Investigation Team has been formed and people have already been arrested. Let the administration do its job.”

A cybercrime expert investigating the case, who requested not to be named, told The Wire that there were clear lapses in the system. “The security measures were inadequate, and there was a lack of oversight. It seems that many schools kept their students’ name lists, bank account numbers and IFSC codes.”

The investigation into the tab scam has revealed a dense network of individuals, including farmers, tea garden workers, lottery ticket sellers, and even home tutors, who operated through cyber cafes. They allegedly exploited their access to school login credentials to change bank account numbers and IFSC codes to divert the funds meant for the students. In some cases, malware and rented bank accounts were used to facilitate the transfers. Investigators have unearthed a network involving cyber cafe owners, contract-based school staff, and also insiders who manipulated government portals to divert the money.

Cyber experts have pointed to glaring vulnerabilities in the scheme’s implementation. “Discrepancies in data linking, outdated computers, and irregular portal maintenance may have facilitated the fraud. While professional hackers might have exploited malware or system loopholes, the pattern here suggests errors in data entry and verification. While the government sends funds directly to recipients’ accounts, the fact that so many are affected indicates systemic issues that require thorough investigation and clarification,” said cyber law expert Rajarshi Roy Chowdhury.

Headmasters blamed

The tab scam initially surfaced with complaints lodged at two police stations in Kolkata. Subsequently, district school inspectors got non-bailable cases registered against the headmasters of several affected schools. However, some of the accused headmasters claim they were falsely implicated for being among the first to report the issue.

“The government is framing the headmasters. We believe the scam occurred due to flaws or weaknesses in the government portal. A criminal gang was altering the IFSC codes uploaded from the schools,” said Sukumar Pain, the secretary of the state teachers’ association, ABTA.

Many of those arrested in connection with the fraud have direct ties to the ruling party Trinamool Congress, including the son of a prominent local leader in Malda, raising uncomfortable questions about insider involvement.

Slamming the state government, BJP Rajya Sabha MP Samik Bhattacharya said, “Rice is being stolen from mid-day meals, lentils are being stolen from ration supplies, river sand and mountain stones are being looted – it’s no surprise that even the funds for tablets are being stolen here. To establish the rule of law in this state, 50 new jails need to be built quickly!”

Translated from the Bengali original by Aparna Bhattacharya.

After Trump’s Win, ‘X-iters’ Turn to Revive Former Twitter Project Bluesky

Among the reasons cited for departing the platform is the continued increase in negative content on the platform.

Users appear to be making yet another exodus from the Elon Musk-owned social media platform X, with other microblogging sites rocketing to the top of app download rankings and courting millions of new users in the week since the US election.

Whether users are permanently leaving X (formerly Twitter) or simply establishing new accounts elsewhere is unclear.

But brands and individuals alike are citing Musk’s substantial financial and rhetorical backing of Donald Trump in the US election, as well as the polarising nature of the X platform, as reasons for their departures.

Bluesky — originally a Twitter project that was spun off into its own company — reported more than 1 million new users in the last week. It now has 15 million users in total.

While it’s a minnow in the social media field, the platform has shot to top spot in Apple’s App Store rankings this week, just ahead of Instagram’s own X competitor Threads.

This is not the first time X has seen a decline in active users. Downturns notably happened after Musk took ownership of Twitter in October 2022 and when Brazil banned the platform this year.

But it looks as though Musk’s support for Trump has proven the final straw for certain account holders.

“This is kind of a tipping moment to some extent,” Bart Cammaerts, a communications and democracy researcher at London School of Economics, told DW.

Cammaerts points to the whittling down of moderation and the ramping up of Musk’s own rhetoric around X’s future direction as long-simmering developments that may have helped to push users away.

“I think the fact that we see now so many people making that move is a combination of approaches that have been ongoing for longer than [the election].”

Who’s leaving X?

On Wednesday, The Guardian newspaper said it would no longer post on X, though would not delete its accounts.

It is not alone in departing or downsizing its X presence. American media companies NPR and PBS stopped posting on the platform last year. The Australian Broadcasting Corporation also downsized its dozens of X offerings to just four: news, sport, Chinese language and “masterbrand” profiles.

More notable have been celebrity exits. US actors Jamie Lee Curtis and Bette Midler have both deleted their X accounts while retaining presences elsewhere. They join previous X-iters like Elton John, Jim Carrey, Whoopi Goldberg and Gigi Hadid who left or stopped posting after Musk’s takeover in 2022.

Other public figures have vocalised their intent to leave X but have yet to delete their profiles. They include prominent media and political names like former CNN news anchor turned YouTube streamer Don Lemon and Democratic congresswoman Alexandria Ocasio-Cortez.

There’s clearly a left-lean to the most vocal personalities to depart the platform.

But brands are leaving too — and from beyond the anglosphere. The Berlin Film Festival and Bundesliga team FC St Pauli are German brands that have announced their exit. Earlier in 2024, more than 50 other not-for-profits announced their departure via the campaign website byebyeelon.de.

Last year, major brands halted advertising on the platform citing a rise in hateful content, drawing a public rebuke from Musk.

Why are they leaving?

Among the reasons users cite for departing is the continued increase in negative content on the platform.

That includes the rise of toxic content, described by The Guardian in its published statement as “the often disturbing content promoted or found on the platform, including far-right conspiracy theories and racism.”

But it might be difficult to pinpoint a single cause for the exit. The newspaper noted its decision had been a long time coming and that its resources could be “better used” elsewhere.

“News companies do not have unlimited resources, audiences do not have unlimited attention, so they might have to make a strategic decision if there is a platform that is associated to a high level of uncertainty when it comes to how the conversations will evolve in the short term,” Silvia Majo-Vazquez, a political communication researcher at Vrije Universiteit Amsterdam, told DW.

“They want to convert the audiences on social media platforms, [so] which audiences are you targeting right now on Twitter [X] with the drop of, also, [X] users?”

“Other platforms are gaining traction, so probably they’ll mobilise their resources to those platforms which provide new [groups] that are more difficult to reach — young audiences — and perhaps provide better environments.”

For individuals, many remark that the “feel” of other microblogging outlets is like the Twitter of old, with fewer bots and more one-to-one interactions.

“If those functionalities can be offered by alternatives and enough people make that switch, it could go quite quickly. We’ve seen that also in the past with other platforms like Myspace, for example,” said Cammaerts.

Politics — and personalities — may keep migrating, discourse may minimise

As much as celebrities, politicians and brands may turn their gaze to new social pastures, upstart platforms are still vulnerable to the same negative interactions and toxic content prevalent on established social media.

“In a way people are going to the lesser of two evils because all these platforms have a business model that in essence is geared towards extraction, towards commodifying your sociality in ways contravening your privacy,” said Cammaerts.

“So, sure, X is the worst and is problematic for a number of political reasons, but it doesn’t mean that these other platforms are necessarily ‘the good’.”

The future direction of public discourse online is difficult to predict, but he believes it is a conversation that needs to start now.

“What do we want our democratic media environment to be? How do we want it to look? And can we, through democratic means, regulate it in such a way that it reaches that [democratic] ideal more than it does today? That can also be a contentious debate.”

It also assumes users will continue seeking “full view” public spaces to engage socially with each other.

Majo-Vazquez predicts that closed groups on private messaging apps will continue to grow, pushing online interactions further away from the global public square Twitter originally aspired to.

“When it comes to social media platforms, the environment is getting more fragmented,” she said.

“The attention that those major platforms were receiving … has been fragmented to many other places. Which winner will come out of this process, we don’t know.”

This article has been republished from DW.

In Publishing Percentage and Not Vote Count, the EC Flouts the Basic Rules of Data Science

Legitimisation of obfuscating the citizen’s view of vital data, coupled with no visible SOP for calculating the numbers that potentially decide the fate of the democratic polity, is deeply disturbing. 

The most fundamental rule of data science is: collect the data at its source and, if that is entirely impossible, collect it as close to the source as possible.

Arguably the most important data that the Election Commission deals with are the votes registered on polling day. 

Electronic Voting Machines, or EVMs, register these votes. EVMs display the count of votes registered. EVMs do not display the percentage of polling. The percentage of polling is a calculated number; in other words, it is processed data. It is against fundamental data-science principles to collect calculated data when unprocessed, raw data is easily available at the source. Using this calculated number leads to fundamental problems like possible ‘vote leakage’. More on this later.

The observers’ handbook of August 2024, in line item 4 on page 37, mentions the following as a duty of the poll observer:

“Register of Voters (Form 17A) must be checked with display of total votes polled on EVM and Observer must sign the visit sheet along with his observation and record the time of his/her visit.”

It is obvious from this that the number of votes polled is available on the EVM at all times on polling day, since there is no mention of when a poll observer should visit the booth on polling day or how many visits the observer should make. 

In my search of the guidelines published on its website by the EC, I was unable to find reference to any directive that mandates the EC to publish the vote percentage and not publish the vote count. 

I was also unable to find any reference to any standard operating procedure (SOP) on how the polling officer must calculate the percentage. 

The webpage for the ‘Voter Turnout’ app on the ECI website clearly mentions that it is a “mobile app to display the approximate voter turnout percentage”. This is an official app launched by the EC, a constitutional authority mandated and entrusted with the responsibility of conducting fair, transparent elections that citizens can trust. The mention of the percentage of voter turnout in a news item by non-constitutional entities is entirely different from a constitutional entity using and disseminating an approximate count as primary data.

Such legitimisation of obfuscating the citizen’s view of vital data, coupled with no visible SOP for calculating the numbers that potentially decide the fate of the democratic polity, is a deeply disturbing revelation. 

Also read: Why the Supreme Court Verdict on EVMs Is Disappointing

As such, I decided to file Right to Information (RTI) requests. To my dismay, the EC RTI portal has become extremely unreliable, flaky and inconsistent compared to a year ago. Over 72 hours, I made several attempts to file RTIs, but only three show up on the portal, although I paid the Rs 10 RTI fee seven times. Thrice, the server reported an ‘Error 500’ after the payment was made; in each case, the RTI was not recorded on the server. The remaining four times, the page just blanked out.

Even worse, for each of the three RTIs that show up on the server, I had to make between five and 15 attempts.  

Following are the questions that I intended to ask the EC through the RTIs. As you will see, each deals directly with the process, the accountability of the ECI to the common Indian citizen, or the ethical conduct of a public office.

  1. Is the count of votes cast available at every booth at all times?
  2. Does the EC have a specific guideline or directive not to enter the vote count on its website every two hours?
  3. Does the EC have a specific guideline or directive not to publish the vote count on its website every two hours?
  4. Can polling officers enter the percentage of votes cast when ENCORE fails to connect to the server?
  5. If polling officers cannot enter the percentage of votes cast at the given time after every two hours, is there any alternate process to update the percentage to the server (the ECI guideline mandates officers to upload the percentage at 9 am, 11 am, 1 pm, 3 pm, 5 pm and 7 pm)?
  6. Does EC software on the web server highlight polling booths that have not entered polling data every two hours?
  7. Does EC software on the web server generate exception reports that list polling booths not complying with guidelines for updating vote data?
  8. Does EC have a review committee that reviews data updation process throughout the polling day?
  9. Does EC have a review committee to escalate non-compliance to the Election Commissioners?
  10. Is it the Election Commissioner’s responsibility to immediately inform the public of non-compliance with data upload guidelines via the EC website?
  11. What is the acceptable time gap between noticing non-compliance and informing the public via the EC website?
  12. Are there any penal actions if such information is not shared within a mandated time?
  13. What are the penal actions when information is not published within the stipulated time by the Election Commissioners?
  14. Does EC have any guideline or method to calculate the percentage of votes cast?
  15. Does EC have any SOP to verify the accuracy of the percentage calculated by every polling officer?
  16. Does EC verify the accuracy of the percentage calculated by every polling officer every two hours when the percentage data is uploaded via the ENCORE app?
  17. Is there any directive or guideline to upload the final vote percentage at 11:59 pm on polling day?
  18. What is the time before which the final voting percentage must be uploaded via the ENCORE app?
  19. What is the time before which the final voting percentage must be publicly visible on the ECI website?
  20. Is the Control Unit (CU) electronically paired with the Ballot Unit (BU) or VVPAT, making it impossible to replace one CU with another after the voting has ended?

Page 41 of the handbook of August 2024 mentions, “The Observers will ensure that RO/DEO and the technical staff assisting them have tested the ENCORE software and are ready for fast transmission of final result to ECI using this software.”

The ENCORE app, described in various other EC documents as the end-to-end app developed in-house by the EC, can clearly upload the data almost instantly. In the digital era where even WhatsApp messages get delivered almost instantly, this should hardly be a surprise.

In such a scenario, it is only fair that there be a very tight time limit for publishing the data after polling has ended.

Also read: Election Commission’s FAQs on EVMs Don’t Really Address Major Design Deficiencies

It is important here to refer to the General Data Protection Regulation (GDPR) guidelines about data, which are accepted worldwide as the most democratic and comprehensive.

Drawing from there, we must conclude: The ownership of the data is with citizens and not with the EC. The EC is a mere custodian of the votes cast. Owners have the right to their data and the custodian is duty bound to service the data immediately and without processing, upon request. 

Calculating the percentage amounts to processing the data. Publishing the count is servicing unprocessed data. The EC therefore must be considered duty bound to publish the vote count and not the vote percentage as a replacement to the vote count. 

Also, being an approximation, the vote percentage may lead to substantive ‘vote leakage’. This means that some votes may not get reflected in the vote percentage when it is a rounded number. For example, suppose a polling station with 10,000 voters has 10 polling booths, each with 1,000 voters. If each booth records 643 votes, each booth will have 64.3% voting. If the percentage is reported by each booth as a rounded figure (64 instead of 64.3), three votes from each booth will go unaccounted for. When extrapolated to a million voters, this leaves serious possibilities of a few thousand votes being transferred or defrauded.
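
To make the arithmetic above concrete, here is a minimal Python sketch of the rounding loss. It is illustrative only: the booth sizes, vote counts and nearest-integer rounding are assumptions for the example, not the EC’s documented procedure.

```python
# Illustrative sketch: 10 booths of 1,000 electors each, 643 votes cast per booth.
# Assumes percentages are rounded to the nearest whole number before reporting.
ELECTORS_PER_BOOTH = 1000
BOOTHS = 10
VOTES_CAST_PER_BOOTH = 643  # raw count registered on each EVM

actual_total = VOTES_CAST_PER_BOOTH * BOOTHS  # 6,430 votes

# Each booth reports a rounded percentage instead of the raw count.
reported_pct = round(VOTES_CAST_PER_BOOTH / ELECTORS_PER_BOOTH * 100)  # 64, not 64.3

# Reconstructing the count from the rounded percentage under-counts the votes.
reconstructed_total = (reported_pct * ELECTORS_PER_BOOTH // 100) * BOOTHS  # 6,400

print(actual_total - reconstructed_total)  # 30 votes unaccounted for across 10 booths
```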

I first asked these questions well over a year ago, and the EC has never answered them. Publishing the raw vote count is a simple, fool-proof way to prevent the most obvious malpractice possible today. Not answering only leaves room for suspicion, and with very good reason.

In spite of the serious loss of credibility of the Indian judiciary – even more so in recent times – as a common citizen, I would like the Supreme Court to take note of these very foundational queries and seek answers from the ECI on our behalf.

In a lively democracy, this should be the duty of a responsible and mature opposition. It is sad that the opposition is as much responsible for the degradation of the Indian electoral system as the party in power. It is sad, too, that not many trust the EC to provide responsible answers to the Indian citizen or to conduct itself in a respectable way.

The Supreme Court should take suo motu cognisance or it should be moved to recognise the violation of people’s right to know the actual data in the form of vote count when only a percentage of voting is published.  

The SC should also be moved to issue an order to the ECI to publish the actual vote count at the end of the polling day so that the count does not change after 24, 48 or 72 hours – or even later. Any change in the vote count is an admission that the EVMs are defective, unreliable and not trustworthy, as electronic votes once cast have no way to either disappear or multiply. This surely must be the minimum intervention the SC can make to protect democracy and citizens’ right to factual information.

If the ECI insists on publishing the percentage, the SC should direct it to publish both the number of votes and the percentage, but in any case, the count of votes must be published on the EC website at the end of the voting day. The SC must ensure that.

Madhav Deshpande is a former CEO of Tulip Software and a former consultant to the Obama administration in the United States. He is one of India’s foremost experts on EVMs.

Former Bureaucrat Urges Caution Over Govt’s Satellite Spectrum Allocation to Elon Musk’s Starlink

“Once allotted satellite spectrum, a foreign player like Starlink can have unlimited access to personal and public data systems in India, with no bar on the company using the same across geographic borders,” E.A.S. Sarma noted.

New Delhi: E.A.S. Sarma, former secretary to the Union government, has raised concerns in an open letter addressed to Neeraj Mittal, secretary of the Department of Telecommunications (DoT), regarding the department’s recent steps toward administratively allotting satellite spectrum to foreign companies, specifically Elon Musk’s Starlink.

In his letter, Sarma highlighted the public interest risks of permitting foreign companies like Starlink to access satellite spectrum without a competitive process, noting the security implications tied to the company’s alleged connections with the US military. According to Sarma, Starlink’s satellite technology, branded as Starshield, possesses advanced capabilities for accommodating diverse payloads which include military-grade radar, infrared missile detection and optical surveillance systems. Given these capabilities, Sarma cautioned that any spectrum allocation to Starlink could risk exposing Indian data systems and sensitive communications infrastructure to foreign surveillance.

“It is important to understand that Starlink is not so much a means of satellite communication, but a time-tested reliable satellite bus technology that can accommodate various payloads as needed, including radars, optical cameras, and infrared (IR) missile launch signaling systems. It is obvious that the Pentagon is interested in getting the most it can out of the functionality provided by Starshield satellites,” he noted, referencing reports about the US defence department’s involvement with SpaceX’s Starlink. “Once allotted satellite spectrum, a foreign player like Starlink can have unlimited access to personal and public data systems in India, with no bar on the company using the same across geographic borders.”

Sarma refers to an earlier letter he wrote on the “illegality involved in the Department of Telecommunications (DOT) administratively allotting strategic satellite spectrum to telecom service providers, especially the public interest implications of allotting it to foreign players.”

He argues that the DoT’s actions bypass the auction-based allocation procedure, which was mandated by the Supreme Court to ensure transparency and fair market value in the allocation of spectrum, a highly valuable national resource. Sarma warned that circumventing this procedure could amount to contempt of court, undermining the transparency that the Supreme Court called for in its landmark 2G spectrum judgment.

Also read: Russia-Ukraine War: Starlink Row Offers a Cautionary Tale on Role of Private Space Industry in Wars

“I am surprised that the DOT should obstinately go ahead allowing Elon Musk’s Starlink to have access to satellite spectrum without going through the apex-court-prescribed transparent auction procedure… It defies all economic logic of discovering the price of a valuable natural resource like spectrum through competitive means,” wrote Sarma.

Union telecom minister Jyotiraditya Scindia announced on Tuesday, November 12, that the decision to launch Starlink would depend on recommendations from the Telecom Regulatory Authority of India (TRAI), which was conducting a consultation process, according to a report in The Hindu.

Sarma’s letter also emphasised that allocating spectrum to foreign players, particularly those tied to foreign powers, could open doors to security vulnerabilities and misuse of data. “It will be prudent for DOT to reserve satellite spectrum for purely strategic purposes that subserve the national interest, such as use by ISRO, the Indian defence forces and CPSEs involved in strategic communications activity for such organisations,” he added.

Sarma appealed to the government to exercise caution and reserve this spectrum for strategic, nation-serving purposes.

Government Agencies Should be Careful in Allowing Free Access to Western AI Tech Companies

What is needed is meaningful consultation with the public and local language expertise available in India.

Marathi at home. Hindi at work. And English online. More than a quarter of Indians speak two or three of the country’s 22 official languages and 250 distinct dialects. But despite more internet users logging on now than ever before, English remains the lingua franca of the internet.

Tech companies and the Government of India want to change that with AI. But to do so, it’ll require a lot more than just a declaration and wishful thinking. It will require meaningful consultation with the public, something tech companies and the Government have often fallen short of doing in the past.

Last month, the government of India launched BharatGen to make generative AI work in Indian languages. Prime Minister Narendra Modi’s call was accompanied by statements from Google CEO Sundar Pichai and Nvidia CEO Jensen Huang, who shared Modi’s enthusiasm for bringing AI to India. India already boasts many homegrown AI initiatives. Efforts to bring AI to India should work alongside local AI projects, and learn from them.

India is a hot market with over a billion potential users, and services have long tried to court them with slipshod products and little to no consultation with users. Take for example, Free Basics. Facebook launched the Free Basics programme in the mid-2010s as a well-meaning effort to bridge the digital divide and expand access to the internet by providing low-cost data access. A closer look into the offerings revealed that recipients of the service only got access to Facebook and a handful of other Western online services such as AccuWeather, BBC News, and ESPN. Users were limited from downloading additional online services onto their devices and they were not able to read the terms of the service offered in any language outside English. It was internet access in name only, with terms dictated by and for Facebook, which many termed a form of “digital colonialism”. Ultimately, the Telecom Regulatory Authority of India (TRAI) banned Free Basics in February 2016 in a ruling based on principles of net neutrality, to ensure that users had equal access to all internet content.

AI companies are on track to make a similar mistake. Already, Western tech companies have boasted about their multilingual systems that work in languages ranging from Assamese to Bengali. Yet, a closer look into these systems reveals that they have neither been built robustly nor tested rigorously. A study of chatbots revealed that they were unable to answer health-related questions in Hindi despite being used in various health-care settings already. On 60 Minutes, Sundar Pichai promised that AI could help people access the world’s information in their own language by affirming that Bard, Google’s AI chatbot, had taught itself Bengali. In reality, the model had been trained in the language but not in any meaningful way. Only 1.4% of the data used to train the model had been bilingual, with only 0.026% of the training material being in Bengali, sourced from Google translations of English text. Computer science experts on Twitter called out the company for perpetuating a notion of AI “hype” rather than investing in the building and testing of services that could benefit Bengali speakers.

Also read: Code Dependence Has a Human Cost and Is Fuelling Technofeudalism

When AI companies do try to build systems that work in multiple languages, they often largely ignore local expertise. Companies that have built large multilingual models have largely opted for an Anglocentric approach creating one model for English, and another for all other languages no matter how distant they are from one another. They also often rely on machine-translated text and underpaid data workers with limited discretion and power to use their expertise, and lack processes or will to adopt local feedback. For example, Microsoft’s chatbot Sydney was initially tested in India with many users reporting disturbing conversations. Yet, the company reportedly ignored the feedback until the New York Times covered the issue.

If Big Tech companies want to better serve Indian users, they can and should start by consulting and collaborating with groups that have the expertise and understanding of the needs of Indian users. Collaboration with local groups can help to bridge the “resourcedness gap”, or the dearth of high-quality training datasets in many Indian languages.

My colleague Gabriel Nicholas and I have written in the past that this dearth of high quality training and testing data systems often impedes companies from building and testing models in non-English languages and incentivises them to cut corners such as taking non-English text from social media sites that are often replete with obscenities or typos. Local groups can provide high quality training data and testing prompts to ensure that models are built in a way that is more language-specific. As my colleagues write in a new brief, by engaging with language experts and local research groups, companies can ensure that communities can both meaningfully contribute to and benefit from NLP tools developed in their languages.

Engaging a diverse group of Indian language AI consortia will also help tech companies better represent the language diversity in the development and testing of AI systems for Indian users. Currently, Western tech companies operate as though other languages work the way English does, or as though Indian users are all the same. However, many languages do not follow the semantic logic that English or even other Indo-European languages do. English, the Romance languages and even some Indo-European languages like Hindi are gendered, with pronouns for subjects and nouns, whereas Bengali, all Uralic and Turkic languages like Hungarian or Turkish, and many others lack gendered pronouns altogether. What’s more, companies assume that most users speak in one language or the other at any given time, whereas linguists (and most Indians) know that we tend to “code-switch” – that is, to mix two languages in the same sentence, as in the case of Hinglish or Tamlish. Few training datasets or chatbots reflect this reality.

By failing to engage with language speakers and local experts, companies can replicate and even scale Anglocentric assumptions. Some claim that this failure to properly validate a model’s language accuracy could have major ramifications for the linguistic diversity and cultural uniqueness of the world, with AI researchers at Cornell claiming specifically that it could lead Indian users to hew more closely to American or Western norms when they write, at the expense of their own style or agency. Regional and language-specific research consortia such as the IndicLLM Suite led by AI4Bharat and the Center for Tamil Natural Language Processing Research are examples of groups that can offer expertise and resources to shape and test multilingual systems in ways that better reflect the Indian context.

Tech companies can also ensure that their systems are fit for purpose by working with domain-specific experts. For building technologies in rural languages or contexts, organisations like Karya are essential. Karya is working with language speakers to record, document, and digitize languages spoken in rural India including Odia, Mundari, and others for domain-specific systems such as call centre support or agriculture data systems. For instances where models need to be tested for fairness or are developed to detect and action harmful conduct, Tattle is working to create high quality training datasets and systems to combat gender-based violence and other forms of harassment in Tamil, Hindi, and Indian English. Their training datasets are developed in consultation with language and subject matter experts such as gender-based organisations and other affected communities who are often at the front lines of new vectors of abuse and can ensure automated models can detect these evolving threats. 

Finally, consulting with local groups will also help companies garner consent from individuals over the collection and use of language data. Local groups and language experts will be more likely to have the consent and consensus of speakers and be able to represent whether a community wants their languages digitized and handed over to Big Tech companies or not. Te Hiku Media in New Zealand, for example, have recorded and digitized Māori, one of the 3000 Indigenous languages that are under threat of extinction, but retain the rights to distribute or license the training data to AI systems to ensure that the benefits of the AI revolution benefit language speakers and not just profit tech companies. Nvidia has helped Te Hiku Media create their own Māori language models through a crowd-sourced labelling campaign and after receiving the consent of elders who have stewarded the language.

Meaningful engagement with local experts will require a paradigm shift on the part of the companies to invest in time and resources to consult with external experts and solicit input into product development roadmaps. But in doing so, companies can gain from locally-made datasets and benchmarks to train and test models. Companies should consider remunerating such experts for their expertise and ensuring local expertise is solicited and incorporated at each stage of the product development lifecycle. Some initiatives such as Meta’s No Language Left Behind or Microsoft’s ELLORA are examples of efforts that fund research that advances the performance of systems in non-English languages. This investment will also make all users safer by ensuring safeguards cannot be circumvented in languages other than English, as they currently can.

It’s in the Government of India’s best interest to play a leading role here. Companies seem encouraged by the prime minister’s tech savviness, and Indians want to be technologically innovative. Prime Minister Modi can leverage that by ensuring that foreign investment by tech companies bolsters local groups working on language access in tech and ultimately benefits Indian languages, rather than endangers them. The Government of India’s relevant agencies such as MeitY should also establish a meaningful consultation process with the public to understand the potential opportunities and risks AI systems, including multilingual ones, pose rather than offering a blank cheque of access to Western tech companies.

Ensuring technologies work in the world’s languages, or even in just Indian languages, is a mighty task. By engaging with language and subject-matter experts, companies can move the needle on actually serving the plethora of Indian users and businesses who seek information about farming and agriculture policy, healthcare, cricket history, and more.

Aliya Bhatia is a policy analyst at the Center for Democracy & Technology and the co-author of Lost in Translation: Large Language Models in Non-English Content Analysis.

Super Chat: How YouTube and YouTubers Are Making Money Out of Hate

While Super Chat is encouraging creators to post inflammatory content, YouTube is not only failing to curb extremism on the platform but is also benefiting from it.

While hate-filled content has been given space on YouTube for a while now, the video-sharing giant’s other features also help in spreading hatred and profiting from it. One such tool is Super Chat.

Super Chat is a feature on YouTube that allows viewers to pay to have their messages highlighted during a live stream.

Four months ago, YouTube creator Ajeet Bharti streamed a video live on YouTube. In it, he claimed that Muslims are conspiring and engaging in ‘love jihad’ against Hindu women. During the live stream, a person named Kumar Saurabh asked through Super Chat, “Can we form a group like the Ranveer Sena to fight against love jihad?”

The Ranveer Sena is known for committing atrocities against Dalits and is notably infamous for massacres of Dalit communities in Bihar during the 1990s.

‘Love jihad’ is a bogey peddled by Hindutva organisations who claim Muslims are engaged in a conversion plan and wish to convert Hindu women through marriage.

Bharti’s video is about a murder in Karnataka that he repeatedly refers to as having arisen from ‘love jihad’. However, the state police, chief minister Siddaramaiah and, later, the state’s crime investigation department have all denied a communal angle to the killing.

Bharti’s live video, which was watched 107,000 times, goes against all of YouTube’s guidelines regarding sensitive, false, violent, and dangerous content. The Super Chat by Kumar Saurabh is also a violation of the violent and dangerous content policy of YouTube.

This Super Chat, which incites violence against Muslims, was purchased by Kumar Saurabh for Rs 40. Seventy percent of this amount (Rs 28) will go to the creator, Bharti, and 30% (Rs 12) will be taken by YouTube, as per the site’s rules. Kumar Saurabh’s was not the only Super Chat Bharti got – he earned approximately Rs 2,100 from Super Chats during this live stream. Another live video, which also violated hate speech and violent content guidelines, earned him up to Rs 14,000 from Super Chats.
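
The split described above is simple to state in code. The following is a minimal Python sketch under the 70/30 division reported here; the function name is hypothetical, and actual payouts may differ after taxes, fees and currency conversion.

```python
def super_chat_split(amount_rs: float) -> tuple[float, float]:
    """Split a Super Chat purchase between the creator and the platform,
    assuming the flat 70/30 revenue share described in the article."""
    creator_share = amount_rs * 0.70   # 70% goes to the creator
    platform_share = amount_rs * 0.30  # 30% is kept by YouTube
    return creator_share, platform_share

# The Rs 40 Super Chat discussed above: Rs 28 to the creator, Rs 12 to YouTube.
print(super_chat_split(40))  # (28.0, 12.0)
```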

Bharti is a well-known figure in Hindutva circles. He has around 639,000 subscribers on YouTube, 447,000 followers on X, and 265,000 followers on Instagram. Earlier he used to work at OpIndia, a Hindutva propaganda website.

An operation similar to Bharti’s is the infamous Sudarshan TV channel, which spreads hatred against Muslims in almost all its videos. YouTube has allowed the channel to exist and make money, and the platform also profits from its content.

Ajeet Bharti’s live stream from April 19, 2024, where the Super Chat from the user Kumar Saurabh can be seen.

Business model

Sudarshan TV’s hateful videos on YouTube run with advertisements from major brands like GoIbibo and Zomato, for which YouTube collects large sums of money and shares a portion with the creators. YouTube’s CEO, Neal Mohan, says that such partnerships with creators are “good for business.”

In January 2017, YouTube launched Super Chat and Super Stickers. Any user can now pay money to make their comment or animated sticker appear in a larger font, with a distinctive colour and in an animated format. The word limit and the duration for which the comment will appear in the live chat depends on the amount paid. In India, Super Chat can be bought for amounts ranging from Rs 40 to Rs 10,000, depending on duration of visibility and length of the comment. The prices are set by YouTube.

Depending on the duration of visibility in the live comment section, Super Stickers range in price from Rs 19 to Rs 10,000. This money gives a user the privilege of posting animated stickers in the live comment section. For instance, a sticker purchased for Rs 19 only changes in colour and size, while a sticker worth Rs 1,000 adds animation and stays up for up to 30 minutes during a live video.

According to YouTube, any content on YouTube must follow the community guidelines. YouTube says that if a Super Chat violates these guidelines, it will be deleted, and the amount will be “donated to charity,” though there is no transparency about what kind of donation this entails.

Kumar Saurabh’s comment violates YouTube’s guidelines. This correspondent reported Kumar Saurabh’s comment twice on YouTube, but the Super Chat has still not been removed.

Super Chat allows ordinary citizens to directly interact with celebrities during a live stream and get publicly noticed by a celebrity creator. A user can make provocative comments to grab the attention of the celebrity, and in the pursuit of money, celebrities may also make inflammatory remarks to engage viewers.

It is thus an easy tool for promoting extremism on religious, ethnic, and gender issues. The quickness of Super Chat gives it immense potential to promote extremist views.

One example of this is Bharti’s live video. “What is the solution to this murderous mentality? This will continue. How and when will it stop?” asks a Rs 100 Super Chat in Hindi from a viewer named Amit Mishra. It appears in a larger size and a different colour from other comments. In response, Bharti makes a sweeping generalisation about Muslim children studying in madrasas, calling them sexual offenders. “No, it won’t stop, Amit ji, this doesn’t stop. Because it’s a mentality. This is not an isolated incident…there was a madrasa in Moradabad, a child from there was caught… he was a small child, 7-8 years old, and he was doing something with a Hindu girl. When asked, he said the Maulvi teaches us to do this – if it’s a Hindu girl, bring her, and do dirty things with her,” Bharti says.

After that, Bharti calls Muslim men “violent” and madrasas “dangerous places”. The next Super Chat, worth about Rs 300, justifies Chinese president Xi Jinping’s actions against Uyghur Muslims in the country. In response, Bharti agrees, calling Xi’s approach the right one for eliminating “extremism.” There is substantial evidence of human rights abuses against Muslims in China.

A study by the Reuters Institute this year stated that around 50% of people in India get their news from social media, and 54% of them rely solely on YouTube. In such a case, YouTube’s accountability matters much more.

According to research reports, YouTube is unable to ensure that videos uploaded to its platform follow community guidelines. And a video can have hundreds of Super Chat comments. If YouTube is unable to enforce rules on a video, it is unlikely to be able to moderate its comments.

For instance, according to a report on the Sudarshan TV YouTube channel, there were about 25 videos violating guidelines. They were reported to YouTube, but no action was taken. Many of these videos, created during the Lok Sabha elections, spread hate and misinformation against the Muslim community, amassing over 3.5 million views.

According to YouTube’s Transparency Report, India has the highest number of videos violating guidelines. Between January and March 2024 alone, more than 26 lakh videos were removed. India has ranked first in this category for the last four years.

Research from Oxford University shows that YouTube and other social media platforms ignore hate speech, disinformation and violent content coming from poorer countries, while enforcing stricter rules in developed countries.

Last year, Monu Manesar, a cow vigilante from Haryana, was accused of kidnapping two Muslim men and burning them alive. Monu was also a YouTube creator who had been posting violent videos for almost six years. In his videos, he would chase vehicles transporting cattle and fire shots at them. Several videos showed the vehicles being chased crashing as they tried to escape. He would photograph himself with the injured men he caught. Monu had nearly 200,000 subscribers on YouTube, and the platform had awarded him a celebratory silver button for crossing 100,000 subscribers. Despite being reported by several Indian news outlets and fact-checking platforms, his channel was not removed. Only after a report by the New York-based news organisation Coda Story in February 2023 did YouTube remove nine of Monu’s videos from his channel.

Screenshot of the YouTube Transparency Report.

According to YouTube’s Transparency Report, between January and March 2024, YouTube removed over 144 crore (1.44 billion) comments worldwide for guideline violations. YouTube claims that 99% of these comments were identified and removed by the platform itself. Of these, 83.9% were spam, fraud or misleading, while hate-filled and offensive comments made up only 1.7% – which still works out to roughly 2.4 crore comments in a single quarter.

Screenshot of the YouTube Transparency Report.

This correspondent reported the hate-filled comments on Bharti’s video to YouTube, but they were not removed. This suggests that YouTube does not reliably identify hate comments on its own, nor does it remove them even after a user reports them. This could be due to YouTube’s technical incapacity, or because moderating hateful and polarising content is not a priority for the platform. A third possibility is that hate and polarisation are profitable for YouTube.

The quarterly Transparency Report does not mention Super Chat at all. YouTube provides no data on how many Super Chats have been removed for guideline violations, or on the sums it has retained from Super Chat buyers whose purchases violated its guidelines. Crucially, it also says nothing about the nature of the charity to which this money is supposedly donated.

This correspondent sent several questions to YouTube regarding Super Chat, but the company did not give a specific answer to any of them. No action has been taken yet on Bharti’s video or Kumar Saurabh’s ‘love jihad’ Super Chat. YouTube says it is reviewing both.

However, it stated, “We have created several tools through which any creator can control live chat in their live video. A regular user can also flag any inappropriate Super Chat, meaning they can report that the Super Chat is inappropriate.”

In addition to the options available to users and creators, YouTube says this about its own accountability as a platform: “If our smart detection system identifies any inappropriate Super Chat, we stop the purchase before it is completed.”

But the violent and hateful Super Chats this correspondent reported were neither identified by YouTube’s detection systems nor removed after being reported. The Wire has also sent questions to Ajeet Bharti but has received no response.

This article first appeared on The Wire Hindi and has been translated by Naushin Rehman and Vipul Kumar.

Code Dependence Has a Human Cost and Is Fuelling Technofeudalism

Madhumita Murgia’s new book alerts us to the fact that artificial intelligence affects above all how we relate to ourselves, to each other, and to our societies.

In 2023, the former Greek finance minister and economist Yanis Varoufakis put forth a controversial thesis: capitalism, as we knew it, had died. In its place, he argued, we could see the rise of a new, perhaps even more dangerous economic form, which he called technofeudalism. The argument he made in his book was simple – cloud capitalists (among whom we can include all the Big Tech companies like Google, Amazon, Apple and Meta) were no longer capitalists in the strict sense, oriented towards generating profit through commodity production. Rather, they were technofeudalists, charging their vassals, who remain engaged in commodity production, cloud rent for the use of their services.

One way to understand this is the case of food delivery apps like Zomato, which take a cut from the restaurants listed on their app. They are charging them ‘cloud rent’ just to be on the platform, and it is their algorithm that decides which restaurants a customer sees at the top of the list and which appear at the bottom, effectively dooming the latter.

Code Dependent: Living in the Shadow of AI, Madhumita Murgia, Picador, 2024

If capitalism was commodity-dependent, technofeudalism is code-dependent. Madhumita Murgia’s new book, Code Dependent: Living in the Shadow of AI, explores the very human cost of this code-dependence.

This difference between commodity-dependence and code-dependence is important for us to understand. Karl Marx had said that in the world of commodities, relations between people take the form of relations between things. This can be understood if we see that commodities embody value. Any commodity that is purchased is a product of labour; in purchasing it we relate ourselves to the producers of that commodity. In buying rice, I am relating myself to the paddy farmers who grew it, the truckers who transported it, the wholesaler who stored it. But these relationships are not directly visible to us; our social relation takes the form of an objective relation between things, between the money in my pocket and the rice in the shiny aisles of the supermarket. Our relations with each other are mediated by commodities and the general equivalent for all commodities which is money.

With code-dependence, relations between people take the fantastic form of relations between data. Value resides no longer in the commodity, but in data. Unlike the commodity, which is a product of labour, data is produced by giving cloud capital access to our thoughts, conversations, preferences, locations, ideas and moods: in essence, our entire life. We produce data actively, by posting on social media, creating content, clicking on ads, or at work. We also produce it passively, when we are listening to music, browsing the web, visiting our doctor, or even just walking around with our phones in our pockets. It is in this data that value inheres, rather than in the lives that produce it. The process of the production of value now expands from labour to our very lives as such. Just as labour becomes invisible and embodied in the commodity, life becomes invisible and embedded as data. If money was the general equivalent for all commodities, then algorithmic code plays an analogous role for the data we produce. In this way the transformation from commodity-dependence to code-dependence is brought about.

Earlier, labour was subject to exploitation and alienated from what it produced: the commodity. That continues, but it is now supplemented, and governed, by the exploitation of life as such, alienated from what it produces: data. What happens when our lives become defined by the data they produce? What happens when our relations with each other are mediated by data, and by the code that functions as its general equivalent? In what ways do our relations with ourselves and our societies begin to be transformed?

Through various interviews and encounters with gig workers, data entry operators, bureaucrats enamoured of AI, social workers contemptuous of it, lawyers fighting back against it, and even a consortium of multi-faith priests, Murgia’s book seeks to lay out what this code-dependence means for us, and how it affects us at every scale of our lives. She comprehensively covers the way it transforms our relationships with our jobs, our bodies, our identities, our governments, our laws and our societies. Murgia’s strength is as a journalist (she is the artificial intelligence editor at the Financial Times), and she quite deftly weaves together stories from the people she meets to paint a grim picture – the woman in England subjected to deepfake porn, underpaid Facebook censors suffering from PTSD, Chinese dissidents fighting back against an omniscient state, and gig workers rebelling against the opacity of algorithms. There are a few positive stories as well, such as the doctor in India’s rural hinterlands using AI to detect tuberculosis in her tribal patients.

Divided into 10 chapters, the book makes an inescapable point – the smooth functioning of code depends heavily on a precarious army of poorly paid and invisible workers toiling away in secrecy and on the margins. When we think of AI as learning how to see, speak and recognise things, we are encouraged to overlook the very real human labour behind it. Most of these workers are from the Global South or from immigrant communities, and they ‘train’ the AI by labelling images. They include people like Ian from the shanties of Nairobi, who tags images for driverless cars so that the AI running them can see better, and Hiba, an Iraqi refugee in Bulgaria, who labels images of simple objects like roads, pedestrians, kitchens and living rooms. There is exploitation built into the code as well. Murgia narrates the story of Armin Samii, who found out that UberEats was paying him and many others less than it should for the distance they travelled to deliver food. Samii, a computer scientist himself, made a freely available app called UberCheats so that drivers could check whether they were being underpaid for their rides. Despite efforts like Samii’s, Big Tech’s algorithms remain resolutely outside public and governmental scrutiny. Governments, in fact, are playing catch-up, buying into the ideology of code-dependence in all departments, from welfare policy to digital policing.
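The check Samii automated is simple to sketch. What follows is a minimal, hypothetical illustration in Python, not the actual UberCheats code (which Samii built as a browser extension over UberEats trip data): it uses the straight-line (haversine) distance between pickup and drop-off as a lower bound on any real route, and flags trips where the platform paid for less than that. The coordinates, paid distance and tolerance below are invented for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def flag_underpaid_trip(pickup, dropoff, paid_km, tolerance=0.10):
    """Flag a trip where the platform paid for less distance than even the
    straight-line distance between the endpoints, which is a lower bound
    on any real route."""
    floor_km = haversine_km(*pickup, *dropoff)
    if paid_km < floor_km * (1 - tolerance):
        return f"possible underpayment: paid for {paid_km:.2f} km, endpoints are {floor_km:.2f} km apart"
    return "paid distance looks plausible"

# Invented trip: the platform credits 1.1 km for a delivery whose
# endpoints are roughly 2.5 km apart in a straight line.
print(flag_underpaid_trip((28.6139, 77.2090), (28.6280, 77.2295), paid_km=1.1))
```

The point of the sketch is that the driver needs nothing from the platform except the trip endpoints and the distance it paid for; the asymmetry lies not in the maths but in who controls the data.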

This means that code-dependence does not just transform our relations with our employers and the cloud capitalists, but our relations with our governments as well.

From predictive policing to facial recognition and ubiquitous surveillance, Murgia narrates how state power starts to malfunction in terrifying ways when it begins to treat citizens as nothing more than data points. Predictive policing creates a nightmare for 14-year-old Damien, the son of immigrants in Amsterdam, when his name is included in the ‘Top400’, an algorithmically decided list of teenagers at risk of becoming criminals in the future. Facial recognition and surveillance turn the entire Uyghur population of Xinjiang into lab rats, monitored for even the slightest change in their facial expressions, their lives literally resembling a video game, but one with real-life consequences. In Argentina, zealous government officials try and fail to ‘solve’ teenage pregnancy among indigenous populations, creating a digital welfare state that views social problems through the lens of ‘objective’ data, without noticing that the production of data is never really objective but colours and reinforces pre-existing biases and prejudices.

More than a technological revolution, Murgia’s book alerts us to the fact that artificial intelligence affects above all how we relate to ourselves, to each other and to our societies. The contours of these transformations are yet to be fully mapped out. What is clear, however, is that when data becomes the embodiment of value and code its general equivalent, social relations, among which one can include class relations, are not just inverted but effaced. Questions about AI ethics, as raised by Murgia in her conclusion, are all well and good. But both the ethical and the technological perspectives on AI effectively erase how code-dependence is essentially a political problem. To recognise this is to recognise that our code-dependence is just another expression of our sometimes messy, often unhealthy, but essentially unavoidable co-dependence on each other.

Huzaifa Omair Siddiqi is assistant professor of English, Ashoka University.

Learning from Jharkhand: Advancing Transparency in Public Distribution System Portals

Jharkhand’s PDS portal is more transparent than those of other states. And transparency is a prerequisite of accountability.

When we met Sunita Devi in July, she was anxious about her ration entitlements. A resident of Surkumi village in Garu, Jharkhand, who receives foodgrains through the public distribution system (PDS), she had applied years ago to add her daughter’s name to her ration card but had not heard back.

The dealer claimed the name had not been added because he had not received the extra foodgrains for another member of her family. However, a quick check on Jharkhand’s PDS portal revealed that her daughter had indeed been added at the start of the year, bringing the card to seven members.

Screenshots provided by authors.

Suspecting corruption, we investigated further. But the portal showed her family was being allocated only 30 kg of foodgrains – 5 kg each for just six members.

While frustrating, having access to this information helped calm Sunita Devi’s worries and empowered her to hold the state accountable by raising a formal grievance.

Screenshots provided by authors.

This kind of exclusion error in a fully digitised system is not uncommon, and transparency is key to resolving it in a citizen-centric way. Since its inception, the PDS has been prone to leakages and pilferage, prompting several evaluations, notably by the Planning Commission (2005) and the Justice Wadhwa Committee (2006). These reports recommended the digitisation of the PDS to increase transparency and reduce corruption.

Consequently, the digitisation of the PDS must ensure accuracy, and offer multiple modes of resolution when errors or exclusions arise from it. In a nutshell, human-centred design and accountability must not be compromised.

We believe transparency is a prerequisite of accountability, and that, with the necessary technology in place, the information generated by key interactions between rights-holders and social protection programmes like the PDS must be placed in the public domain.

Beyond the PDS, ration cards are also imperative for people to access other schemes such as the Indira Gandhi National Widow Pension Scheme, the Ayushman Bharat Pradhan Mantri Jan Arogya Yojana and several national scholarship schemes.

Steps taken by governments to put information in the public domain have enabled civil society organisations and movements to conduct public audits of schemes, which in turn strengthen implementation by identifying gaps.

When it comes to transparency in the PDS, the National Food Security Act, 2013 (NFSA) in its section 12(2)(b) calls for the “application of information and communication technology tools including end-to-end computerisation to ensure transparent recording of transactions at all levels, and to prevent diversion”, whereas section 12(2)(d) calls for the full transparency of records.

Further, section 27 of the Act calls for all PDS records to be placed in the public domain. Additionally, the voluntary disclosure of information is mandatory under section 4(1)(a) of the Right To Information Act. For these purposes, the Department of Public Distribution of the Union government developed a template for a web portal for the PDS.

We found different levels of commitment to transparency after exploring the PDS portals of some highly ranked states in the ‘State Ranking Index for NFSA’ such as Odisha, Uttar Pradesh and Andhra Pradesh.

Odisha, the best-performing state, turned out to have the most opaque portal. Uttar Pradesh offered a little more detail and a better interface, while Andhra Pradesh offered minimal information.

Having worked with Jharkhand’s PDS portal, we realised it is more transparent than the rest and offers crucial information. We therefore decided to compare Jharkhand’s portal with Odisha’s to substantiate the need for transparency.

The two portals of Odisha are https://pdsodisha.gov.in/ (PDS Odisha) and http://www.foododisha.in/index.htm (Food Odisha). The portal of Jharkhand is https://aahar.jharkhand.gov.in/. We focus our attention on two parameters for comparing the portals – the functionality of links and data granularity.


Functionality of links

The Food Odisha website features a transparency portal under “Online Services”, but most links are either non-functional or redundant. The only accessible link under the PDS is “Current stock position”, which redirects to the supply chain management system showing district-wise data on the stock of commodities under the NFSA.

The PDS Odisha website offers various navigation options like “NFSA Cards & Beneficiaries” and “Allotment NFSA”. However, most links are inactive or non-responsive, except for the “NFSA Cards & Beneficiaries” section.

Unlike Odisha, Jharkhand’s portal does not suffer from non-functional links. While there are other minor issues that the Aahar portal faces, such as older data not being available or the delayed display of data, addressing them is beyond the scope of this article.
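Much of this first parameter can be audited automatically. The snippet below is a minimal sketch in Python, using the third-party requests library, of the simplest such check: whether a link responds at all. The three URLs are the portal homepages cited in this article; a fuller audit would also crawl the per-section links behind each navigation option.

```python
import requests

# The portal homepages cited in this article; per-section links behind
# each navigation option would be added here for a fuller audit.
PORTAL_LINKS = [
    "https://aahar.jharkhand.gov.in/",
    "https://pdsodisha.gov.in/",
    "http://www.foododisha.in/index.htm",
]

def check_links(urls, timeout=10):
    """Print whether each URL responds successfully over HTTP."""
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout)
            status = "OK" if resp.ok else f"broken (HTTP {resp.status_code})"
        except requests.RequestException as exc:
            status = f"unreachable ({type(exc).__name__})"
        print(f"{url}: {status}")

if __name__ == "__main__":
    check_links(PORTAL_LINKS)
```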

Granularity of data

Users in Odisha can access basic ration card information, including the number of members, their names, and entitlements, but detailed demographic information, Aadhaar seeding status and mobile number linking are absent.

This is crucial information, because Aadhaar seeding is mandatory for an individual to be able to lift rations through biometric authentication. Since the Union government made e-KYC mandatory, this information has become even more important.

Moreover, ration card details cannot be looked up by ration card number alone; one has to provide district, block and fair price shop information to access them.

Jharkhand’s portal offers more granular data on ration cards and ration distribution. Allocation reports include dealer-level allocation amounts and the time at which the allocated foodgrains were received.

Information on ration cards includes member demographics, Aadhaar seeding status, linked mobile numbers and month-wise transactions.

Finally, details of ration distribution are available for three categories of cards – the Priority Household card, the Antyodaya Anna Yojana card and the Green card (under the Jharkhand State Food Security Scheme).
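To make the difference in granularity concrete, here is a rough sketch, as Python dataclasses, of the record each portal effectively exposes for a ration card, as described above. The field names are our own paraphrase of the visible information, not a schema published by either state.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OdishaCardView:
    """Roughly what the PDS Odisha portal exposes for a ration card."""
    member_names: list[str]          # names only, no demographic detail
    entitlement_kg: float            # household entitlement

@dataclass
class JharkhandCardView:
    """Roughly what Jharkhand's Aahar portal exposes for a ration card."""
    members: list[dict]              # name, age, gender, relation to head
    aadhaar_seeded: dict[str, bool]  # per-member Aadhaar seeding status
    linked_mobile: Optional[str]     # mobile number linked to the card
    transactions: list[dict]         # month-wise lifting, with quantities
```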

Recommendations for a transparent portal

A transparent PDS portal can be built by working on the two parameters discussed above. All existing links must be functional and easily accessible. Once the links are fixed, information must be regularly uploaded to the portal. A range of information from different stages of the supply chain and distribution must be made public.

Direct search by ration card number should be enabled, bypassing demographic details and OTP requirements. Detailed information such as Aadhaar seeding status, mobile number linking and transaction details with timestamps should be displayed.

Dealer information, including status and suspension details, should be made accessible, alongside allocation and distribution data with timestamps to allow the verification of stock movements.
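As a sketch of what the first of these recommendations could look like in practice, here is a hypothetical, minimal portal endpoint in Python using Flask: a card is looked up directly by its number, with no demographic filters or OTP step in between. Every identifier and record below is invented for illustration; no state portal exposes this exact interface.

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# In-memory stand-in for the PDS database; the card number, names and
# records below are all invented for illustration.
CARDS = {
    "202000123456": {
        "members": [
            {"name": "Member One", "aadhaar_seeded": True},
            {"name": "Member Two", "aadhaar_seeded": False},
        ],
        "linked_mobile": "9XXXXXXXXX",
        "transactions": [
            {"month": "2024-07", "quantity_kg": 30,
             "timestamp": "2024-07-05T11:42:00"},
        ],
    },
}

# The recommendation made concrete: a direct lookup by ration card
# number, with no demographic filters or OTP step in between.
@app.route("/card/<card_number>")
def card_details(card_number):
    record = CARDS.get(card_number)
    if record is None:
        abort(404)
    return jsonify(record)

if __name__ == "__main__":
    app.run()
```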

Conclusion

Apart from the importance of accountability underscored in this article, the purpose of having a transparency portal is twofold.

First, making information accessible to the public invites scrutiny and constructive feedback. The quantum of data generated within the PDS across the country can not only empower citizens to claim their entitlements, but also be analysed to further strengthen the system. A comparative analysis of data from states with different levels of efficiency can highlight models that are more effective on the ground.

Second, successful grievance redressal requires greater access to information. When rights-holders understand their rights and the design of a scheme, they are better positioned to raise grievances.

The authors do not mean to suggest anything about the efficiency of the PDS in Odisha or other states; rather, they posit Jharkhand’s transparency portal as a template from which to learn.

The authors are associated with LibTech India.