Whether Russians Like, Comment or Subscribe – Big Brother Is Watching

As Russia’s military loses territory to a Ukrainian counteroffensive, opponents of the war feel emboldened enough to call for Putin’s resignation on social media. But the regime is playing a long game. 

For most of Putin’s authoritarian rule, Russians could enjoy the fruits of globalisation like smartphones and social media and be left well enough alone – as long as they stayed out of politics. Seven months into the brutal war in Ukraine, Russia’s military is losing territory to a surprise Ukrainian counteroffensive, and opponents of the war feel emboldened enough to call for Putin’s resignation. But the regime is playing a long game, and today every Russian is at greater risk of being monitored, tracked, and arrested simply for liking the wrong social media post.

As the invasion was met with near-universal condemnation, sanctions, and unrest at home, the Russian government arrested and detained more than 16,000 people for opposing the war. Then, it focused on expanding its toolbox for stifling domestic dissent. The authorities blacklisted and blocked more than 7,000 websites, banned Meta (aka Facebook) over alleged extremist activities, and fined Telegram about $178,000 for failing to take down content about Ukraine.

Going after popular platforms like Facebook and attacking websites unfavourable to the regime are not new phenomena in Russia. But the war has accelerated the pace of digital surveillance, which in turn enables far-reaching political repressions. While the Russian authorities may not have the technical capacity to forecast “undesirable” social behaviour from a single post, they are definitely working on it – and the world should be paying attention.


In March, Russia’s parliament adopted a series of Bills imposing administrative and criminal liability, with prison terms of up to 15 years, for disseminating fake news, or “fakes,” about the Russian Armed Forces. Of the 236 criminal cases currently open against Russian citizens for opposing the war in Ukraine, 80 are being prosecuted under the “fakes” law.

“The key goal of the law on ‘fakes’ is to ensure that only the official government position about socially significant issues remains in the public discourse,” said Stanislav Seleznev, a lawyer at Net Freedoms Project, a special project of Agora International Human Rights Group.

A joint analysis by Net Freedoms and BBC News Russia found that more than 55% of criminal cases opened under this law target ordinary citizens, as opposed to the Russian journalists, activists, and opposition figures who have long been subject to prosecution. Among the people charged with “publicly and knowingly spreading false information” about the armed forces this time are three pensioners, three police officers, two students, a teacher, a doctor, and a priest.

One of the first to be prosecuted was a grandma from Seversk who criticised the authorities in her Telegram channel. A history teacher from Barnaul had to pay about $500 for reacting to anti-war posts on Odnoklassniki (a Russian social network) with sad emojis. Luckily, he got off with an administrative fine, not a criminal trial.

“It’s a trend with social media, of being arrested for likes. It looks like with the invasion, it’s picking up more in Russia,” Natalia Krapiva, tech legal counsel at Access Now, told Future Tense. “Regular people – not just activists but anybody who says anything mildly against the war or likes something the government doesn’t like – they’re more at risk.”

The bulk of criminal cases under the “fakes” law were opened in March and April, when anti-war sentiments were strong. But that’s changed. Public willingness to openly oppose the war has waned, with only 9% of Russians prepared to attend a protest, down from about 20% six months ago. (It is worth noting that, of course, it is hard to find or collect unbiased polling data in Russia, especially since the war started.) But staying off the streets is not enough to escape targeted surveillance.

In June, Russia’s Ministry of Emergency Situations unveiled plans to spend about $265 million to deploy “Safe City” facial recognition technology in three regions bordering Ukraine. Safe City appeared in Moscow in 2020 with cameras installed in metro and train stations to scan crowds against a database of wanted individuals. (In Moscow, you can even use your face to pay for your ride.) Since the invasion, Access Now has heard reports of people detained in the Moscow metro in connection with their war-related social media posts. The evidence is anecdotal, but it suggests that facial recognition tools are being used to identify and track armchair critics of the regime.


Russian security services have a history of weaponising public safety technology for blatantly political ends, like harassing supporters of imprisoned dissident Alexei Navalny. But security ministries have now gone a step further to tap into a wealth of personal data collected across the country. In 2020, the Ministry of Internal Affairs allocated $3.9 million to integrate regional systems of data collection into a centralised federal database containing biometrics, police records, and other personal data.


The project was scrapped in July over disagreements with the developer, but it shows a desire to create “one honeypot of all the data in one place,” a terrifying prospect, Krapiva says. Such a database means local and federal law enforcement can more easily monitor and harass people based on arbitrary characteristics, like Ukrainian citizenship. Russia also has a notorious reputation for weak security measures protecting personal data, which has resulted in leaks and the sale of that data.

According to a 2022 Net Freedoms report on political profiling, terminating this project is just a temporary setback. Russia has a host of additional surveillance tools. For example, a private company, SEUSLAB, registered a database that tracks social media users who are active during periods of peak protest activity and collects information about their friends, posts, shares, and comments. Security services used this tool as early as 2019 to identify “socially dangerous” content. The report concludes that there is no reason to assume that the Russian authorities have abandoned their plans to create a comprehensive monitoring and profiling system.

That grim prediction is bearing out. Last month, Roskomnadzor, Russia’s internet regulator notorious for its heavy-handed censorship, signed a roughly $886,000 contract with a private firm to develop “Oculus,” a neural network system that analyses images, video, and text across social media and messaging platforms to flag content prohibited under Russian law. Experts are sceptical that $886,000 is enough to get this system up and running by the advertised December deadline.

Krapiva told Future Tense that even before the 2022 invasion, Russia likely didn’t have the technological and financial capacity to implement AI-driven predictive policing. New sanctions and the exodus of Western companies will further hamper Russia’s progress. But Russia’s protracted efforts to beef up its technological arsenal demonstrate a clear commitment to using modern surveillance tools to increase its capacity for targeted repression.

Sophisticated tools, from facial recognition to predictive profiling, are not a silver bullet for ensuring total control over a country’s populace. But, Krapiva says, where Russia fails technologically, it compensates with traditional spying methods like physical surveillance and social engineering. These tactics proved successful in tracking and poisoning Navalny in 2020.

“It doesn’t mean that we should just sit and wait for things not to work, for things to break,” Krapiva says. “If we look at how governments use the kinds of technologies that give them more control, they tend to adopt them, and there is this mission creep. Even in democratic states, let alone in countries like Russia.”

An algorithm that tries to forecast your protest potential from your use of emojis or the types of posts you comment under is unlikely to yield accurate results. But in an authoritarian country like Russia, an omnipotent piece of tech doesn’t need to work perfectly – just well enough to remind Russians that they’re always being watched.

This piece was originally published on Future Tense, a partnership between Slate magazine, Arizona State University, and New America.