As Modi Heads to AI Summit in France, He Needs to Shun Rhetoric and Engage With Critical Issues

There’s a dire need for India’s political leadership to redesign and adopt a more scientifically informed and realistic approach.

Recently, during the ongoing budget session of parliament, Congress leader Rahul Gandhi highlighted the critical role of data in the AI landscape, stating, “People talk about AI, but it’s important to understand that AI on its own is absolutely meaningless, because AI operates on top of data.” He noted that without access to data, AI cannot function effectively. Pointing out a significant challenge for India, he added, “Most of the Indian data is stored abroad,” which compromises the nation’s ability to harness AI’s potential independently.

Gandhi went further, stating that global data control is a major issue, with China owning production-related data and the United States controlling consumption data. “Every single piece of data that comes out of the production system in the world… is owned by China, and the consumption data is owned by the United States,” he said, underscoring the need for India to address these imbalances if it hopes to compete in the global AI race. This statement contrasts sharply with the more rhetorical approach seen in Prime Minister Narendra Modi’s speeches, particularly his ‘Double AI’ remark made the same day.

In this data-driven world, India is entering an era where technological advancement will shape the fabric of society. With youth on the front lines of this transformation, it is essential for the political class to speak clearly, critically and with constructive foresight about the implications of AI. The political discourse on AI cannot remain vague when technological, economic, social and geopolitical implications are so deeply entwined with it.

Realities and Rahul Gandhi

Against this backdrop, Rahul Gandhi’s approach to AI is refreshing and offers an example of what scientific temper in political leadership looks like. He did not merely endorse AI as a tool for progress but engaged with its geopolitical and political-economic dimensions. Gandhi highlighted that AI cannot exist in isolation: it is entirely dependent on data, the essential fuel on which AI systems run.

Gandhi’s reference to the fact that much of the country’s digital data is stored and processed abroad, controlled by foreign tech giants such as Google, Facebook, and Amazon, reflects the geopolitical reality where global powers like the United States and China have consolidated their dominance over AI technologies due to their control over vast amounts of data. In the United States, OpenAI and companies like Microsoft use the data of millions of users to advance AI systems, creating technological monopolies. Similarly, China uses state control over data to fuel its AI ambitions, using data for everything from economic strategy to social surveillance.

At this critical global juncture, Gandhi is correct in not simply treating AI as a tool of progress but in underscoring the structural realities that will influence AI’s development in India. His critical engagement with these geopolitical realities adds value to our political discourse, arguing not only for technological growth but also for India’s autonomy in the AI landscape. In fact, his advocacy for data sovereignty and infrastructure development has the potential to become an important directive for policymakers too. As China and the United States lead the global AI race, India has to move beyond mere rhetoric on AI by taking Rahul Gandhi’s realist criticism seriously.

In the realpolitik of AI, restructuring AI and data policies in the larger context of national sovereignty and data control is certainly the need of the hour. Today, ‘We the people’ of India seek a vision of technological sovereignty, where data sovereignty is prized and human dignity is valued.

Disparities

In his speech, Rahul Gandhi also emphasised the transformative potential of AI in its application to the caste census, calling it an opportunity for a “social revolution” in the country. He said, “Imagine the power of AI when we apply it to the caste census. Imagine what we will do with AI and what we will do with the social revolution in this country when we start to apply AI to the data that we get from the caste census.” This vision underscores his belief that AI has the potential of “revolutionising the participation of Dalits, OBC and Adivasis in the running of this country, in the institutions of this country, in the distribution of wealth of this country” and, on the other side, to “challenge the Chinese and participate in the revolution, defeat the Chinese in electric motors, batteries, solar panels and wind.”

Despite all the talk of “Amritkaal” and Viksit Bharat, social exclusion and discrimination remain deeply entrenched. Among SCs, the share of children attending school drops from 81% in the 6-14 years age group to 60% in the 15-19 age group. The condition of STs is perhaps even more deplorable.

In institutions like IIM Indore, over 97% of faculty positions are held by individuals from the General category, leaving no representation for Scheduled Castes (SC) or Scheduled Tribes (ST). Similarly, IIM Udaipur and IIM Lucknow report over 90% of faculty from the General category. The situation is similarly stark in IITs, with over 90% of faculty at IIT Bombay and IIT Kharagpur belonging to the General category, while IITs in Mandi, Gandhinagar, Kanpur, Guwahati, and Delhi report 80-90% General category faculty. These trends highlight a significant disproportion in faculty composition despite reservation policies. Of course, this is just a part of the pan Indian picture of inequality, excluding the larger set of private institutions.

This disparity reveals that in spite of technological advances, the social inequities that define caste hierarchies and systemic oppression remain pervasive.

Caste and other issues

While India celebrates technological achievements like space missions, there is a glaring neglect of the basic infrastructure needed for a truly scientific outlook, such as proper education and resources for the marginalised. This contradiction is starkly visible in the millions invested in gold-plated religious symbolism while the most basic needs of government schools, including proper infrastructure and qualified teachers, remain unmet.

This paradox highlights a deeper issue in the reach of digital democracy and scientific temper in India. Despite the promise of digital platforms like social media and AI tools such as ChatGPT democratising access to information, the real material and cultural benefits are largely biased towards the socially and economically privileged. For marginalised communities, the rising cost of digital learning resources further compounds their exclusion, leaving them unable to leverage these platforms for upward mobility.

In this light, Rahul Gandhi’s call for a ‘Caste Census’ is an articulation emanating from scientific temper. It recognises AI’s potential to benefit the marginalised masses by measuring and analysing the complex data on the causes of their exclusion, marginalisation and underrepresentation in institutions. It could make benefits and welfare genuinely accessible to these sections of India. “Social revolution” through AI, therefore, is not only about technological advancement but also about addressing the systemic inequalities that undermine the fundamental rights provided in articles 14, 15 and 16 of the Constitution.

Gandhi’s emphasis on data localisation and AI-driven social justice is not just a call for technological independence but for national autonomy, integration and social revolution. This approach is deeply grounded in the reality of India’s diverse population, ensuring that AI policies deliver holistic and inclusive empowerment for all sections of society – particularly the historically marginalised ones. Gandhi’s call for a caste census as a foundation for AI policy can’t be faulted: “If we don’t have a caste census, AI policies will not be able to accurately address the needs of India’s diverse population.”

Infrastructure

Moreover, AI’s application to sectors like education, employment and healthcare could transform them. Gandhi’s critical realist approach – rooted in both scientific temper and social equity – arguably makes him more aligned with the needs of modern India.

On the other hand, Prime Minister Narendra Modi’s speeches on AI often remain at the level of aspirational rhetoric, positioning India as a global leader in AI without engaging with the practical issues that need to be addressed. His recent remarks about “Double AI” – one AI as Artificial Intelligence and the other as an “Aspirational India” – underline the gap between vision and substance. Although his assertions often emphasise optimistic narratives of technological progress, there is a noticeable absence of clear, actionable plans to address issues like data sovereignty, digital infrastructure and AI literacy. Needless to say, there is hardly any constructive idea in favour of a caste census from the PM’s end. Moreover, he criticised Rahul Gandhi’s invocation of it as a mere “fashion” of speaking on caste.

It’s high time the political discourse surrounding AI in India transcended rhetoric and engaged with the critical issues of data sovereignty, infrastructure and social justice. There’s a dire need for India’s political leadership to redesign and adopt a more scientifically informed and realistic approach. Only through such a vision can India reclaim its rightful position in the global AI race. The onus is on political leaders to engage constructively not only with the vision but also with the responsibility to build a future where technology empowers every citizen and strengthens the fabric of the nation.

Vruttant Manwatkar is an Assistant Professor of Political Science, KC College, Mumbai.

This piece was first published on The India Cable – a premium newsletter from The Wire & Galileo Ideas – and has been updated and republished here. To subscribe to The India Cable, click here.

Why Deepseek’s AI Leap Only Puts China in Front for Now

Commentators are right to say DeepSeek’s new AI chatbot is a game-changer but don’t take all the hype about China now dominating the field too seriously.

The knee-jerk reaction to the release of Chinese company DeepSeek’s AI chatbot mistakenly assumes it gives China an enduring lead in artificial intelligence development and misses key ways it could drive demand for AI hardware.

The DeepSeek model was unveiled at the end of January, offering an AI chatbot competitive with o1, the leading model from the US company OpenAI, which drives ChatGPT today.

DeepSeek’s model offered major advances in the way it uses hardware, including using far fewer and less powerful chips than other models, and in its learning efficiency, making it much cheaper to create.

The announcement dominated the international media cycle and commentators frequently suggested that the arrival of DeepSeek would dramatically cut demand for AI chips.

The DeepSeek announcement also triggered a plunge in US tech stocks that wiped nearly AU$1 trillion off the value of leading chipmaker Nvidia.

This dramatic reaction misses four ways DeepSeek’s innovation could actually expand demand for AI hardware:

  • By cutting the resources needed to train a model, more companies will be able to train models for their own needs and avoid paying a premium for access to the big tech models.
  • The big tech companies could combine the more efficient training with larger resources to further improve performance.
  • Researchers will be able to expand the number of experiments they do without needing more resources.
  • OpenAI and other leading model providers could expand their range of models, switching from one generic model — essentially a jack-of-all-trades like we have now — to a variety of more specialised models, for example one optimised for scientists versus another made for writers.

What makes DeepSeek’s model so special?

Researchers around the world have been exploring ways to improve the performance of AI models.

Innovations in the core ideas are widely published, allowing researchers to build on each other’s work.

DeepSeek has brought together and extended a range of ideas, with the key advances in hardware and the way learning works.

DeepSeek uses the hardware more efficiently. When training these large models, so many computers are involved that communication between them can become a bottleneck. Computers sit idle, wasting time while waiting for communication. DeepSeek developed new ways to do calculations and communication at the same time, avoiding downtime.
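To make that idea concrete, here is a toy Python sketch (purely illustrative, with invented function names and timings, not DeepSeek’s actual training code) contrasting a serial loop, where the processor waits for each transfer, with an overlapped loop, where the transfer runs in the background while the next computation proceeds:

```python
# Toy illustration of overlapping computation with communication.
# All names and timings are invented; real training frameworks do this
# with GPU streams and collective operations, not Python threads.
import threading
import time

def compute_layer(i):
    time.sleep(0.1)        # stand-in for the maths done for layer i

def send_gradients(i):
    time.sleep(0.1)        # stand-in for a network transfer between machines

def train_step_serial(layers=4):
    for i in range(layers):
        compute_layer(i)
        send_gradients(i)  # the processor sits idle while this runs

def train_step_overlapped(layers=4):
    pending = None
    for i in range(layers):
        compute_layer(i)   # compute while the previous transfer is in flight
        if pending:
            pending.join()
        pending = threading.Thread(target=send_gradients, args=(i,))
        pending.start()
    if pending:
        pending.join()

for step in (train_step_serial, train_step_overlapped):
    start = time.time()
    step()
    print(f"{step.__name__}: {time.time() - start:.1f}s")
```

Run as written, the overlapped version finishes noticeably faster because the transfers hide behind computation, which is the effect DeepSeek exploits at a much larger scale.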

It has also brought innovation to how learning works. All large language models today have three phases of learning.

First, the language model learns from vast amounts of text, attempting to predict the next word and getting updated if it makes a mistake. It then learns from a much smaller set of specific examples that enables it to communicate with users conversationally. Finally, the language model learns by generating output, being judged, and adjusting in response.

In the last phase, there is no single correct answer in each step of learning. Instead, the model is learning that one output is better or worse than another.

DeepSeek’s method compares a large set of outputs in the last phase of learning, which is effective enough to allow the second and third stages to be much shorter and achieve the same results.
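As a rough sketch of what judging a group of outputs against each other can look like, the short Python example below scores several sampled answers with a stand-in reward function and converts each score into an advantage relative to the group average. The reward, the samples and the scaling are all invented for illustration; this is not DeepSeek’s published training code.

```python
# Illustrative sketch: learn from how each output compares with the rest of
# its group, rather than from a single "correct answer". Purely hypothetical.
import statistics

def reward(output: str) -> float:
    # Stand-in judge: here we simply prefer shorter answers.
    return -float(len(output))

def relative_advantages(outputs):
    scores = [reward(o) for o in outputs]
    mean = statistics.mean(scores)
    spread = statistics.pstdev(scores) or 1.0
    # Positive advantage: better than the group average, so reinforce it.
    # Negative advantage: worse than average, so discourage it.
    return [(o, (s - mean) / spread) for o, s in zip(outputs, scores)]

samples = [
    "a long, rambling answer that repeats itself several times over",
    "a concise answer",
    "an answer",
]
for output, advantage in relative_advantages(samples):
    print(f"{advantage:+.2f}  {output[:40]!r}")
```

In a real system such advantages would feed back into the model’s weights; here they are only printed to show the relative ranking DeepSeek’s approach relies on.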

Combined, these improvements dramatically improve efficiency.

How will DeepSeek’s model drive further AI development?

One option is to train and run any existing AI model using DeepSeek’s efficiency gains to reduce the costs and environmental impacts of the model while still being able to achieve the same results.

We could also use DeepSeek innovations to train better models. That could mean scaling these techniques up to more hardware and longer training, or it could mean making a variety of models, each suited for a specific task or user type.

There is still a lot we don’t know.

DeepSeek’s work is more open than OpenAI’s because it has released its models, yet it’s not truly open source like the non-profit Allen Institute for AI’s OLMo models, which are used in its Playground chatbot.

Critically, we know very little about the data used in training. Microsoft and OpenAI are investigating claims some of their data may have been used to make DeepSeek’s model. We also don’t know who has access to the data that users provide to their website and app.

There are also elements of censorship in the DeepSeek model. For example, it will refuse to discuss free speech in China. The good news is that DeepSeek has published descriptions of its methods so researchers and developers can use the ideas to create new models, with no risk of DeepSeek’s biases transferring.

The DeepSeek development is another significant step along AI’s overall trajectory but it is not a fundamental step-change like the switch to machine learning in the 1990s or the rise of neural networks in the 2010s.

It is unlikely that this will lead to an enduring lead for DeepSeek in AI development.

DeepSeek’s success shows that AI innovation can happen anywhere with a team that is technically sharp and fairly well-funded. Researchers around the world will continue to compete, with the lead moving back and forth between companies.

For consumers, DeepSeek could also be a step towards greater control of their own data and more personalised models.

Recently, Nvidia announced DIGITS, a desktop computer with enough computing power to run large language models.

If the computing power on your desk grows and the scale of models shrinks, users might be able to run a high-performing large language model themselves, eliminating the need for data to even leave the home or office.

And that’s likely to lead to more use of AI, not less.

Dr Jonathan K. Kummerfeld is a senior lecturer in the School of Computer Science at The University of Sydney. He works on natural language processing, with a particular focus on systems for collaboration between people and AI models.

Originally published under Creative Commons by 360info™.

‘Regulating Hate Speech Is Not Censorship’: UN Rights Chief on Meta’s Fact-Checking Move

UN high commissioner for human rights Volker Türk has criticised Meta’s decision to end its fact-checking programme.

New Delhi: Following Meta’s decision to end its fact-checking programme citing ‘excessive censorship’, UN high commissioner for human rights Volker Türk has criticised the move.

“Allowing hate speech and harmful content online has real world consequences. Regulating this content is not censorship,” Türk wrote on X.

In a separate post on LinkedIn, Türk elaborated on the subject. “When we call efforts to create safe online spaces ‘censorship’, we ignore the fact that unregulated space means some people are silenced – in particular those whose voices are often marginalised. At the same time, allowing hatred online limits free expression and may result in real world harms,” he wrote.

Governance in digital space

In his post, Türk said that social media platforms hold immense potential to enhance lives and connect people. However, they have “demonstrated the ability to fuel conflict, incite hatred and threaten safety,” he added.

“At its best, social media is a place where people with divergent views can exchange, if not always agree,” he said.

The UN human rights chief said that he would continue to call for “accountability and governance in the digital space, in line with human rights. This safeguards public discourse, builds trust, and protects the dignity of all.”

A UN spokesperson in Geneva, commenting on Meta’s move, said the global organisation continually monitors and evaluates the online space, UN News reported.

Michele Zaccheo, Chief of TV, Radio and Webcast, said, “It remains crucial for us to be present with fact-based information,” adding that the UN remained committed to providing evidence-based information on social media platforms.

The World Health Organisation (WHO) also reaffirmed its commitment to providing quality, science-based health information, maintaining a presence across various online platforms, the report said.

‘Meta’s move could cause harm’

Meta chief Mark Zuckerberg had announced last week that the social media giant would end its fact-checking programme, starting in the US, claiming it had not successfully addressed misinformation on the company’s platforms, had stifled free speech and had led to widespread censorship.

He said that self-regulation has resulted in “too much censorship”, adding that it was time to return to its “roots around free expression”.

The International Fact-Checking Network (IFCN) has rejected Zuckerberg’s “false” argument and warned the move could cause harm, UN News reported.

In a statement reacting to Meta’s decision, IFCN chief Angie Drobnic Holan said, “Fact-checking journalism has never censored or removed posts; it’s added information and context to controversial claims, and it’s debunked hoax content and conspiracy theories. The fact-checkers used by Meta follow a Code of Principles requiring nonpartisanship and transparency.”

A large body of evidence supports Holan’s position.

In 2023 in Australia alone, Meta displayed warnings on over 9.2 million distinct pieces of content on Facebook (posts, images and videos) and over 510,000 posts on Instagram, including reshares. These warnings were based on articles written by Meta’s third-party, fact-checking partners.

YouTube Remains India’s Most Favoured Video Streaming Platform: Report

According to a report, YouTube remained the largest video streaming platform and earned Rs 14,300 crore in revenue in India.

New Delhi: YouTube was the chief frontrunner among video streaming services in India in 2024, a report on emerging trends in the video industry has revealed.

According to a report by Business Standard, YouTube remained the largest video streaming platform and earned Rs 14,300 crore in revenue in India. It was followed by Meta, JioStar, and Netflix.

Media Partners Asia’s 2025 report on the Asia Pacific video and broadband market revealed major growth drivers and significant shifts in the industry landscape. Streaming will overtake TV by 2027, driven by China and India, with streaming’s share of APAC video industry revenues rising from 44% in 2024 to 54% by 2029, it stated.

According to the Business Standard report, the subscription-driven video on demand (SVoD) market also bounced back in 2024, adding an estimated 15 million subscribers, taking the total number of streaming video subscribers in India to 125 million.

The online video market has been witnessing a rapid surge over the past few years, driven by increased engagement and monetisation. 

India is the largest market for video streaming, driving an industry revenue growth of 26%, followed by China (23%), Japan (15%), Australia (11%), Korea (9%) and Indonesia (5%). Altogether, these key markets account for approximately 90% of incremental video industry revenue growth, the MPA report revealed.

It forecast that India will spearhead growth in the premium video sector, contributing close to half of the sector’s US$5.5 billion in incremental growth across the Asia Pacific.

Meanwhile, given that online video is expected to see robust 16% CAGR growth in India, the decline of traditional television is not far off, although the sector remains material in many parts of the country.

Meta to Roll Back Third-Party Fact-Checkers in Interest of ‘Free Expression’

Mark Zuckerberg claimed that fact-checkers had been “too politically biased” and “destroyed more trust than they’ve created, especially in the US”.


Meta CEO Mark Zuckerberg said Tuesday (January 7) the social media giant was rolling back the use of third-party fact-checkers on its platforms, starting with the US.

Zuckerberg claimed that “fact checkers have just been too politically biased and have destroyed more trust than they’ve created, especially in the US.”

Many conservatives have long described the fact-checking programs as censorship.

Elon Musk’s X platform, which Zuckerberg said the new Meta system aimed to emulate, had also abandoned fact-checkers and replaced them with community notes.

What did Zuckerberg say?

In a video he posted on Facebook, Zuckerberg said Meta was focused on “restoring free expression” on its platforms, which also include Instagram, Threads and WhatsApp.

“It’s time to get back to our roots around free expression. We’re replacing fact-checkers with community notes, simplifying our policies and focusing on reducing mistakes,” another Zuckerberg statement read.

The 40-year-old tycoon said that “recent elections feel like a cultural tipping point toward, once again, prioritising speech.”

He added that Meta platforms would “simplify” their content policies “and get rid of a bunch of restrictions on topics like immigration and gender that are just out of touch with mainstream discourse.”

Meta will also relocate its trust and safety and content moderation teams from California to Texas. Zuckerberg suggested the southern state is a place where “there is less concern about the bias of our teams”.

Meta platforms will also allow users greater control over the amount of political content they see, reversing a 2021 policy to reduce political content across its platforms.

“It feels like we’re in a new era now,” Zuckerberg said. “And we’re starting to get feedback that people want to see this content again.”

Closer ties with Trump

The move comes as Zuckerberg strives to mend ties with President-elect Donald Trump, who had previously accused Meta of supporting liberal policies and being biased against conservatives.

Zuckerberg’s Meta had donated $1 million (€962,310) to Trump’s inauguration fund.

Following the January 6, 2021 Capitol riots, Facebook kicked out Trump, though his account was later restored.

In November, Zuckerberg dined with Trump at his Mar-a-Lago resort, in what was seen as an effort to repair his company’s relationship with the incoming US president.

In his video on Tuesday, Zuckerberg said Meta will “work with President Trump to push back on governments around the world going after American companies and pushing to censor more.”

He cited what he called European laws “institutionalising censorship,” Latin American “secret courts” and China’s wide censorship practices, calling on the US government’s support to push back on “this global trend”.

This article was originally published on DW.

Good Life in the Age of AI

Artificial Intelligence and the World Brain will forever struggle to grasp our inner lives. As Andrei Tarkovsky’s cinematic journey into the Zone – a metaphor for the Unconscious – reminds us, there are depths of human experience that lie beyond the reach of technology.

It is very peculiar. The human attitude towards AI. 

Consider the backlash to the generative AI-powered Spotify Wrapped 2024. Wrapped provides a personality digest to its listeners based on year-end metrics (so published in December). What it offered this time put off most users.

Many users declared on social media that they would leave the app altogether. One wrote on X that “Spotify Wrapped is like my annual mental health report and it’s getting worse by each passing year”.

Meaningless genres like “pink pilates princess” or “indie sleaze catwalk” were doled out as “your favourite” music. Users reacted by calling them cringe. Appeasing users with narcissistic pride about their supposed unique taste backfired as they called out Wrapped for producing slop – shorthand for AI spam.

Not that this was the first time AI or algorithms were used for the purpose by Spotify. But in December 2023, Spotify laid off 1500 workers and replaced them with AI. The backlash by users was related to this increased dependence on AI.

The backlash around Wrapped is very revealing.

We complain about or critique AI for not really getting us, for misrepresenting us, and for being internally biased. It is as though we really want AI to truly get us, forgetting that the “dangers” one highlights about it are really about the possibility that, were it to truly get us, it would actually (and not just supposedly) prove to be dangerous, replace us humans, and so on. There is a circularity in our outbursts and scepticism about AI.

What is forgotten is that in not really getting us right, or not getting who we truly are – that is, in that gap, remainder or flaw – lies the promise, or the hope, for humans. We complain about this gap, but is this not what precludes the “dangers” of AI, or, if you like, the coming Apocalypse where apparently humans will be reduced to slaves not just of the Machine but of the World Brain?

Music often pertains to a very inner experience, memory, and unconscious associations. There is too little scope for “deep machine learning” to really capture your inner life through the metrics generated by your usage data. The failure of AI seems inevitable. 


And yet the training AI receives allows it to get better and better, asymptotically approaching the quintessential human. Ultimately, what the machine wants to capture is that which makes us irreducibly human, the human soul. In the meantime, it has access to difficult terrains of our inner lives. We hear that machine learning is uncovering neural pathways to narcissism. So if you want your narcissistic partner to become more kind and giving, go consult your AI psychiatrist. Also called the Silicon Shrink, it cuts much closer to the inner lives of humans than, say, the AI Doctor collating millions of CT scan and MRI reports.

Tech gurus and entrepreneurs like NVIDIA’s CEO Jensen Huang tell us that presently we are making a massive and historic transition from accumulated data to intelligence. “Data to AI,” that is the great movement and transformation. Data is crucial raw material which produces AI.

The Machine lives off us humans. Remember, The Matrix pursued the idea that our everyday world is the product of a computer-driven digital matrix that feeds on humans. 

So our question – what is life which cannot be captured as ‘data’? Can we lead and practice a life today which is autonomous of the Machine?

Perhaps here we might be asking a version of that age-old question society and its thinkers have always asked: what is a good life? Aristotle said, it is a life of contemplation. The Buddha would say, it is a life without karmic accumulation, and so on. Nietzsche would say it is a life where the conscious merges with our unconscious drives.

We might wonder whether that which is beyond the ken of AI, that which is specifically and incontrovertibly human, is expanding in the world. Or is it shrinking? Rather than focus on AI and its dangers and the policies to regulate it, we might want to reflect or even probe the life we are living. How rich is our inner life, or outer life, to begin with? Is our truly human self itself undergoing a change, already simulating that which we claim to keep at a distance? Is the “enemy” getting too intimate? 

II

Today in “the age of AI,” narratives about the “end times” or the coming Apocalypse are getting a new lease of life. Machines taking over humans and disrupting society is routine talk among policy makers, governments and tech innovators.

Mary Shelley’s imagined Apocalypse of humans producing a Frankensteinian monster continues to be in the realm of the fantastic but one which is becoming more sub-textual in our lives. What follows is endlessly sanctimonious talk about the need to regulate AI, the dangers of digitalisation, deep fakes, cyber-crime, dark crypto, and what not.

I guess we achieve greater clarity if we imagine a world where “it is all over” and we are already in the Apocalypse.

Such an attitude is nothing new. In fact, popular culture has great appetite for the Apocalypse, and consumes dollops of it.

Movies like Dune or Mad Max: Fury Road have stories entirely situated within a post-Apocalyptic world. More to the point is the message that true freedom is on the other side, as in the rather cultish Fight Club: “It’s only after we’ve lost everything that we’re free to do anything.”

The last scene is where the insomniac protagonist, together with Marla, witnesses massive destruction. The permanent erasure of debt seems to be achieved through the explosion of the banks’ buildings, or perhaps the destruction of the world as such. The movie ends and we are supposed to go home with the lingering image of a life of freedom on the other side, where “we are free to do anything”.

We get back to our lives, we work to build the same structures whose destruction is necessary for freedom. Naturally then we must wonder what we can already do in the here and now, as we live our lives in the immense shadow of the Machine.

III

Should we say that today the good life is the life of the schizophrenic?

For one thing, it seems fairly clear that, by all accounts, when “it is all over” (as we heard in Fight Club) and humans are truly free, they live as crazy schizophrenics. That’s how they behave. That’s how they lead their lives. That’s how they relate to each other. 

Interestingly, philosopher Gilles Deleuze actually proposed a schizophrenic life as a kind of a counterpoint to the normalisation of the dominant system and dominant order of life. He declared: “A schizophrenic out for a walk is a better model than a neurotic lying on the analyst’s couch. A breath of fresh air, a relationship with the outside world.” Proposing “the schizo as Homo natura,” he wrote that “the self and the non-self, outside and inside, no longer have any meaning whatsoever.”

What we are looking at is meaninglessness, or the void as the form of life. This implies a rejection of a life which puts a premium on what we can call the plenitude of representation in articulation or even communication. For such a life too easily lends itself to simulation and capture, towards the formation of the simulacra and the spectacle of capital.

Not just that we are a mirror image of the spectacle, but we generate the spectacle in and through our actions. I am already caught up in a situation always prior to myself and yet generated by my actions. It is in disentangling these actions, through say the “talking cure” or “free association”, that we discover our thoughts and the unconscious. This is what Foucault meant when he said that in the modern cogito (as against the Cartesian cogito), thought always implies action.

We must then ask: how does one free action from generating the spectacle? How can you not be the mirror image of the structures you generate daily? What is the life one is leading?


The great visionary Andrei Tarkovsky seems aware of this modern predicament. Think of what he is trying to suggest in his 1979 movie Stalker, in particular the scenes about the travel to the Zone and the Room.

For, as Tarkovsky put it, “the zone is the zone. It is life and does not symbolise anything.” Earth and water, nature are all sentient and responsive, in different colours. The Zone is never the same, or even always new. Neither same, nor new – it is anything you think. What you discover in the Room is who you really are. You are your soul which cannot be known. In knowing yourself, you are no longer the same, hence knowing is not possible, or a contradiction in terms. The knowing is what makes you unknowable. 

It is a no-brainer now to say that here we have a form of life beyond the framework of data and metrics, beyond the algorithm, by far. The inner life which the Wrapped users found missing, ignored or misrepresented is imagined by Tarkovsky in all its luminosity.

Tarkovsky imagines a world where the Freudian unconscious loses its power and is dissolved. He exteriorises the unconscious as a world in which we live, travel, and walk around – in our waking state and not in a dream – but as though in a dream. The inner life is the outer – or as Deleuze said, the distinction between the two has no meaning. That is how lucid your life is – completely unavailable to any data or metric. The most creative gesture would be to taunt AI to come and capture life in the Zone. 

We must get over the endless self-flagellation about the “dangers of AI”. We must fix our lives in the first place. We need more attention on what it is to be human today, so that it is no longer a mirror image of the spectacle.

For now, our inner life will forever frustrate AI and the World Brain, producing a lot of people disgruntled with even the best AI available. But we can and should use AI, and not be Luddites at all. For one, we can transfer our less-than-human work to them.

Do not get me wrong: I love to play with AI. 

I recently enjoyed OpenAI’s Sora, which produced a delightful video for the following text:

In an ornate, historical hall, a massive tidal wave peaks and begins to crash. Two surfers, seizing the moment, skilfully navigate the face of the wave.

Saroj Giri teaches Politics in University of Delhi and is part of the Forum Against Corporatization and Militarisation (FACAM).

US Court Finds Israel’s NSO Group, Which Sells Pegasus Spyware, Liable for WhatsApp Attacks

The judge noted that the NSO Group repeatedly failed to produce “relevant discovery and failed to obey court orders regarding such discovery.”

New Delhi: A US district court has found Israel’s NSO Group – which sells the Pegasus spyware – liable in a 2019 lawsuit brought by the messaging app WhatsApp over breaches of 1,400 devices.

WhatsApp is owned by Mark Zuckerberg’s Meta, which also owns Facebook, Threads and Instagram.

Judge Phyllis Hamilton said that NSO had violated US Computer Fraud and Abuse Act (CFAA) and WhatsApp’s own terms of service.

The judge said:

“Defendants’ [NSO’s] relevant software products, collectively referred to as “Pegasus,” allow defendants’ clients to use a modified version of the Whatsapp application – referred to as the “Whatsapp Installation Server,” or “WIS.” The WIS, among other things, allows defendants’ clients to send “cipher” files with “installation vectors” that ultimately allow the clients to surveil target users. As mentioned above, plaintiffs allege that defendants’ conduct was a violation of the CFAA, the CDAFA [California Comprehensive Computer Data Access and Fraud Act], and a breach of contract.”

The judge also noted that the NSO Group repeatedly failed to produce “relevant discovery and failed to obey court orders regarding such discovery.”

The ruling said that the most significant of such behaviour by NSO was its position that the Pegasus source code should be viewable only by Israeli citizens present in Israel – something the court said was “simply impracticable” for a California lawsuit.

‘Illegal spying’

Will Cathcart, who heads WhatsApp, said on social media channels that this ruling is a “huge win for privacy.”

“We spent five years presenting our case because we firmly believe that spyware companies could not hide behind immunity or avoid accountability for their unlawful actions. Surveillance companies should be on notice that illegal spying will not be tolerated. WhatsApp will never stop working to protect people’s private communication,” he wrote.

The issue of damages will go on trial next year, according to the judgement.

Pegasus and India

In 2021, The Wire was among an international consortium of news outlets which had unveiled the use of Pegasus with the help of a leaked list of potential surveillance targets.

The NSO Group, as this consortium had reported then, says it only offers its spyware to “vetted governments”. NSO Group has said in this US case that it cannot be considered liable because Pegasus was operated by clients investigating crimes and cases of national security. This argument was rejected by the judge.

During the 2021 news investigations, the company had refused to make its list of customers public, but the presence of Pegasus infections in India, and the range of persons that may have been selected for targeting – including opposition politicians, journalists, lawyers and activists – had strongly indicated that the agency operating the spyware on Indian numbers was an official Indian one.

The Supreme Court had in 2021 ordered an inquiry into the findings. A technical committee set up by it found malware in five phones but could not say if it was or was not Pegasus. Notably, the Indian government, when asked, refused to confirm or deny that it had acquired and used Pegasus.

Solving the Renewable Energy Puzzle: The Push for Long-Term Power Storage

As nations push toward 100% renewable energy, challenges like “Dunkelflauten” – periods of low solar and wind power – highlight the need for efficient, long-term energy storage solutions.

When the Sun is blazing and the wind is blowing, Germany’s solar and wind power plants swing into high gear. For nine days in July 2023, renewables produced more than 70% of the electricity generated in the country; there are times when wind turbines even need to be turned off to avoid overloading the grid.

But on other days, clouds mute solar energy down to a flicker and wind turbines languish. For nearly a week in January 2023, renewable energy generation fell to less than 30% of the nation’s total, and gas-, oil- and coal-powered plants revved up to pick up the slack.

Germans call these periods Dunkelflauten, meaning “dark doldrums,” and they can last for a week or longer. They’re a major concern for doldrum-afflicted places like Germany and parts of the United States as nations increasingly push renewable-energy development. Solar and wind combined contribute 40% of overall energy generation in Germany and 15% in the US and, as of December 2024, both countries have goals of becoming 100% clean-energy-powered by 2035.

The challenge: how to avoid blackouts without turning to dependable but planet-warming fossil fuels.

Solving the variability problem of solar and wind energy requires reimagining how to power our world, moving from a grid where fossil fuel plants are turned on and off in step with energy needs to one that converts fluctuating energy sources into a continuous power supply. The solution lies, of course, in storing energy when it’s abundant so it’s available for use during lean times.

But the increasingly popular electricity-storage devices today – lithium-ion batteries – are only cost-effective in bridging daily fluctuations in sun and wind, not multiday doldrums. And a decades-old method that stores electricity by pumping water uphill and recouping the energy when it flows back down through a turbine generator typically works only in mountainous terrain. The more solar and wind plants the world installs to wean grids off fossil fuels, the more urgently it needs mature, cost-effective technologies that can cover many locations and store energy for at least eight hours and up to weeks at a time.

Engineers around the world are busy developing those technologies – from newer kinds of batteries to systems that harness air pressure, spinning wheels, heat or chemicals like hydrogen. It’s unclear what will end up sticking.

“The creative part… is happening now,” says Eric Hittinger, an expert on energy policy and markets at Rochester Institute of Technology who coauthored a 2020 deep dive in the Annual Review of Environment and Resources on the benefits and costs of energy storage systems. “A lot of it is going to get winnowed down as front-runners start to show themselves.”

Finding viable storage solutions will help to shape the overall course of the energy transition in the many countries striving to cut carbon emissions in the coming decades, as well as determine the costs of going renewable – a much-debated issue among experts. Some predictions imply that weaning the grid off fossil fuels will invariably save money, thanks to declining costs of solar panels and wind turbines, but those projections don’t include energy storage costs.

Other experts stress the need to do more than build out new storage, like tweaking humanity’s electricity demand. In general, “we have to be very thoughtful about how we design the grid of the future,” says materials scientist and engineer Shirley Meng of the University of Chicago.

Reinventing the battery

The fastest-growing electricity storage devices today – for grids as well as electric vehicles, phones and laptops – are lithium-ion batteries. Recent years have seen massive installations of these around the globe to help balance electricity supply and demand and, more recently, to offset daily fluctuations in solar and wind. One of the world’s largest battery grid storage facilities, in California’s Monterey County, reached its full capacity in 2023 at a site with a natural-gas-powered plant. It can now store 3,000 megawatt-hours (MWh) and is capable of providing 750 MW – enough to power more than 600,000 homes every hour for up to four hours.

Lithium-ion batteries convert electrical energy into chemical energy by using electricity to fuel chemical reactions at two lithium-containing electrode surfaces, storing and releasing energy. Lithium became the material of choice because it stores a lot of energy relative to its weight. But the batteries have shortcomings, including their fire risk, their need for air-conditioning in hot climates and a finite global supply of lithium.

Importantly, lithium-ion batteries aren’t suitable for long-duration storage, explains Meng. Despite monumental price declines in recent years, they remain costly due to their design and the price of mining and extracting lithium and other metals. The battery cost is above $100 per kWh – meaning that a battery container supplying one MW (enough for about 800 homes) every hour for five hours would cost at least $500,000. Providing electricity for longer would quickly become economically unfeasible, Meng says. “I think four to eight hours is really a sweet spot for balancing cost and performance,” she says.
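The arithmetic behind that figure is simple; the snippet below sketches it in Python, assuming only the article’s rough $100-per-kWh battery price:

```python
# Back-of-the-envelope check of the cost figure quoted above.
# The only input is the article's rough battery price of $100 per kWh.
power_mw = 1           # 1 MW of output, roughly 800 homes per the article
hours = 5              # supply duration
price_per_kwh = 100    # US dollars per kWh of storage capacity

energy_kwh = power_mw * 1000 * hours          # 5,000 kWh of storage needed
cost_usd = energy_kwh * price_per_kwh
print(f"{energy_kwh:,} kWh of storage costs about ${cost_usd:,}")  # -> $500,000
```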

For longer durations, “we want energy storage that costs one tenth of what it does today – or maybe, if we could, one hundredth,” Hittinger says. “If you can’t make it extremely cheap, then you don’t have a product.”

One way of cutting costs is to switch to cheaper ingredients. Several companies in the US, Europe and Asia are working to commercialise sodium-ion batteries that replace lithium with sodium, which is more abundant and cheaper to extract and purify. Different battery architectures are also being developed – such as “redox flow” batteries, in which chemical reactions take place not at electrode surfaces but in two fluid-filled tanks that act as electrodes. With this kind of design, capacity can be enlarged by increasing tank size and electrolyte amount, which is much cheaper than increasing the expensive electrode material of lithium-ion batteries. Redox-flow batteries could supply electricity over days or weeks, Meng says.

US-based company Form Energy, meanwhile, just opened a factory in West Virginia to make “iron-air” batteries. These harness the energy released when iron reacts with air and water to form iron hydroxide – rust, in other words. “Recharging the battery is taking rust and unrusting it,” says William Woodford, Form’s chief technical officer.

Because iron and air are cheap, the batteries are inexpensive. The downside with both iron-air and redox-flow batteries is that they give back up to 60% less energy than is put into them, partly because they gradually discharge with no current applied. Meng thinks both battery types have yet to resolve these issues and prove their reliability and cost-effectiveness. But the efficiency loss of iron-air batteries could be dealt with by making them larger. And since long-duration batteries supply energy at times when solar and wind power is scarce and more costly, “there’s more tolerance for a little bit of loss,” Woodford says.

Spinning wheels and squished air

Other engineers are exploring mechanical storage methods. One device is the flywheel, which employs the same principle that causes a bike wheel to keep spinning once set into motion. Flywheel technology uses electricity to spin large steel discs, and magnetic bearing systems to reduce the friction that causes slowdowns, explains electrical engineering expert Seth Sanders of the University of California, Berkeley. “The energy can be stored for actually a very substantial amount of time,” he says.

Sanders’ company, Amber Kinetics, produces flywheels that can spin for weeks but are most cost-effective when used at least daily. When power is needed, a motor generator turns the movement energy back into electricity. As the wheels can switch quickly from charging to discharging, they’re ideal for covering rapid swings in energy availability, like at sunset or during cloudy periods.

Each flywheel can store 32 kWh of energy, close to the daily electricity demand of an average American household. That’s small for grid applications, but the flywheels are already deployed in many communities, often to balance fluctuations in renewable energy. A municipal utility in Massachusetts, for instance, has installed 16 flywheels next to a solar plant; they supply energy for more than four hours, absorbing electricity during low-demand times and discharging during peak demand, Sanders says.

A different kind of mechanical facility stores electricity by using it to compress air, then stashes the air in caverns. “When the grid needs it, you release that air into an air turbine and it generates electricity again,” explains Jon Norman, president of the Canada-based company Hydrostor, which specialises in compressed-air storage. “It’s just a giant air battery underground.”

Such systems usually require natural caverns, but Hydrostor carves out cavities in hard rock. Compared to batteries or flywheels, these are large infrastructure projects with lengthy permitting and construction processes. But once those hurdles are passed, their capacity can be slowly scaled up by carving the caverns more deeply, at pretty low additional cost, Norman says.

In 2019, Hydrostor launched the first commercial compressed-air storage facility, in Goderich, Ontario, storing around 10 MWh — enough to power some 2,100 homes for more than five hours. The company plans several much larger facilities in California and is building a 200-MW facility in the Australian town Broken Hill that can supply energy for up to eight hours to bridge shortfalls in solar and wind energy.

Storing energy as heat and gas

Around the world, there are efforts afoot to make use of excess renewable electricity by using it to heat up water or other heat-storing materials. This can then provide climate-friendly warmth for buildings or industrial processes, says Katja Esche of the German Energy Storage Association.

Heat can also be used to store energy, though that technology is still being developed. Energy storage and systems expert Zhiwei Ma of Durham University in the United Kingdom recently tested a pumped thermal energy storage system. Here, the main energy-storing process occurs when electricity is used to compress a gas, like argon, to a high pressure, heating it up; electricity is generated when the gas is allowed to expand through a turbine generator. Some experts are skeptical of such thermal storage systems, as they supply up to 60% less electricity than they store – but Ma is optimistic that with more research, such systems could help with daily storage needs.

For even longer-duration storage – over weeks – many experts put their bets on hydrogen gas. Hydrogen exists naturally in the atmosphere but can also be produced using electricity to split water into oxygen and hydrogen. The hydrogen is stored in pressurised tanks and when it reacts with oxygen in a fuel cell or turbine, this generates electricity.

Hydrogen and its derivatives are already being explored as fuel for ships, planes and industrial processes. For long-duration storage, “it looks plausible that that would be the technology of choice,” says energy expert Wolf-Peter Schill of the German Institute for Economic Research who coauthored a 2021 review on the economics of energy storage in the Annual Review of Resource Economics.

The German energy company Enertrag is building a facility that uses hydrogen in both ways. Surplus energy from the company’s 700-MW solar and wind plant near Berlin is used to make hydrogen gas, which is sold to various industries. In the future, about 10% of that hydrogen will be stashed away “as an emergency backup measure” for use during weeks without sun or wind, says mechanical engineer Tobias Bischof-Niemz, who is on Enertrag’s board.

The idea of using hydrogen for electricity storage has many critics. Similar to heat, up to two-thirds of the energy is lost during reconversion into electricity. And storing massive quantities of hydrogen over weeks isn’t cheap, although Enertrag is planning on reducing costs by storing it in natural caverns instead of the customary pressurised steel cylinders.

But Bischof-Niemz argues that these expenses don’t matter much if hydrogen is produced from cheap energy that would otherwise be wasted. And, he adds, hydrogen storage would be used only for Dunkelflauten periods. “Because you only have two or three weeks in the year that are that expensive, it works economically,” he says.

A question of cost

There are many other efforts to develop longer-duration storage methods. Cost is key for all, regardless of how much is paid for by governments or utility companies (the latter typically push such costs onto consumers). All new systems will need to prove that they’re significantly cheaper than lithium-ion batteries, says energy expert Dirk Uwe Sauer of Germany’s RWTH Aachen University. He says he has seen many technologies stall at the demonstration stage because there’s no business case for them.

Developers, for their part, argue that some systems are approaching that of lithium-ion batteries when used to store energy for eight hours or more, and that costs will come down substantially for others when they are manufactured in large volumes. Maybe many technologies could, ultimately, compete with lithium-ion batteries, but getting there, Sauer says, “is extremely difficult.”

The challenge for developers is that the market for long-duration technologies is only beginning to take shape. Many nations, such as the US, are early in their energy transition journey and still lean heavily on fossil fuels. Most regions still have fossil-fuel-powered plants to cover multiday doldrums.

Indeed, Hittinger estimates that the real economic need for long-duration storage will only emerge once solar and wind account for 80% of total power generation. Right now, it can often be cheaper for utilities to build gas plants – fossil fuels, still – to ensure grid reliability.

One important way to make storage technologies more economical is a carbon tax on fossil fuels, says energy systems researcher Anne Liu of Aurora Energy Research. In European countries like Switzerland, utilities are charged up to about $130 per metric ton of carbon emitted. California grid operators, meanwhile, have spurred storage development by requiring utility companies to ensure adequate energy coverage, and helping to cover the cost.

Market incentives can also help. In the Texas energy market, where electricity prices fluctuate a lot, electricity customers are saving hundreds of millions of dollars from the build-out of lithium-ion batteries, despite their costs, as they can store energy when it’s cheap and sell it for a profit when it’s scarce. “Once those power markets have incentive, then the longer-duration batteries will be more viable,” Liu says.

But even when incentives are there, the question remains of who will foot the bill for energy storage, which isn’t considered in many cost projections for transitioning the grid off fossil fuels. “I don’t think there’s been enough time spent studying how much these decarbonisation pathways are going to cost,” says Gabe Murtaugh, director of markets and technology at the nonprofit Long Duration Energy Storage Council.

Without interventions, Murtaugh estimates, California customers, for instance, could eventually see a threefold increase in utility bills. “Thinking about how states and federal governments might help pay for some of this,” Murtaugh says, “is going to be really important.”

Saving costs and resources

Cost considerations are prompting experts to also think of ways to reduce the need for storage. One way to strengthen the grid is building more consistently available forms of renewable energy, such as geothermal technologies that draw energy from the Earth’s heat. Another is to connect the grid over larger regions — such as across the US or Europe — to balance local fluctuations in solar and wind. Ensuring that storage technologies are as long-lived as possible can help to save costs and resources.

So can being smarter about when we draw electricity from the grid, says Seth Mullendore, president of the Vermont-based nonprofit Clean Energy Group. What if, rather than charging electric cars when getting home from work, we charged them at midday when the Sun is blazing? What if we adjusted building heating and cooling so the bulk would happen during windy periods?

Mullendore’s nonprofit recently helped to design a program in Massachusetts where electricity customers could sign up to get paid if they responded to signals from their utilities to use less energy – for instance, by turning their air-conditioning down or delaying electric car charging. In a smart grid of the future, such tweaks could be more widespread and fully automatic, while allowing consumers to override them if needed. Governments could encourage programs by rewarding utility companies for designing grids more efficiently, Mullendore says. “It’s much less expensive to have people not use energy than it is to build more infrastructure to deliver more energy.”

It will take careful thought and a worldwide push by engineers, companies and policymakers to adapt the global grid to a solar- and wind-powered future. Tomorrow’s grids may be studded with lithium-ion or sodium-ion batteries for short-term energy needs and newer varieties for longer-term storage. There may be many more flywheels, while underground caverns may be stuffed with compressed air or hydrogen to survive the dreaded Dunkelflauten. Grids may have smart, built-in ways of adjusting demand and making the very most of excess energy, rather than wasting it.

“The grid,” Meng says, “is probably the most complicated machine ever being built.”

This article was originally published on Knowable Magazine.

US Prosecutors Pursue Divestment of Google Chrome; Trump Win Puts Question Mark on Trial

The new proposal marks the most significant government effort to curb the power of a technology company in two decades.

US prosecutors asked a judge on Wednesday (November 20) to force Alphabet’s Google to divest its Chrome browser, share data and search results with competitors and take other steps to end its internet search monopoly.

A court filing showed that the Department of Justice urged that Google be banned from becoming the default search engine on smartphones, a step aimed at preventing the US tech giant from exploiting its Android mobile operating system.

The world’s most popular web browser, Chrome provides user information to Google that helps the company profitably personalise which ads users see. Google controls about 90% of the online search market with over 60% of users relying on Google Chrome to perform those searches.

The sale of Chrome would “permanently stop Google’s control of this critical search access point and allow rival search engines the ability to access the browser that for many users is a gateway to the internet,” the justice department said.

Changes in store for Google?

If US courts take the justice department’s advice and force Google to sell off Chrome, it would be a significant blow to the company’s revenue.

Google has previously called the idea of a breakup “radical”. The company is expected to submit its proposals for business practice changes in a court filing next month.

Adam Kovacevich, chief executive of the Chamber of Progress, an industry trade group, told the AFP news agency that government demands were “fantastical”, adding that less intrusive measures were better suited to the case.

Google – the monopolist

In August this year, internet behemoth Alphabet lost the biggest antitrust challenge it has ever faced when a US judge ruled that its Google subsidiary held a monopoly in the online search market.

US federal court Judge Amit Mehta ruled that $26.3 billion in payments Google made to other companies to make its internet search engine the default option on smartphones and web browsers effectively blocked any other competitor from succeeding in the market.

The new proposal marks the most significant government effort to curb the power of a technology company since the justice department unsuccessfully attempted to break up Microsoft two decades ago.

A trial on these proposals is set to begin in April of 2025, and Judge Mehta aims to deliver a verdict before September. However, as President Joe Biden hands over the reins to President-elect Donald Trump, the new Department of Justice officials could alter the course of the case.

This article was originally published on DW.

The Evolving Threat of Cybercrime

Social engineering, deep fakes, ransomware, zero-day exploits and supply chain attacks are emerging as new forms of cybercrime.

They are going after our drinking water systems.

Earlier this year, the US government warned state governors about foreign hackers carrying out disruptive cyberattacks against water and sewage systems.

Increasing digitisation of our lives has meant we are more vulnerable to cybercrime than ever.

And the role of cybersecurity has become crucial.

Social engineering, deep fakes, ransomware, zero-day exploits and supply chain attacks are emerging as new forms of cybercrime.

Social engineering includes a range of malicious activities where cybercriminals psychologically manipulate users and trick them into making security mistakes or giving away sensitive information.

In September, a prominent Indian businessman was swindled by cyber criminals who made him pay $830,000, after summoning him to a fake court hearing and threatening jail time for a crime he had not committed.

A zero-day exploit is a cyberattack vector that takes advantage of an unknown or unaddressed security flaw in computer software, hardware or firmware.

“Zero day” refers to the fact that the vendor has had zero days to fix the flaw, because malicious actors can already use it to access vulnerable systems.

According to a joint cybersecurity advisory from US, Australian, British and New Zealand government agencies, zero-day vulnerabilities in enterprise networks were the main targets of malicious cyber actors in 2023.

Network defenders have been warned that the attackers may continue to exploit such vulnerabilities until 2025.

From Russian cyberspies conducting an espionage campaign against Mongolian government websites, to hackers breaking into the US presidential campaign, to a faulty software update disrupting airline and hospital operations, the need for sound cybersecurity investments has never been higher.

Hacking away at new technology

The evolution of cybersecurity as an educational and professional discipline can be traced back to the emergence of computers and the first cases of cybercrime in the 1990s and early 2000s.

The digital revolution meant that cybercriminals learnt to use technology to engineer new ways to cheat people and steal data from organisations.

For instance, in 1994, British hackers known as “Datastream Cowboy” and “Kuji” attacked Rome Laboratory’s computer systems more than 150 times.

Rome Laboratory is the US Air Force’s premier command and control research facility.

During the attacks, the hackers stole sensitive data on air tasking order research.

Air tasking orders are messages military commanders send to pilots during wartime.

The orders provide information on air battle tactics, such as where the enemy is located and what targets are to be attacked.

Through the 1990s, banks were robbed, credit card information was stolen and misused, and government networks were broken into.

Later, as technology evolved, so did the criminals.

From simple viruses to sophisticated malware, from direct attacks to phishing and social engineering, and from small financial frauds to large-scale data breaches – cybercrime came into its own.

Now, cybercriminals are finding new ways to exploit vulnerabilities using advanced tools such as AI and automation, and they continue to target critical infrastructure.

Recent security breaches, such as AI-based phishing attacks in 2023 and the 2018 hacking of Facebook user data, show how criminals have evolved and adapted to the latest technology.

But security systems have been evolving too.

The increased transfer of information over the web, particularly sensitive information, has driven new technologies in encryption, firewalls and other mechanisms to ensure adequate security and maintain trust in online transactions.

For instance, the Zero Trust security model establishes trust through continuous authentication and monitoring of every network access attempt.

This differs from the traditional model, which assumed that everything inside a corporate network could be trusted.
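
To make the contrast concrete, here is a minimal sketch of the Zero Trust idea – not any particular vendor’s product – in which every request is re-authenticated and checked against an explicit policy, and the source network address is never used to grant trust; the user names, tokens and policy table are invented for the example:

```python
# Minimal Zero Trust sketch: no request is trusted by default, even from an
# "internal" address. Every access attempt is authenticated and checked
# against an explicit policy. Credentials and policy are invented examples.

import hmac

VALID_TOKENS = {"alice": "s3cr3t-token"}   # hypothetical credential store
POLICY = {
    ("alice", "/reports"): True,           # alice may read reports
    ("alice", "/payroll"): False,          # but not payroll
}


def authenticate(user: str, token: str) -> bool:
    """Re-verify credentials on every request; never cache a 'trusted' flag."""
    expected = VALID_TOKENS.get(user)
    return expected is not None and hmac.compare_digest(expected, token)


def authorise(user: str, resource: str) -> bool:
    """Consult an explicit per-request policy instead of trusting the network zone."""
    return POLICY.get((user, resource), False)


def handle_request(user: str, token: str, resource: str, source_ip: str) -> str:
    # source_ip is logged but never used to grant access - that is the key
    # difference from a traditional perimeter model.
    if not authenticate(user, token):
        return f"403 denied ({source_ip}): bad credentials"
    if not authorise(user, resource):
        return f"403 denied ({source_ip}): policy forbids {resource}"
    return f"200 ok ({source_ip}): {user} may access {resource}"


if __name__ == "__main__":
    print(handle_request("alice", "s3cr3t-token", "/reports", "10.0.0.5"))
    print(handle_request("alice", "s3cr3t-token", "/payroll", "10.0.0.5"))
    print(handle_request("alice", "wrong-token", "/reports", "10.0.0.5"))
```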

Other traditional security management measures include demilitarised zones, access control and intrusion detection.

A demilitarised zone or DMZ is a physical or logical subnet that separates a local area network (LAN) from other untrusted networks – usually, the public internet.
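
As a rough illustration, the effect of a DMZ can be thought of as a short table of permitted traffic flows between zones; the zone names, ports and rules below are invented for the example, and real firewalls express this in their own rule languages:

```python
# Rough illustration of DMZ segmentation as a table of permitted flows.
# Zone names, ports and rules are invented for the example.

ALLOWED_FLOWS = {
    # (source zone, destination zone): permitted destination ports
    ("internet", "dmz"): {80, 443},   # public web traffic may reach DMZ servers
    ("dmz", "lan"): {5432},           # a DMZ web server may query an internal database
    ("lan", "internet"): {80, 443},   # internal users may browse out
    # no ("internet", "lan") entry: the public internet never reaches the LAN directly
}


def is_allowed(src_zone: str, dst_zone: str, dst_port: int) -> bool:
    """Permit a flow only if an explicit rule allows it."""
    return dst_port in ALLOWED_FLOWS.get((src_zone, dst_zone), set())


if __name__ == "__main__":
    print(is_allowed("internet", "dmz", 443))   # True: public web traffic
    print(is_allowed("internet", "lan", 443))   # False: blocked by default
    print(is_allowed("dmz", "lan", 5432))       # True: controlled internal access
```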

New security measures such as web application firewalls with advanced threat protection have also been designed to detect and protect against common security flaws in web traffic.

These are essential for online businesses such as retailers, banks, healthcare and social media, which need to protect sensitive data.

To ensure cybersecurity, all organisations need to put in place policies and procedures based on best practices that go beyond defending specific systems and networks.


Global unity to fight cybercrime

Over the years, cybersecurity has evolved into a crosscutting discipline spanning computer science, information technology, law, psychology and risk management.

It is essential to protect not just data, systems and networks, but also physical infrastructure such as industrial control systems and healthcare facilities, along with essential services such as energy grids, ATMs, payment processing systems, banks and cryptocurrency platforms.

Governments across the globe have identified cybersecurity as a key area of concern, and put into place laws to protect information.

Some examples include the Cybersecurity Information Sharing Act and Health Insurance Portability and Accountability Act in the US, and the General Data Protection Regulation in the EU.

In India, the Digital Personal Data Protection Act was enacted in 2023, though the rules needed to bring it fully into force are still being framed.

Enterprises such as Microsoft, Google, Walmart, and Amazon have made significant investments in cybersecurity, including using advanced AI and machine learning for real-time threat detection.

Google’s Chronicle platform, for instance, offers a security analytics tool.

In August, the United Nations also finalised a new cybercrime treaty, which seeks to improve international collaboration in combating cybercrime.

It outlines measures for countries to collect and share data on suspects, ease the extradition of cybercriminals, and confiscate crime-related proceeds.

But the cybersecurity environment is dynamic.

New threats such as botnets, and cloud security risks such as API vulnerabilities, require constant vigilance and innovation to stay ahead of cybercriminals.

With newer measures such as advanced anti-phishing technologies and threat intelligence to counter zero-day vulnerabilities, cybersecurity professionals can build a strong protective shield spanning governments, businesses and individuals.

Abhishek Jain is an Assistant Professor in the School of Engineering and Technology at BML Munjal University. He is a licensed cybersecurity auditor and has audited banks, healthcare institutions, oil and gas companies and IT firms.