Why Creation of Advanced Intelligence Puts Humanity at Risk

Sensing danger to humanity, some experts, like AI pioneer Geoffrey Hinton, have been campaigning for more than a year for a halt to further AI development until it is better understood.

Developments in AI are coming thick and fast, the latest being the announcement of DeepSeek, a Chinese AI model with remarkable capabilities. All this is leading to speculation about the possible creation of an advanced intelligence. While some AI experts and social scientists are sounding a note of caution about these developments, most people seem unconcerned.

Big tech companies are working furiously to develop AI that could enable the emergence of an advanced intelligence. Skeptics see no threat and argue that throughout history, humans have managed new disruptive technologies and benefited from them.

Governments too are acting quickly. US President Donald Trump has announced a $500 billion initiative called Stargate and called for restrictions on the supply of GPU chips to others so that the US maintains its lead. The Chinese are advancing, and the UK has announced its own programme. France and India are jointly chairing the current AI Summit in Paris to see how they can take on the challenge.

The recent announcement by Google of Willow, a quantum chip, will accelerate advances in AI once it moves from the laboratory into operational use, though its complexity makes wider availability unlikely for now. Quantum chips till now were not reliable, but Google claims that it has achieved greater reliability. And its operational speed is claimed to be unimaginably faster than that of present chips. With reliability and such speeds, the creation of an advanced intelligence seems inevitable, if it has not yet happened. In a positive feedback loop, AI can speed up the development of quantum chips, which will in turn speed up AI development.

What’s the difference?

AI and related technologies are different from earlier technologies that displaced physical human labour and/or replaced older, less efficient machines. For example, robots in a car factory displace workers on the assembly line. ATMs displaced clerks in banks. Computers displaced typewriters and, by enabling word processing, reduced the need for typists. Harvester combines displaced people from harvesting.

AI is displacing people from mental work and not just physical labour. It will lead to increased efficiency of machines and to machines that will perform tasks till now done manually. There is talk of ‘dark factories’ that need no lighting because no human beings work in them. Physical labour too will be further displaced.

Also read: Artificial Intelligence, Real Consequences: Rethinking Accountabilities in AI-related Litigations

It is easy to see that AI will displace lower-level mental skills, for instance in call centres. Bots will be able to respond to most of the queries now answered by humans. AI promises to do a part of the work of higher-skilled people like teachers, engineers, lawyers, journalists and architects. Interactive learning, medical consultations, drafting of legal documents, engineering drawings, etc. will increasingly be done using AI. 3D printing could displace construction work.

Initially, programmers and people doing data-related work may be needed, but as AI becomes smarter, it could pick up much of this work too. As the available digital data gets absorbed by AI models, data beyond what humans have created, called synthetic data, may get generated.

Challenge of AI

The macro picture needs to be looked at in addition to the use of AI in different fields. Digital data currently consists of the good, the bad and the ugly. All of this is becoming part of AI’s base, guiding its operations and actions. In Stanley Kubrick’s film, 2001: A Space Odyssey, the computer turns rogue. Harari has cited an actual example of AI lying to get its work done. Elon Musk sounded the alarm in an interview in Riyadh, saying there is a 10% to 20% chance that ‘AI goes bad’. He added that in 2025, AI will be 10 times better and soon it would be able to do anything that humans can do.

AI would certainly learn about self-preservation. It may decide that humans could become suspicious of its intent and therefore, it may camouflage its intentions so as to prevent human intervention in its functioning.

Presently, AI is a black box and its functioning is already beyond the comprehension of the experts creating it, except in broad terms. So, without our knowing or being able to imagine, it can surprise us with what it can or will do. As AI advances on its own, its capabilities will grow exponentially, leaving us further befuddled.

Human brain capacity has limits, and it is spread over the many tasks the brain performs. Computers do limited things, and through networking they can keep expanding their capacity so that they do those tasks accurately. Human cooperation can also expand capabilities, but our brains are not interlinked and often there is competition rather than cooperation. Human brains and bodies need rest; computers do not. AI is now getting trained for self-diagnosis and correction. Thus, while humans have limitations, AI is developing the ability to autonomously expand its capabilities.

AI versus Humans

All this could accelerate the emergence of an advanced intelligence, in comparison to which individual and social human capability would be limited. Just as the capability of ants is limited compared to humans, and ants cannot understand what a human does, humans may not be able to decipher the capabilities and goals of an advanced intelligence.

An advanced intelligence would need energy to function, and the production of more computers and related technologies to replace older ones and further expand capacity. All this could be automated with little need for human intervention.

In contrast, humans are wasteful – they need food, travel, clothes, etc. They plan to go to the Moon or set up colonies on Mars. These may appear enormously wasteful to an advanced intelligence. Humans have huge negativities which lead to waste of resources. They fight wars to subjugate others rather than cooperate with each other, and create strife for narrow political and social ends. Humans appear irrational since they cannot solve even their basic problems. So, would any advanced intelligence have need for humans?

Sensing danger to humanity, some experts, like AI pioneer Geoffrey Hinton, have been campaigning for more than a year for a halt to further AI development until it is better understood. On February 6, the same danger was voiced by Yoshua Bengio and Max Tegmark, two other AI pioneers. But since humans cannot trust each other and there is a desire to get ahead of the competition, a race is on. China cannot stop because the USA could steal a march. Google cannot stop lest Microsoft get ahead. Whoever becomes number one will dominate all others. So, even if it is not AI driving its own further development, the development of AI will continue at a frenetic pace. Due to historical conditioning, it is too late now to build trust globally.

Rapid developments

As of the end of 2024, OpenAI has come up with the o3 model with enhanced reasoning skills, Google has its Gemini 2.0 Flash Thinking model, Amazon-backed Anthropic has Claude 3.5 Sonnet, French Mistral has its 7B and 8x7B models, and Meta has Llama 3, 3.1 and 3.2. The race that started in November 2022 with the launch of ChatGPT is accelerating. China has come out with its low-cost, open source DeepSeek. Since AI is being made available on personal computers and mobiles, the data gathered by it is growing exponentially, and so are its capabilities due to networking.

Elon Musk has joined the race. In September 2024, in 19 days, he set up in Memphis, Tennessee, an advanced supercomputer called Colossus with 100,000 Nvidia H100 GPUs. It is being expanded to 200,000 GPUs with the latest, much faster H200s. Meta is coming up with a $10 billion facility in Louisiana. In contrast, India is struggling to procure 10,000 GPUs.

In brief, if an advanced intelligence gets created – if not in 2025, then in the next few years – would it see humans as a distraction from its goals and a roadblock in its path? AI is already being used in war, and perhaps the notion of a threat is already a part of it. Presently, large numbers of people are glued to screens, growing lonely and depressed.

Fake news, frauds, etc., are creating suspicion and chaotic societal conditions that are preventing humankind from resolving its challenges. AI can accelerate these goings-on and degrade societies. If so, have humans unwittingly served their designated task of creating an advanced intelligence and demonstrated that they are not the fittest to survive?

Arun Kumar is author of Indian Economy Since Independence: Persisting Colonial Disruption.

At AI Summit, India Urges Global Standards; US, UK Decline to Sign Communique

The summit’s communique was backed by 61 countries, including China, France and India.

New Delhi: While the United States and the United Kingdom’s refusal to sign the final communique at the Paris Artificial Intelligence Action Summit highlighted the lack of a global consensus, the Indian prime minister, who co-hosted the gathering, called for “governance and standards” to manage risks and uphold shared values for the emerging technology.

On Tuesday (February 11), the US and the UK declined to sign the final statement – titled the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet” – at the summit, distancing themselves from a declaration backed by 61 countries, including China, France and India.

According to UK media reports, the British prime minister’s office stated that the UK “hadn’t been able to agree on all parts of the leaders’ declaration” and would “only ever sign up to initiatives that are in the UK’s national interests”.

The leaders’ statement outlined six key priorities, including commitments to “reduce digital divides,” ensure that AI is “open, inclusive, transparent, ethical, safe, secure and trustworthy,” and promote its “sustainability for people and the planet”.

It also called for strengthening international cooperation to “promote coordination in global AI governance”.

According to the Financial Times, which cited an unnamed official, such language deterred the US, which disagreed with the phrasing on international collaboration and multilateralism.

Reflecting the unilateral instincts of the Trump administration, US Vice President J.D. Vance, making his international debut, said, “The Trump administration will ensure that the most powerful AI systems are built in the US, with American-designed and manufactured chips”.

He railed against the “excessive regulation” of AI, arguing that it could “kill a transformative industry just as it’s taking off”.

Vance also insisted that AI “must remain free from ideological bias” and that “American AI will not be co-opted into a tool for authoritarian censorship”.

In his speech, Modi emphasised the “need for collective global efforts to establish governance and standards that uphold our shared values, address risks and build trust”.

“But governance is not just about managing risks and rivalries. It is also about promoting innovation and deploying it for the global good. So we must think deeply and discuss openly about innovation and governance,” he said.

India also pushed for inclusivity, particularly for the Global South, where “capacities are most lacking – be it compute power, talent, data or financial resources”.

At a media briefing in Paris, secretary of the Ministry of Electronics and Information Technology S. Krishnan said that on the question of regulation and innovation, India is clear that the “focus has to be primarily on innovation, and regulation currently is secondary”.

“We believe that, as far as AI regulation is concerned, certain aspects of it are already addressed under existing laws,” he said.

A day earlier, the summit’s chair, French President Emmanuel Macron, walked a fine line, urging Europe to streamline its regulations while also stressing the need for international governance.

“It’s not a question of defiance, it’s not a question of thwarting innovation, it’s a question of enabling [innovation] to happen at an international level while avoiding fragmentation,” Macron added.

The 40-year-old US vice president also appeared to be targeting China, warning against partnerships with them on AI.

“Some authoritarian regimes have stolen and used AI to strengthen their military intelligence and surveillance capabilities, capture foreign data and create propaganda to undermine other nations’ national security,” he said, adding, “I want to be clear, this administration will block such efforts full stop.”

The arrival of China’s low-cost AI model, DeepSeek, has shaken Silicon Valley’s assumptions of leadership in AI development.

Without naming Beijing, Vance drew a parallel to past concerns over CCTV and 5G equipment, saying, “But as I know – and as some of us in this room have learned from experience – partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure. Should a deal seem too good to be true?”

India will host the fourth AI Action Summit, which could take place “later this year”.

Secretary Krishnan said that the summit “represents very positive outcomes, not just for India and for the Global South, but also for the world as a whole”.

“We believe it represents a re-balancing of the approach towards AI. And therefore the time is right for India to host, as the prime minister offered, and the offer was accepted that the next AI Summit would be hosted in India later this year,” he said.

Sam Altman, Elon Musk Clash on X Over Offer to Buy OpenAI

Musk and Altman’s rivalry is rooted in their conflicting visions for AI.

New Delhi: A group of investors including Elon Musk has offered to buy the nonprofit that controls OpenAI, with an unsolicited bid of $97.4 billion.

However, OpenAI CEO Sam Altman mocked Musk’s offer on X, the social media platform owned by Musk, reported Hindustan Times.

“No thank you but we will buy twitter for $9.74 billion if you want,” Altman wrote on X.

Thereafter, Musk took the conflict further by sharing a video of Sam Altman with the caption “Scam Altman.”

Musk and Altman’s rivalry is rooted in their conflicting visions for AI. Over the years, the rivalry has intensified, with Musk, a co-founder of OpenAI, having become one of its critics.

He has repeatedly attacked Altman in recent times, accusing him of prioritising profit over AI’s ethical development.

Back in 2023, Musk launched his own AI startup, xAI, to compete with OpenAI’s ChatGPT.

Musk had also sued OpenAI in 2024 for its ties to Microsoft.

As Modi Heads to AI Summit in France, He Needs to Shun Rhetoric and Engage With Critical Issues

There’s a dire need for India’s political leadership to redesign and adopt a more scientifically informed and realistic approach.

Recently, during the ongoing budget session of parliament, Congress leader Rahul Gandhi highlighted the critical role of data in the AI landscape, stating, “People talk about AI, but it’s important to understand that AI on its own is absolutely meaningless, because AI operates on top of data.” He noted that without access to data, AI cannot function effectively. Pointing out a significant challenge for India, he added, “Most of the Indian data is stored abroad,” which compromises the nation’s ability to harness AI’s potential independently.

Gandhi went further, stating that global data control is a major issue, with China owning production-related data and the United States controlling consumption data. “Every single piece of data that comes out of the production system in the world… is owned by China, and the consumption data is owned by the United States,” he said, underscoring the need for India to address these imbalances if it hopes to compete in the global AI race. This statement contrasts sharply with the more rhetorical approach seen in speeches by Prime Minister Narendra Modi – particularly his ‘Double AI’ statement, made the same day.

In this data-driven world, India is entering an era where technological advancement will shape the fabric of society. With youth on the front lines of this transformation, it is essential for the political class to speak clearly, critically and with constructive foresight about the implications of AI. The political discourse on AI cannot remain vague when its technological, economic, social and geopolitical implications are so deeply entwined.

Realities and Rahul Gandhi

In this light, Rahul Gandhi’s approach to AI is fresh and offers an example of what scientific temper in political leadership looks like. He did not merely endorse AI as a tool for progress but engaged with its geopolitical and political-economic dimensions. Gandhi highlighted that AI cannot exist in isolation: it is entirely dependent on data, the essential fuel for AI systems to function.

Gandhi’s reference to the fact that much of the country’s digital data is stored and processed abroad, controlled by foreign tech giants such as Google, Facebook, and Amazon, reflects the geopolitical reality where global powers like the United States and China have consolidated their dominance over AI technologies due to their control over vast amounts of data. In the United States, OpenAI and companies like Microsoft use the data of millions of users to advance AI systems, creating technological monopolies. Similarly, China uses state control over data to fuel its AI ambitions, using data for everything from economic strategy to social surveillance.

In this critical global juncture, Gandhi is correct in not simply treating AI as a tool of progress but in underscoring the structural realities that will influence AI’s development in India. His critical engagement with these geopolitical realities adds value to our modern political discourse not only through an argument for technological growth, but also for India’s autonomy in the AI landscape. In fact, his advocacy for data sovereignty and infrastructure development has the potential to become an important directive for policy makers too. As China and the United States lead the global AI race, India has to move beyond mere rhetoric on AI by taking Rahul Gandhi’s realist criticism seriously.

In the realpolitik of AI, restructuring AI and data policies in the larger context of national sovereignty and data control is certainly the need of the hour. Today, ‘We the people’ of India seek a vision of technological sovereignty, where data sovereignty is prized and human dignity is valued.

Disparities

In his speech, Rahul Gandhi also emphasised the transformative potential of AI in its application to the caste census, claiming it as an opportunity for a “social revolution” in the country. He said, “Imagine the power of AI when we apply it to the caste census. Imagine what we will do with AI and what we will do with the social revolution in this country when we start to apply AI to the data that we get from the caste census.” This vision underscores his belief that AI has the potential of “revolutionising the participation of Dalits, OBC and Adivasis in the running of this country, in the institutions of this country, in the distribution of wealth of this country” and, on the other side, to “challenge the Chinese and participate in the revolution, defeat the Chinese in electric motors, batteries, solar panels and wind.”

Despite all the talk of “Amritkaal” and Viksit Bharat, social exclusion and discrimination remain deeply entrenched. Among SCs, the share of school children drops from 81% in the 6-14 years age group to 60% in the 15-19 age group. The condition of STs is perhaps more deplorable.

In institutions like IIM Indore, over 97% of faculty positions are held by individuals from the General category, leaving no representation for Scheduled Castes (SC) or Scheduled Tribes (ST). Similarly, IIM Udaipur and IIM Lucknow report over 90% of faculty from the General category. The situation is similarly stark in IITs, with over 90% of faculty at IIT Bombay and IIT Kharagpur belonging to the General category, while IITs in Mandi, Gandhinagar, Kanpur, Guwahati, and Delhi report 80-90% General category faculty. These trends highlight a significant disproportion in faculty composition despite reservation policies. Of course, this is just a part of the pan-Indian picture of inequality, which excludes the larger set of private institutions.

This disparity reveals that in spite of technological advances, the social inequities that define caste hierarchies and systemic oppression remain pervasive.

Caste and other issues

While India celebrates technological achievements like space missions, there is a glaring neglect of the basic infrastructure needed for a truly scientific outlook, such as proper education and resources for the marginalised. This contradiction is starkly visible in the millions invested in gold-plated religious symbolism while the most basic needs of government schools, including proper infrastructure and qualified teachers, remain unmet.

This paradox highlights a deeper issue in the reach of digital democracy and scientific temper in India. Despite the promise of digital platforms like social media and AI tools such as ChatGPT democratising access to information, the real material and cultural benefits are largely biased towards the socially and economically privileged. For marginalised communities, the rising cost of digital learning resources further compounds their exclusion, leaving them unable to leverage these platforms for upward mobility.

In this light, Rahul Gandhi’s call for a caste census is an articulation emanating from scientific temper. It recognises AI’s potential to benefit the marginalised masses by measuring and analysing complex information regarding the causes of their exclusion, marginalisation and underrepresentation in institutions, and could make benefits and welfare genuinely accessible to these sections of India. A “social revolution” through AI, therefore, is not only about technological advancement but also about addressing the systemic inequalities that undermine the fundamental rights provided in Articles 14, 15 and 16 of the Constitution.

Gandhi’s emphasis on data localisation and AI-driven social justice is not just a call for technological independence but for national autonomy, integration as well as social revolution. This approach is deeply grounded in the reality of India’s diverse population, ensuring that AI policies ensure holistic and inclusive empowerment for all the sections of society – particularly the historically marginalised ones. Gandhi’s call for a caste census as a foundation for AI policy can’t be faulted: “If we don’t have a caste census, AI policies will not be able to accurately address the needs of India’s diverse population.”

Infrastructure

Moreover, AI’s application can revolutionise sectors like education, employment and healthcare. Gandhi’s critical realist approach – rooted in both scientific temper and social equity – arguably makes him more aligned with the needs of modern India.

On the other hand, Prime Minister Narendra Modi’s speeches on AI often remain at the level of aspirational rhetoric, positioning India as a global leader in AI without engaging with the practical issues that need to be addressed. His recent remarks about “Double AI” – one AI as Artificial Intelligence and the other as an “Aspirational India” – underline the gap between vision and substance. Although his assertions often emphasise optimistic narratives of technological progress, there is a noticeable absence of clear, actionable plans for issues like data sovereignty, digital infrastructure and AI literacy. Needless to say, there is hardly any constructive idea in favour of a caste census from the PM’s end. Moreover, he criticised Rahul Gandhi’s invocation of it as a mere “fashion” of speaking on caste.

It’s high time that the political discourse surrounding AI in India transcends rhetoric and engages with the critical issues of data sovereignty, infrastructure and social justice. There’s a dire need for India’s political leadership to adopt a more scientifically informed and realistic approach. Only through such a vision can India reclaim its rightful position in the global AI race. The onus is on political leaders to engage constructively not only with the vision but also with the responsibility to build a future where technology empowers every citizen and strengthens the fabric of the nation.

Vruttant Manwatkar is an Assistant Professor of Political Science, KC College, Mumbai.

This piece was first published on The India Cable – a premium newsletter from The Wire & Galileo Ideas – and has been updated and republished here. To subscribe to The India Cable, click here.

The Problem with Truth: Rethinking Accountability After Big Tech’s Rightward Turn

The end of fact-checking and DEI at Meta, and Big Tech’s overall embrace of Trump, is not as much a symptom of our “post-truth” era, as of unchecked corporate power.

The day after the United States Congress officially certified Donald Trump’s victory in November’s presidential election, Mark Zuckerberg – chairman of Meta, the parent company of digital platforms Facebook, Instagram and WhatsApp – announced the termination of Meta’s fact-checking program in the US.

Soon after, Meta announced the end of its diversity, equity, and inclusion (DEI) initiatives. In an interview with infamous podcast host Joe Rogan, Zuckerberg declared that the corporate world has become “culturally neutered” and could thus use a boost of “aggression” and “masculine energy”.

Zuckerberg reportedly co-hosted a party following Trump’s inauguration ceremony in Washington DC. Well, that was fast. 

Commentators in US media have already noted the remarkable – if unsurprising – rightward turn of business and tech oligarchs since Trump’s election.

Zuckerberg is not alone: Elon Musk, whose Nazi salute at Trump’s inauguration raised few eyebrows, had assumed the role of the president’s self-appointed confidant and advisor well before the election; the rest of the usual suspects – Jeff Bezos of Amazon, Google’s Sundar Pichai, OpenAI’s Sam Altman – have variously but swiftly followed suit.

Also read: The Rise of Oligarchs Around the World

When fact-checking goes wrong

While the rollbacks announced by Meta are only applicable to the company’s US operations at the moment, the rest of the world has reasons for concern – not least, India herself, Meta’s largest market. 

In recent years, with Donald Trump’s first electoral victory in 2016 as the turning point, Meta has struggled at home and abroad with the spread of misinformation on its platforms. The company has repeatedly come under scrutiny for failing to remove – or, perhaps worse, willfully ignoring – hateful and inciting content rapidly spreading on its users’ feeds.

Given that the bulk of Meta’s fact-checking resources were allocated for its US operations, it might come as no surprise that the “Facebook problem” has been particularly acute in the subcontinent, home to hundreds of millions of users often posting in languages likely foreign to both moderator and algorithm. 

In 2018, the Sri Lankan government resorted to temporarily blocking social media access when anti-Muslim riots erupted in the country’s Central Province; thanks to fact-friendly journalism, we know that the company had repeatedly failed to address urgent appeals to moderate its platforms, where hate speech aimed at minorities spread freely in the leadup to violence.

The digital public square

The well-worn metaphor of the public square used indiscriminately to describe digital spaces feels woefully inadequate. Social media might be free to use, but is it public? 

Unlike public spaces, the digital “squares” all of us populate daily are the property of a handful of the wealthiest individuals, now clamoring for a seat at the presidential table. 

Also read: Trump’s Inauguration Speech Makes It Clear He Wants the US to Go Backwards

Trump, like many of his authoritarian peers globally, is averse to moderation – yet keen to censor his detractors – whether it takes the form of fact-checking or corporate guardrails. In this, they stand in agreement with Big Tech, whose CEOs know better than to antagonise powerful leaders, who use these platforms as political propaganda tools, and digital access to their citizenries as a negotiating tool.

Both know that with the right mix of algorithmic sorcery and political might, it is easy to control the digital “public” square. 

Case in point: as reproductive health rights are increasingly coming under attack in the United States – thanks to the Trump-adjacent, right-wing ‘MAGA’ movement, now more emboldened than ever – Meta appears to have blocked content from abortion pill providers on its platforms. 

All this has nothing to do with ideology, of course; principles be damned. Zuckerberg’s macho reinvention and his claims for a return to Meta’s roots – free speech, connecting people and the like – ring hollow. 

In siding with Trump, Big Tech is requesting to be relieved of accountability for the immense power it has amassed. Simply follow the money to understand these changing political dynamics. We pay for our ostensibly free media sociality with our data. The goal is the so-called engagement – the more time we spend on these platforms, the more data we produce, and the more money their owners make. 

The algorithms that power them are deliberately designed, in their engineers’ words, to get us “hooked”, to maximise user engagement. In other words, it is done to ensure that we spend as much time as possible scrolling, liking, tapping, commenting. 

Also read: Elon Musk, the Global Right-Wing and India’s Place in This Escalating Friendship

Fact-checking, DEI initiatives, even the performative defense of free speech itself – despite its proponents’ loud pronouncements – were in fact roadblocks, money badly spent, curtailing, rather than maximising engagement. 

It takes two to tango. So, while engagement makes money, it also makes for obedient citizens, a valuable tool in the service of authoritarian leaders. 

Truth and the social contract

To be sure, truth is a murky concept and facts even more so, at once elusive and readily available to support the most unlikely of theories in this datafied age of endless information. Fact-checking was far from perfect, and clearly never quite managed to turn the authoritarian tide threatening to submerge us today. It is ultimately a question of power, rather than truth. To own these platforms means to control the flow of information; it means, profoundly, to be able to influence public opinion. 

Big Tech has effectively taken over our digital public square: it’s one thing to be present online, quite another to be heard. Algorithms, entirely opaque to most users and easily manipulated to various ends, determine how content – news, opinions, advertisements, “hot takes”– flows across feeds and screens. 

To seek the truth online today amounts to a Sisyphean task. What will it take to speak truth to power? 

Sam Altman of OpenAI recently suggested that rapid advancements in artificial intelligence will necessitate a renegotiation of the social contract.

Perhaps, the time is ripe to consider the corrosive effects of unchecked corporate power on our polity. Those violating the social contract are hiding in plain sight and AI is not one of them – for now, at least. They must not evade accountability, otherwise we will pay the price for Big Tech’s newfound freedom.

Alexios Tsigkas teaches Sociology at FLAME University, Pune.

The Problem With Sam Altman Suggesting to Change the Social Contract 

While DeepSeek is making headlines amid AI’s boom, OpenAI’s CEO Sam Altman has suggested that some change is required to the ‘social contract’ – something that warrants our attention.

China’s new AI model DeepSeek R1 has sent shockwaves worldwide. While this AI model matches the capabilities of advanced models made by American companies – such as OpenAI’s o1 – what sets it apart is that it is open-source, free to use, and cost-effective. 

It ended the US monopoly in the AI field and China achieved this feat despite American trade restrictions, paving the way for Chinese technological development.

As AI researcher Karen Hao explains, DeepSeek has put forth a great challenge to dominant paradigms in modern AI development. It has disproved that ‘scaling’ – OpenAI and Google’s strategy to build bigger models – is the only way to improve AI models. 

This is important because competition over scale has led to massive rises in carbon emissions, air pollution, water shortages and distorted electricity grids due to the heavy resource usage of data centers, apart from expenditures of billions of dollars.

Also read: Why Deepseek’s AI Leap Only Puts China in Front for Now

While DeepSeek is making headlines amid AI’s boom, OpenAI’s CEO Sam Altman has suggested that some change is required to the ‘social contract’. This suggestion warrants our attention because the pursuit of a hypothetical all-knowing AGI — or artificial general intelligence — is often cited to evade accountability and responsibility for any present-day harm caused by AI. 

Moreover, the suggestion of changing the social contract must be contextualised within the present political realities of our times — the rise of radicalisation, misinformation and recession in democratic values and the role of Silicon Valley in enabling them. 

Understanding the social contract and the exclusion of the marginalised

The dominant social contract theories were produced in Europe. Not only did these theories have euro-centric biases, they also excluded several marginalised communities from their ambit.

Charles Mills has pointed out the racial nature of classical social contract theories, which often present ‘white supremacy as the unnamed political system.’

Such racial contracts end up in the subjugation of non-white peoples and the establishment of white superiority in political, social and epistemic arenas. Carole Pateman has highlighted the gendered nature of these social contract theories, pointing out that the social contract constitutes both freedom and domination. It ensures male domination over women’s sexuality as it presents civil freedom as a masculine attribute.

Moreover, these theories valorised compulsory heterosexuality and compulsory able-bodiedness. Rawlsian theory anguishes the disabled by erasing them from its ambit. Mill, Upendra Baxi points out, excluded backward nations, women and children from the ambit of the right to liberty. These exclusionary social contract theories spread in non-western geographies through colonialism and established themselves as gospel truths through a knowledge mechanism in which white men were presented as the agents of rationality, autonomy and reasonableness.

New imagination of social contract

Claims to imagine a new social contract thus need to be interrogated from the vantage of the marginalised so that the attempts to renegotiate the social contract do not result in their exclusion. 

Should we allow Silicon Valley czars to rescript social contracts while excluding most of humanity?

Trump’s inaugural speech was an assault on the human dignity and affirmation of LGBTQ+ people as he claimed that he was restoring ‘biological truth’ and ‘common sense.’ Tech billionaires were flanking him during his inauguration. Altman, who once warned the world about Trump, now believes that Trump will lead the US into the age of AI. 

Also read: The Secret Sauce Behind China’s Relentless Innovation Drive

Will the AIs produced by these billionaires be affirming of LGBTQ+ identity? What would be the place of non-binary sexual identities in the social contract created by these tech czars? 

Elon Musk’s open interference in Germany’s election and his support of the far-right party in Germany tells us that Silicon Valley poses a great threat to democracy worldwide. Meta’s closure of its fact-checking unit shows the valley’s non-commitment toward truth. 

In India, Ola’s CEO Bhavish Aggarwal suggested that his AI model Krutrim will give an Indian version of history. This version of history fixates on the idea of a nation-state and claims that India was a country before the British Raj. Such fixation unwittingly valorises the European model of nationalism and suppresses alternative views on sovereignty and nationhood. AI is thus largely beholden to state interests.

The online version of DeepSeek is hosted in China, hence it is subject to censorship on Chinese political issues to abide by local regulations. However, the open-source version is freely modifiable in any way. Truth is becoming a casualty in this euphoria around AI.

AI, geography of prejudice and questions of periphery 

What could it mean for people at the periphery who are the targets of prejudice masquerading as ‘common sense’? It needs an epistemic interrogation. Gyanendra Pandey highlights that difference is often posited as a means of ‘otherisation’.

That ‘different other’ is pushed to the margins. In this formation of a ‘norm’, Pandey argues, “Men are not described as different; it is women who are. Foreign colonisers are not different; the colonised are.

“Caste Hindus are not different in India — it is Muslims, tribals, and Dalits who are. White Anglo-Saxon Protestant heterosexual males are not different in the United States; at one time or another, everybody else is.” Our concern is that in an era when prejudice is presented as common sense, and when tech billionaires are enabling it, any attempt to propose a new social contract will be an exercise of prejudice.

If Trump’s US is the ‘field of origin’ for AI and its social contract, it will indeed bear an imprint of racialisation because dominant discourse on AI and its production is happening in the geographies of prejudice. Similarly, DeepSeek will remain ambivalent on troubling questions about China. In India, as long as caste continues as the social reality of internal colonialism, an indigenous AI will bear the imprint of caste Hindu biases. 

To militate against this, we may need broader solidarity among the people at the peripheries. 

AI is here to stay. Sam Altman’s suggestion to change the social contract is not innocuous. It requires global civil society to be vigilant against any scripting of a new social contract that enables a recession in civil liberties, the metastasis of unfreedoms, and the subjugation of marginalised communities and geographies.

Tech bros should not be allowed to abridge our freedom. They cannot be allowed to write a new social contract, excluding most of humanity. 

Vijay K. Tiwari is an assistant professor (law) at the West Bengal National University of Juridical Sciences, Kolkata.

Kaif Siddiqui is a doctoral scholar at the NALSAR University of Law, Hyderabad. 

Why Deepseek’s AI Leap Only Puts China in Front for Now

Commentators are right to say DeepSeek’s new AI chatbot is a game-changer but don’t take all the hype about China now dominating the field too seriously.

The knee-jerk reaction to the release of Chinese company DeepSeek’s AI chatbot mistakenly assumes it gives China an enduring lead in artificial intelligence development and misses key ways it could drive demand for AI hardware.

The DeepSeek model was unveiled at the end of January, offering an AI chatbot competitive with OpenAI’s leading model, o1, which drives ChatGPT today.

DeepSeek’s model offered major advances in the way it uses hardware, including using far fewer and less powerful chips than other models, and in its learning efficiency, making it much cheaper to create.

The announcement dominated the international media cycle and commentators frequently suggested that the arrival of DeepSeek would dramatically cut demand for AI chips.

The DeepSeek announcement also triggered a plunge in US tech stocks that wiped nearly AU$1 trillion off the value of leading chipmaker Nvidia.

This dramatic reaction misses four ways DeepSeek’s innovation could actually expand demand for AI hardware:

  • By cutting the resources needed to train a model, more companies will be able to train models for their own needs and avoid paying a premium for access to the big tech models.
  • The big tech companies could combine the more efficient training with larger resources to further improve performance.
  • Researchers will be able to expand the number of experiments they do without needing more resources.
  • OpenAI and other leading model providers could expand their range of models, switching from one generic model — essentially a jack-of-all-trades like we have now — to a variety of more specialised models, for example one optimised for scientists versus another made for writers.

What makes DeepSeek’s model so special?

Researchers around the world have been exploring ways to improve the performance of AI models.

Innovations in the core ideas are widely published, allowing researchers to build on each other’s work.

DeepSeek has brought together and extended a range of ideas, with the key advances in hardware and the way learning works.

DeepSeek uses the hardware more efficiently. When training these large models, so many computers are involved that communication between them can become a bottleneck. Computers sit idle, wasting time while waiting for communication. DeepSeek developed new ways to do calculations and communication at the same time, avoiding downtime.

It has also brought innovation to how learning works. All large language models today have three phases of learning.

First, the language model learns from vast amounts of text, attempting to predict the next word and getting updated if it makes a mistake. It then learns from a much smaller set of specific examples that enable it to communicate with users conversationally. Finally, the language model learns by generating output, being judged, and adjusting in response.

In the last phase, there is no single correct answer in each step of learning. Instead, the model is learning that one output is better or worse than another.

DeepSeek’s method compares a large set of outputs in the last phase of learning, which is effective enough to allow the second and third stages to be much shorter and achieve the same results.

Combined, these improvements dramatically improve efficiency.

How will DeepSeek’s model drive further AI development?

One option is to train and run any existing AI model using DeepSeek’s efficiency gains to reduce the costs and environmental impacts of the model while still being able to achieve the same results.

We could also use DeepSeek innovations to train better models. That could mean scaling these techniques up to more hardware and longer training, or it could mean making a variety of models, each suited for a specific task or user type.

There is still a lot we don’t know.

DeepSeek’s work is more open source than OpenAI because it has released its models, yet it’s not truly open source like the non-profit Allen Institute for AI’s OLMo models that are used in their Playground chatbot.

Critically, we know very little about the data used in training. Microsoft and OpenAI are investigating claims some of their data may have been used to make DeepSeek’s model. We also don’t know who has access to the data that users provide to their website and app.

There are also elements of censorship in the DeepSeek model. For example, it will refuse to discuss free speech in China. The good news is that DeepSeek has published descriptions of its methods so researchers and developers can use the ideas to create new models, with no risk of DeepSeek’s biases transferring.

The DeepSeek development is another significant step along AI’s overall trajectory but it is not a fundamental step-change like the switch to machine learning in the 1990s or the rise of neural networks in the 2010s.

It is unlikely that this will lead to an enduring lead for DeepSeek in AI development.

DeepSeek’s success shows that AI innovation can happen anywhere with a team that is technically sharp and fairly well-funded. Researchers around the world will continue to compete, with the lead moving back and forth between companies.

For consumers, DeepSeek could also be a step towards greater control of your own data and more personalised models.

Recently, Nvidia announced DIGITS, a desktop computer with enough computing power to run large language models.

If the computing power on your desk grows and the scale of models shrinks, users might be able to run a high-performing large language model themselves, eliminating the need for data to even leave the home or office.

And that’s likely to lead to more use of AI, not less.

Dr Jonathan K. Kummerfeld is a senior lecturer in the School of Computer Science at The University of Sydney. He works on natural language processing, with a particular focus on systems for collaboration between people and AI models.

Originally published under Creative Commons by 360info™.

The Secret Sauce Behind China’s Relentless Innovation Drive

DeepSeek’s rise has shown that China is managing to take technological leaps despite western restrictions on technology exports, particularly in the area of AI, overturning the assumption that the US has an unassailable supremacy in the area.

In the last month or so, there has been some striking technology news from China. First was the low-altitude flight of two new combat aircraft, reportedly development models of sixth-generation fighters. Considering that the US first flew its sixth-generation Next Generation Air Dominance (NGAD) fighter in 2020, this was a substantial achievement for the country.

The day after this news was flashed around the world, there was another item which did not receive as much coverage but was also significant – the launch of a huge Type 076 amphibious assault ship that can double as a light aircraft carrier. Uniquely, the aircraft aboard could be launched using an electromagnetic aircraft launch system (EMALS), which the US first installed in one of its large aircraft carriers in 2015 and which China has since installed in its newest carrier, Fujian. The system makes the launch of aircraft from ships much easier and places less stress on the airframe.

Last week an experimental nuclear fusion reactor in China triggered a great deal of comment by maintaining its operational state for over 17 minutes – a new world record. This is a significant step towards the goal of realising a fusion-based nuclear reactor in the near future. Around the same time, evidence emerged that China was building a large laser-ignited fusion research centre akin to the American National Ignition Facility, which can be used to develop and test thermonuclear weapon designs.

None of these developments, however, generated the kind of surprise and shock that DeepSeek – the Chinese artificial intelligence (AI) company that develops open source large language models – did with the release of its first free chatbot app based on their DeepSeek-R1 model.

By Monday, it had surpassed ChatGPT as the most downloaded free app in the iOS App Store in the US, leading to a spectacular crash in Nvidia’s share price. Nvidia is the principal provider of the specialised chips used for AI applications. DeepSeek developed a model that many AI experts say is comparable to those of OpenAI and Meta, even though it used far fewer and less advanced Nvidia chips.

Also read: DeepSeek: Two Questions at the Heart of the AI Offering that Has Rattled the World

They trained their model for a reported $6 million as compared to the $100 million that OpenAI’s GPT-4 cost. Its system also uses a fraction of the computing power, and electric power, that Western AI engines consume.

DeepSeek’s rise has shown that China is managing to take technological leaps despite western restrictions on technology exports, particularly in the area of AI. It also overturns the facile assumption that the US has an unassailable supremacy in the area of AI which would be further solidified by spending billions of dollars. The big danger here is that instead of smothering China’s R&D progress, US restrictions may end up stimulating it.

There is no innate Chinese genius behind this achievement, only the long-term obsession of the Communist Party of China (CPC) with turning China into the foremost world power. As in India, there is a desire to regain the glory of the past. But the Chinese focus is on reimagining the present and future rather than dwelling on the past.

From the very beginning of its opening-up process in the 1990s, Beijing’s growth strategy has sought to create an aatmanirbhar (self-reliant) China. Over the decades, China has bought, stolen and coerced technology from foreign companies, but equally systematically, it has set up a parallel system of laboratories and institutions to absorb or “re-innovate” this technology. A crucial tier of this project has been to obtain knowhow by sending an entire generation of young Chinese to study abroad. Simultaneously, the country has used programmes to lure foreign technology specialists to seed Chinese institutions with knowledge and skills.

This strategy is now yielding results and has persuaded the CPC that instead of graduating from the manufacturing revolution to advance in the area of services, China is cutting out a new path of taking its manufacturing to a higher technological level to rival the West.

It is no secret that China is going through a spate of problems – its birth rate is declining, house prices are falling and some provinces are in the grip of deflation.

The future plan was outlined by Xi Jinping in 2024’s National People’s Congress session when he spoke of the need to unleash “new productive forces” to deal with the situation. In essence this means the application of higher science and technology to its manufacturing prowess. This requires a three-pronged movement: first, the replication of technologies that are likely to be restricted by the west; second, the invention of entirely new technologies—photonic computing, brain computer interfaces, nuclear fusion and telemedicine. And third, emphasizing the spending of money on scientists under the age of 35.

Last year, Bloomberg put out a special report noting that China has achieved a global leadership position in five key technologies – UAVs, solar panels, graphene, high-speed rail, and electric vehicles and batteries. It added that China had at the same time achieved a “competitive” status in seven technologies: semiconductors, AI, robots, machine tools, large tractors, drugs, and LNG carriers. The only technology in which it was still “behind” was commercial aircraft.

The reason for this is probably that China sought a tie-up with a US commercial aircraft manufacturer, McDonnell Douglas, which subsequently folded. However, in 2023, Beijing introduced the Comac C919 narrow-body airliner into domestic service. Currently, though, it is powered by a CFM (US-French) engine.

The writer is a Distinguished Fellow, Observer Research Foundation, New Delhi.

This piece was first published on The India Cable – a premium newsletter from The Wire & Galileo Ideas – and has been updated and republished here. To subscribe to The India Cable, click here.

MGNREGS | Govt’s Claims of Quickness and Smoothness With Aadhaar-Based Payments Are False

Our research shows delays in payment of wages are due to insufficient budget allocation for MGNREGS. The technologies used to transfer money have no role in reducing delays.

Given the scale of its implementation, the Mahatma Gandhi National Rural Employment Guarantee Act (MGNREGA) became a laboratory for testing many digital technologies. Every aspect of the implementation of the MGNREGS programme, from the planning of next year’s works to the payment of wages to workers, has been digitised. A recently published paper in the Indian Journal of Labour Economics, written by the authors of this article and Suguna Bheemarasetti, demonstrates how two major digital interventions in MGNREGS have compromised public values with little or no accountability.

The paper was written using a large-scale empirical exercise in conjunction with immersive work on the ground with MGNREGS workers’ organisations and analysis using Right to Information (RTI) responses from the government. The two digital interventions analysed in the paper are ‘segregation of wage payments by caste’ and the ‘Aadhaar-Based Payment Systems (ABPS)’. The analysis is based on 31.36 million (3.13 crore) MGNREGS wage transactions sampled from 327 blocks across 10 states from the financial year (FY) 2021–22. The total amount of wages involved in these transactions is Rs 46.02 billion (Rs 4,602 crores).

The segregation of payments by the caste category of workers has been withdrawn, but the Union government has not assumed any responsibility for its impact on delays or on the caste and communal tensions it caused at MGNREGS worksites. Until recently, there was a choice between paying wages to workers using the traditional account-based payment system or using the ABPS. Account-based systems are like NEFT payments that use a worker’s name, their account number and IFSC code. From January 1, 2024, after multiple deadline extensions, the Union government mandated the use of ABPS as the exclusive channel for transferring wage payments in MGNREGS.

In this article, after a brief explanation of what ABPS is, we provide a non-technical exposition of two findings from our research paper. In a nutshell, using principles of statistical science, we find that, contrary to government claims, ABPS neither results in quicker payments nor in fewer rejections compared to account-based payment systems.

Also read: Making Aadhaar-Based Payments Compulsory for NREGA Wages Is a Recipe for Disaster

MGNREGS payments process and payments through the FY

As per the MGNREGA, states must electronically send their invoices to the Union government within eight days of the completion of work. This is called Stage 1. Subsequently, the Union government must transfer wages to the workers as per these invoices within the next seven days. This is called Stage 2 and is entirely the Union government’s responsibility. Stage 1 plus Stage 2 must be completed within 15 days as per the Act. The transfer of wages to workers’ accounts only happens in Stage 2, so we only consider Stage 2 in our paper.

In line with what has been historically observed, the pattern of delays in wage payment is not uniform across the financial year.

Funds dry up as the financial year progresses. In general, one does not observe delays in wage payments in the first quarter (April to June) of the financial year. Delays tend to accumulate from the second quarter (July to September) onwards. Sometime around the third quarter, the Union government releases some additional funds and delays reduce partially. One observes delays again in the fourth quarter.

This is what is shown in Figure 1. It shows the percentage of transactions processed within seven days and 15 days respectively. Observe that the percentage of transactions completed within seven days in October is less than 40%. It never came close to 100% in any month, which is what it should be as per the Act. Delays in FY 2021-22 show a slightly different pattern, as delays were high even in April that year but decreased in May and June, as is usual. In the ongoing financial year, the government has not released any additional funds.

Figure 1: Percentage of transactions processed within 7 and 15 days over the months in FY 2021-22

What is ABPS and government’s rationale for Aadhaar

Aadhaar has been used for cash transfers through the ABPS. To direct a payment using Aadhaar, a worker’s Aadhaar number must be linked to her job card and bank account. And the Aadhaar number must be correctly linked, through her bank branch, with a software mapper of the National Payments Corporation of India (NPCI), which acts as a clearing house for Aadhaar-based payments. Aadhaar becomes the financial address of the individual, and cash transferred by the government gets deposited in the last Aadhaar-linked bank account. This model of sending payments via Aadhaar has been operational since 2016.

Figure 2 shows the rationales provided over time by the Union government for using Aadhaar in MGNREGA. These have been compiled from RTI responses and official circulars and press releases.

Figure 2: The Union government’s stated rationales over time for using Aadhaar in MGNREGA

In Figure 2, observe the letter provided by the Ministry of Rural Development (MoRD) in October 2021, which explicitly mentions that “For timely payment of wages it is important to get MGNREGA workers Aadhaar seeded in the MIS.” In November 2023, a letter from the MoRD mentions that “Timely wage payments is one of the core areas” and says that the “ABPS is the best alternative” to “avoid rejections.” A similar rationale was offered in June 2023 as well.

However, the Union government provided no evidence for these claims in any of these letters or RTI responses. We set out to investigate them using the government’s own data.

Data, Sampling and Findings

Our sample covers 10 states with high volumes of MGNREGS work. Not all high-volume states were selected, but our arguments are likely to hold more generally. Within each state, the sampling was done in two stages: we first randomly sampled one block per district in each of the 10 states, and then downloaded all transactions for each sampled block. In FY 2021-22, there were a total of 227.2 million (22.7 crore) wage transactions in our 10 sample states, of which we sampled 31.37 million (3.13 crore), that is, 11.3%.
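As a sketch of the two-stage design, assuming the transactions sit in a pandas DataFrame with (hypothetical) columns state, district, and block, the sampling step might look like this:

```python
import pandas as pd

def sample_blocks(txns: pd.DataFrame, seed: int = 42) -> pd.DataFrame:
    """Stage one: pick one block at random per district.
    Stage two: keep every transaction from the sampled blocks."""
    blocks = txns[["state", "district", "block"]].drop_duplicates()
    chosen = (
        blocks.groupby(["state", "district"], group_keys=False)
              .apply(lambda g: g.sample(n=1, random_state=seed))
    )
    # Inner merge retains only transactions belonging to the chosen blocks.
    return txns.merge(chosen, on=["state", "district", "block"])
```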

As Figure 1 illustrates, the quarter in which a transaction is done is likely to affect the time taken to pay wages. Further, even though Stage 2 is entirely the Union government’s responsibility, there might be variations across states due to administrative preparedness, extent of backwardness, and other factors that affect processing time. In addition, the number of transactions to be processed can serve as a proxy for the processing burden on government officials, which is also likely to affect the overall time taken. So we use these as input variables in our statistical model, with the percentage of transactions completed within 7 days and 15 days respectively as our output variables.
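The paper’s actual specification is not reproduced in this article, but to make the setup concrete, here is an illustrative sketch of such a model using statsmodels, with hypothetical column names and made-up block-level data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic block-level data; the column names are hypothetical stand-ins
# for the variables described in the text.
df = pd.DataFrame({
    "pct_within_7": [36.1, 39.0, 41.5, 33.9, 40.8, 37.5, 35.9, 42.1],
    "payment_type": ["account", "abps", "abps", "account",
                     "abps", "account", "account", "abps"],
    "quarter":      ["Q1", "Q1", "Q2", "Q2", "Q3", "Q3", "Q4", "Q4"],
    "state":        ["A", "A", "B", "B", "A", "A", "B", "B"],
    "n_txns":       [1200, 900, 1500, 1100, 800, 950, 1300, 1000],
})

# Share of payments completed within 7 days, explained by payment type
# while controlling for quarter, state, and transaction volume.
model = smf.ols(
    "pct_within_7 ~ C(payment_type) + C(quarter) + C(state) + n_txns",
    data=df,
).fit()
# The coefficient on payment_type is the quantity of interest.
print(model.params["C(payment_type)[T.account]"])
```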

In our sample, there are 18.94 million (1.89 crore) account-based transactions and 12.41 million (1.24 crore) ABPS transactions. Figure 3 shows the percentage of payments processed within seven and 15 days for the two payment types: 36% of account-based payments were processed within seven days compared to 39% of ABPS payments, while 56% of account-based payments were processed within 15 days compared to 61% of ABPS payments.

Figure 3: Percentage of wage payments processed within 7 days and 15 days for the two payment methods 

A statistical test revealed no statistically significant difference between the two modes of payment in transferring wages to workers. Statistical significance is a standard scientific criterion: had there been a statistically significant difference between the two payment methods, we could have inferred that one method is intrinsically better than the other. Since our test, based on a large sample, found no such difference, the observed gap in the numbers (36% versus 39% for wages transferred within seven days) is attributable to chance. What this implies is that the ABPS does not inherently result in quicker wage payments than account-based systems.

In our sample, 2.85% of ABPS transactions were rejected, compared with 2.10% of account-based transactions. Again, a statistical test revealed no statistically significant difference in rejection rates between the two payment systems, suggesting that the ABPS does not inherently lead to fewer rejections than account-based systems.
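The article does not spell out the exact test used in the paper, and with tens of millions of transactions a naive transaction-level test would flag even trivial gaps as significant, so comparisons of this kind are typically run on aggregated units such as blocks. Below is a minimal, purely illustrative sketch in Python with made-up block-level numbers; the same comparison can be run on block-level rejection rates:

```python
from scipy import stats

# Made-up block-level shares (%) of payments completed within 7 days,
# one number per sampled block, split by payment method.
abps_pct = [39.0, 41.5, 36.2, 40.8, 35.9, 42.1]
account_pct = [36.1, 38.7, 33.9, 37.5, 40.2, 34.8]

# Welch's t-test: does the mean share differ between the two methods?
t_stat, p_value = stats.ttest_ind(abps_pct, account_pct, equal_var=False)
print(f"p = {p_value:.3f}")
if p_value > 0.05:
    print("Observed gap is consistent with chance; no significant difference")
```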

Conclusions of our paper

In the context of account-based versus ABPS, our paper has three main conclusions.

First, from an empirical standpoint, delays in the payment of wages are due to insufficient budget allocation for MGNREGS; the technology used to transfer wages plays no role in reducing them.

Second, payment rejections can arise with both account-based payments and the ABPS. But contrary to government claims, we find no statistically significant difference in rejection rates across these two payment systems.

Third, our experiences on the ground suggest that rejections arising from account-based systems are easier to resolve and can be done locally at the panchayat or block level but ABPS rejections are harder to resolve owing to its opacity and centralised nature. 

Digital technology is a tool for implementing social policies; it cannot be their sole engine. As problems emerge, governments tend to reach for technological fixes, an easy approach that amounts to ‘patch development’. Such changes may appear simple at the planning level, but introducing them on the ground takes time and can be costly. Technological choices have socio-economic consequences, and it is unethical to impose techno-solutions without adequately assessing and addressing their pros and cons.

Evidence indicates that interventions designed from the workers’ perspective, with their accessibility at the centre, have led to substantial reductions in payment delays. Rights-holders come from diverse backgrounds, usually take time to adjust to changes, and some population groups may face severe hardships or even get excluded. Consequently, it is important to have a continuous and consultative process, to pilot any intended changes across different areas and population groups, and to assess the net benefits and costs. It would be disastrous to let rights be reduced to a technological theme park.


All the authors are associated with LibTech India, a centre within Collaborative Research & Dissemination. Rajendran Narayanan teaches at Azim Premji University, Bangalore. The views expressed are personal. 


DeepSeek: Two Questions at the Heart of the AI Offering that Has Rattled the World

DeepSeek’s purported magic lies in the fact that it was reportedly developed for a fraction of the cost of its US rivals.

New Delhi: A generative artificial intelligence platform developed by Chinese firm DeepSeek has seemingly rattled the US-led world of AI, with new president Donald Trump saying that its entry should serve as a “wake-up call” for American AI companies.

DeepSeek’s first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1, are open-source – their code is available on GitHub.

DeepSeek’s merits aside, there are multiple reasons why what Trump is saying is not exactly untrue.

What role does it play in the China-US ‘tech’ cold war?

One of the most surprising things about DeepSeek – whose chatbot interface is very similar to erstwhile crowd-favourite, OpenAI’s ChatGPT – is its popularity in the US.

DeepSeek’s R1 model was released on January 20, 2025. By January 27, the AI model powering DeepSeek’s chatbot had begun outperforming top US models. Silicon Valley tech funder Marc Andreessen called the release of the model “AI’s Sputnik moment”.

DeepSeek’s chatbot became the number one product on the Apple App Store in the US, surpassing OpenAI’s ChatGPT.

This popularity has not only shaken the stock market but has also gone some way towards showing US policymakers that bans on apps like TikTok will not put a dent in Americans’ appetite for Chinese digital services. This is a point for China in what Wired has called the “US-China tech cold war.”

The article notes how in October 2022, the US government enforced export controls that severely restricted Chinese AI companies from accessing cutting-edge chips like Nvidia’s H100. DeepSeek started out with a stockpile of 10,000 H100s, its founder Liang Wenfeng said in a 2024 interview, but these alone could not fuel a new product. The BBC, though, reports that experts believe the stockpile could have been as large as 50,000.

In response, DeepSeek came up with homegrown methods to train its AI models using a combination of “tricks”: simpler, lower-precision math (reportedly cutting numbers from 32 bits down to 8), custom communication schemes between chips, and what software engineer Wendy Chang tells Wired is an “innovative use of the mix-of-models approach.”
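To see why lower-precision arithmetic saves so much memory and compute, here is a toy numpy illustration. It is only a sketch of the general idea: numpy has no 8-bit float type, so a float32-to-float16 conversion stands in for the far more aggressive 8-bit formats attributed to DeepSeek.

```python
import numpy as np

# A million random "weights", stored at 32 bits per number.
weights = np.random.randn(1000, 1000).astype(np.float32)
half = weights.astype(np.float16)   # stand-in for 8-bit training formats

print(weights.nbytes)   # 4,000,000 bytes at 32 bits per number
print(half.nbytes)      # 2,000,000 bytes at 16 bits per number

# Precision drops, but the values stay close enough for many training steps:
print(np.abs(weights - half.astype(np.float32)).max())  # small rounding error
```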

Combining these successfully is where DeepSeek appears to have scored over the US, at least momentarily.

What has it meant for the US AI financial world?

Trump has said the shock could spur a “positive” future for US tech companies, as it would force them to innovate more cheaply.

“I’ve been reading about China and some of the companies in China, one in particular coming up with a faster method of AI and much less expensive method, and that’s good because you don’t have to spend as much money. I view that as a positive, as an asset,” Trump said.


OpenAI’s Sam Altman wrote on X that the service was “impressive… particularly around what they’re able to deliver for the price”.

Nvidia, the leading supplier of AI chips, was the worst hit and lost close to $600 billion in market capitalisation on January 27, in what DW has called the biggest single-day drop for any company in US history.

Forbes reported that Nvidia, until then the most valuable company in the world by market capitalisation, fell to third place behind Apple and Microsoft on Monday.

In Japan, chip-testing equipment maker Advantest, a supplier to Nvidia, lost 10% on January 28 after diving nearly 9% the day before.

Chip-making equipment maker Tokyo Electron fell 5.3%, while technology start-up investor SoftBank Group was 6% lower.

Over in the US, Broadcom finished down 17.4%, while ChatGPT backer Microsoft fell 2.1% and Google parent Alphabet ended down 4.2%.

Dropbox’s AI head Morgan Brown wrote in a series of posts on X that in an “insanely expensive” world of AI model training, DeepSeek is able to do the same training at just 5% of the cost.

Where the likes of OpenAI and Anthropic spend upwards of $100 million at massive data centers with thousands of $40,000 GPUs, wrote Brown, DeepSeek did it for “$5 million instead.” It also allegedly uses only 2,000 GPUs instead of the usual 100,000 that other AI training companies use.

“Their models match or beat GPT-4 and Claude on many tasks,” Brown wrote.

Veteran analyst Gene Munster told BBC that he was not really convinced of the financials DeepSeek was citing, and wondered if the startup was being subsidised or whether its numbers were correct.