US Firm Takes Down Private Network Profiling Indian Activists Opposing Pesticides, GMO After Reports

The company confirmed the removal of over 500 profiles from the network after a legal review of its obligations under European data privacy rules, and threats of litigation, following a media investigation.

Mumbai/London/Athens: A US-based reputation management firm that monitored critics of pesticides and genetically modified (GM) crops on a private social network has ceased its profiling operations following an investigation led by the investigative newsroom Lighthouse Reports and shared with The Wire and other international media partners.

The Missouri-based firm v-Fluence Interactive, headed by a former Monsanto executive, Jay Byrne, confirmed in an official statement on December 9, 2024, that it had removed its Bonus Eventus portal, which served as a “stakeholder wiki” hosting profiles of over 500 individuals globally. The private network included profiles of the prominent Indian environmentalist Vandana Shiva, the ecologist Debal Deb, organisations like Pesticide Action Network (PAN) India, and other scientists and academics.

Among other findings, the investigation revealed that v-Fluence had received funding from the now-reduced US Agency for International Development (USAID) for Bonus Eventus via the International Food Policy Research Institute. The sub-contracts were aimed at countering criticism of “modern agriculture approaches” in Asia and Africa, according to public records obtained by Lighthouse Reports.

The investigation also highlighted that v-Fluence and Byrne are co-defendants, alongside the global pesticide giant Syngenta, in a lawsuit alleging the suppression of information on the dangers of the herbicide paraquat, which is alleged to have caused Parkinson’s disease among farmers in the US. Byrne has denied the lawsuit’s allegations, saying they were based on claims that were “manufactured and false”.

In India too, Syngenta came under scrutiny in 2017 after the Yavatmal pesticide poisoning scandal in Maharashtra that claimed the lives of at least 20 farmers. Farmers had alleged that Syngenta had failed to provide sufficient information regarding the risks of its pesticide ‘Polo’. Syngenta, however, maintained that there’s no evidence proving that its product caused the tragedy.

Narasimha Reddy Donthi, an independent policy analyst and consultant with PAN India, who has also worked closely with farmers in Yavatmal for securing compensation from Syngenta, says that the removal of the profiles is a “positive outcome”.

“However, they have to tell why they did that and for whom. Furthermore, we need to know how US funds got involved in such an enterprise. We need a deterrence – an official promise,” Donthi adds.

Legal concerns, lost clients and threats of litigation 

v-Fluence said in its official statement in December last year that the removal of profiles comes after an “independent legal review” of obligations under the European data privacy rules. The company also said that it will “continue to offer stakeholder research with updated guidelines to avoid future misinterpretations of our work product”.

In an emailed statement, Byrne confirmed that the profiles had been removed, but said that they had been taken down prior to the legal review in light of litigation and threats of litigation.

According to reporting from David Zaruk, a Bonus Eventus member who was a recipient of Byrne’s emails, v-Fluence had to lay off around 40 staff after industry clients cancelled their contracts.

The investigation, published in September last year, revealed that v-Fluence’s Bonus Eventus was accessible to over 1,000 members, including many executives associated with global agrochemical companies, lobbyists and government members. 

The eight Indians who had access to the Bonus Eventus portal include Raghavan Sampathkumar, the executive director of the Federation of Seed Industry of India (FSII); and Anand Ranganathan, consulting editor of the Indian right-wing magazine Swarajya, and a former staff research scientist at the International Centre for Genetic Engineering and Biotechnology (ICGEB).

Notably, the FSII is involved in a project with the Ministry of Agriculture and Farmers Welfare for deploying technologies to agro-ecological zones allotted for cotton production. The ICGEB also works with the Union Ministry of Science and Technology to support biotech research and development. However, Ranganathan informed Lighthouse Reports and The Wire that he was unaware of the network and denied association with v-Fluence.

‘What about the harm already done?’

A number of profiles on the Bonus Eventus portal contained personal information such as phone numbers, email and residential addresses, personal websites and income, among other details. Indian activists profiled on the network expressed concern that those with access to the data could misuse it.

In a written statement last year, Byrne had informed that the private, community-edited wiki platform includes only “publicly available and referenced information”, asserting that, “Any contact or other information which may appear on the wiki is from public records and is used publicly by the source as part of their business or advocacy.” 

However, technology lawyer and policy adviser Pranesh Prakash, who had reviewed excerpts from some of the profiles, found that personal data was indeed being processed, and because much of the collected personal data was not made available by the person who was profiled, India’s Digital Personal Data Protection Act (DPDPA) applied to it. 

Prakash informed that the exception for “research purposes” under DPDPA does not apply if the data is being used to make any decision specific to any of the activists whose personal data has been collected.

Ecologist and seed conservator Debal Deb, who was profiled by v-Fluence, says that the company closing down the network is an “important development”. However, he also raised apprehensions about the harm already done.

“The issue is that no one knows what and how much harm these corporate agents have already perpetrated to the lives and careers of the scientists and environmental activists. A public announcement of dismantling a website does not absolve the decades-long crime of appropriation of citizens’ personal data, nor atone for the intangible damages to the individuals,” says Deb.

Leaching Landfills, Frothing Rivers, Unbreathable Air: Delhi’s Many Environmental Concerns as Poll Day Nears

Due to garbage continuing to be dumped in the Okhla landfill, water in the taps in the area contains sand and mud.

New Delhi/ Bengaluru: “When the wind blows, a terrible stench spreads,” says Vimala Devi. She says that coughs are constant, as are cold-like illnesses – especially during the winters.

Devi lives in V.P. Singh Camp, a resettlement colony in south Delhi. Right next door is the notorious Okhla landfill – and this is where the stench comes from, as the wind blows. 

“In the summer, a large amount of dust rises, making it even harder to breathe,” Devi told The Wire. “This directly impacts our health, and we frequently fall sick.”

Waste from both the South Delhi Municipal Corporation and the Delhi Cantonment Board was disposed of in the Okhla landfill from 1996, when it was first commissioned.

Okhla is one of the three officially acknowledged landfills in the national capital. The others are Ghazipur in east Delhi, where waste from the East Delhi Municipal Corporation is dumped, and Bhalswa, where waste from the North Delhi Municipal Corporation finds its resting place. Living near these sites has become virtually impossible.

Together, Ghazipur, Bhalswa and Okhla contained 28 million tonnes of legacy waste as of 2019, of which nearly 12 million tonnes has been cleared since July 2019, as per a report.

Meanwhile, pollution from vehicle emissions and other sources such as construction dust has also bogged down Delhi’s air quality over the last year. Despite a huge drop in farm fires in Punjab, Haryana and neighbouring areas (which are usually blamed for Delhi’s poor air quality during this season), the city’s air quality plummeted in the winter of 2024.

And the Yamuna still flows dirty and frothy, as it snakes its way through Delhi. These are important environmental and health concerns that residents say they will keep in mind as they cast their votes on February 5.

Leaching landfills

“During every election, leaders from all political parties come here asking for votes and make promises, but nothing ever changes,” Suryanayan Paswan, who has been living in VP Singh Camp near the Okhla landfill since 1999, told The Wire.

According to one estimate, the Okhla landfill, commissioned in 1996, spans 62 acres and contains 6 million tonnes of legacy waste (waste that has built up over the years and continues to pose environmental and health threats). Though the landfill was decommissioned in 2018, a team from the New Delhi-based Centre for Science and Environment observed fresh waste still being dumped at the site in 2023. Residents of VP Singh Camp claim that waste dumping at the site still continues.

Due to garbage continuing to be dumped in the Okhla landfill, water in the taps in the area contains sand and mud, Paswan alleged.

“Plastic, chemicals, and other waste are constantly dumped at the Okhla landfill. This has a direct impact on the health of our children, causing them to fall sick repeatedly.”

With water from taps and borewells always polluted in the area, almost every household in VP Singh Camp relies on buying drinking water for their daily use, residents say. 

MCD trucks carrying garbage. Photo: Atul Ashok Howale/The Wire

Apart from the health and environmental concerns that the garbage in the landfills poses to residents, these sites present other dangers too. They are sources of methane, a greenhouse gas that warms up the atmosphere. Together, the three landfills have produced at least 124 methane “super emitter” leaks since 2020, as per data quoted by The Guardian from Kayrros, an environmental intelligence company.

Fires breaking out at these dump sites are a common sight; the most recent was in April last year, when a fire broke out in sections of Delhi’s oldest landfill, Ghazipur. Incidentally, the landfill is also the tallest of the three, at 236 feet in height, per a report.

In 2023, the AAP declared ambitious deadlines to clear the waste at all three landfills. Delhi finance minister Kailash Gahlot said in March 2023 that the landfill at Okhla would be cleared by December that year, Bhalswa by March 2024 and Ghazipur by December 2024. However, none of these have materialised.

According to one report, the Municipal Corporation of Delhi (MCD) pushed the Ghazipur deadline from December 2024 to 2026. In September last year, Times of India reported that the MCD pushed the deadlines again, to 2028, citing the lack of facilities to manage both the legacy waste and fresh waste.

Delhi’s landfills have also become an issue that parties use against each other whenever elections approach, and this time has been no different. On January 26, a portion of the garbage mound at the Bhalswa landfill collapsed onto nearby houses in Sharadhanand Colony, injuring two children and damaging several homes.

Delhi Congress President Devender Yadav called out former Delhi Chief Minister and AAP leader Arvind Kejriwal, saying that he had ‘let people down’. A few days later, senior BJP minister Nitin Gadkari promised people at a gathering that if the BJP wins the Delhi elections this year, they would remove garbage from the landfill sites and replace them with gardens and academic institutions within five years. 

The AAP in 2022 had claimed that the Congress, which ruled Delhi for 15 years before the AAP came to power, “did nothing” to tackle the landfill sites. On January 30, Swati Maliwal, currently an AAP Rajya Sabha MP who has fallen out with the party, collected garbage from parts of the city in three mini-trucks and shoved some in front of Kejriwal’s residence saying that he had “ensured” that Delhi became a “giant garbage dump”. Following this, Delhi Police detained Maliwal.

‘Unbreathable’ air

Delhi’s deteriorating air quality is another issue that residents say no government is willing to tackle. The Delhi government, with the AAP at the helm for a decade now, has repeatedly laid the blame for the capital city’s poor air quality during winters on stubble burning by farmers in adjoining areas, including Punjab and Haryana. Indeed, some studies have shown that stubble burning is a major contributor to the high air pollution levels in Delhi during winter.

This winter was no different: Delhi’s air quality in November plummeted to shocking levels. On November 18, 2024, the Air Quality Index (AQI) – as per official figures – hit 494. This falls in the “Severe” category, the highest category of air pollution as per the Index, which classifies the city’s air quality based on the levels of at least three major pollutants, including carbon monoxide.

The Index ranges from 0 to 500, and the higher the AQI, the greater the impact of the air on even healthy people. While an AQI between 0 and 50 is considered good, an AQI between 401 and 500 is categorised as “Severe”, the worst air quality level per the standards followed by the Central Pollution Control Board.
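To illustrate how a reading like 494 maps onto these bands, here is a minimal sketch of the categorisation described above. The boundaries of the intermediate bands (Satisfactory, Moderate, Poor, Very Poor) are assumptions drawn from commonly published CPCB categories, not figures stated in this article.

```python
def cpcb_aqi_category(aqi: int) -> str:
    """Map an AQI reading (0-500) to a CPCB-style category label.

    Only the 0-50 ("good") and 401-500 ("Severe") bands are stated in the
    article; the intermediate bands below are assumptions based on commonly
    published CPCB categories.
    """
    bands = [
        (50, "Good"),
        (100, "Satisfactory"),  # assumed band
        (200, "Moderate"),      # assumed band
        (300, "Poor"),          # assumed band
        (400, "Very Poor"),     # assumed band
        (500, "Severe"),
    ]
    for upper_bound, label in bands:
        if aqi <= upper_bound:
            return label
    return "Beyond scale"  # the official Indian AQI is capped at 500


print(cpcb_aqi_category(494))  # -> Severe, the official reading on November 18, 2024
```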

Other AQI websites quoted far higher values in Delhi on the day; as per the Swiss firm IQAir’s world rankings of air quality in cities – based on US AQI calculations – Delhi’s AQI at 8:51 am IST on November 18 was 1113, placing it in the “Hazardous” category as per US standards.

However, data on farm fires revealed a new twist in the story: last winter, farm fires had decreased drastically. As The Wire reported, a detailed analysis by the Delhi-based Centre for Science and Environment released on January 6 showed that the annual concentration of fine particulate matter – a major air pollutant – increased by 3.4% in 2024 in Delhi when compared to 2023.

Polluted water supply at Okhla. Photo: Atul Ashok Howale/The Wire

And alarmingly, this has occurred despite a huge decrease – by a staggering 71.2% – in the counts of stubble fires during the winter of 2024. This clearly points to local and regional sources, including vehicular emissions, open burning of waste, and dust from construction and other sectors, driving air pollution in the national capital, the report said.

“I drive an auto-rickshaw daily and travel to different areas in Delhi, but the situation remains the same,” says Ravindra Singh, an auto-rickshaw driver from Uttar Pradesh who has been living in Delhi for several years now.

“There is no change. Along with the polluted air, the dust on the roads and the high volume of vehicles cause a lot of trouble while driving the rickshaw.”

He adds that no government in Delhi seems to be working on the issue of air pollution. “The BJP, Congress, and Aam Aadmi Party all seem to be indulging in freebie politics in Delhi, but none of them are talking about pollution.”

Some are making promises – much like the AAP has over the several years it has been in power in Delhi. Union Minister Nitin Gadkari promised people at a gathering on January 30 that if the BJP wins the elections in Delhi, it would “free” the city of its traffic and air pollution woes in five years, and that the people of Delhi would be able to breathe clean air again.

The frothing Yamuna

But it’s not just polluted air and land that Delhi’s residents have to contend with – there’s polluted water too. The frothing waters of the river Yamuna that flows by the city have consistently been grabbing headlines for years. A literature review of the state of the Yamuna published in 2024 found that 85% of the pollution in the Yamuna stems from “domestic sources”, or human activities, including industrial effluents, raw manure, waste and dead body disposal, idol worship, and contaminants from water used in streams.

Sewage is a huge concern, and studies such as this one show high faecal coliform bacteria levels (due to human faecal waste) in some parts of the river that flows through the city, such as Nizamuddin.

As The Wire reported, the AAP had promised when it came to power a decade ago that it would do whatever it takes to clean the Yamuna, including stopping sewage from draining into the river. But the Yamuna still runs dirty. A study published on February 1 this year found “excessive” values of biochemical oxygen demand (BOD), meaning that the water is highly polluted and has less oxygen available for aquatic life.

The BOD ranged from 37 to 430 mg/L across 43 points in the Yamuna through the National Capital Region; ideal, unpolluted waters have a BOD of 5 mg/L or less. The study also identified six major pollution hotspots along the river. In mid-October last year, stretches of the river in the city were covered in froth: a toxic blanket over the water that contains high levels of ammonia and phosphates and thus poses serious health risks, including respiratory and skin problems, to people, per The Hindu.

The Yamuna and its pollution have also been a topic of argument between parties in the run-up to this year’s elections in the city. While campaigning for the BJP in Delhi, Uttar Pradesh Chief Minister Yogi Adityanath asked if Kejriwal would go and take a dip in the Yamuna.

“If as a Chief Minister, my ministers and I can take a dip in the Sangam in Prayagraj, then I want to ask the president of Aam Aadmi Party in Delhi, Arvind Kejriwal, can he go and take a bath in Yamuna with his ministers?” he said.

On January 27, Kejriwal accused the BJP in Haryana of deliberately polluting the waters of the Yamuna. “The Haryana government has mixed poison in the water coming to Delhi from the Yamuna,” he said, claiming that it was only due to the “vigilance” of Delhi Jal Board engineers that the water was stopped at the border.

The CEO of the Delhi Jal Board, however, rejected these claims. On January 28, Delhi CM Atishi claimed that the toxic ammonia levels in the Yamuna had originated in Haryana, and approached the Election Commission of India regarding this.

Delhi Police once again detained Rajya Sabha MP Maliwal on February 3 after she staged a protest, along with others, outside Kejriwal’s residence, calling out the Delhi government’s “failure to clean the Yamuna” and saying that the river is “on a ventilator”. Accusing Kejriwal of living in luxury while the river remains polluted, she also challenged him to take a dip in the Yamuna, reported Hindustan Times.

Delhi goes to vote on February 5.

We Are All Mosaics

Picture your body: It’s a collection of cells carrying thousands of genetic mistakes accrued over a lifetime – many harmless, some bad, and at least a few that may be good for you.

You began when egg and sperm met, and the DNA from your biological parents teamed up. Your first cell began copying its newly melded genome and dividing to build a body.

And almost immediately, genetic mistakes started to accrue.

“That process of accumulating errors across your genome goes on throughout life,” says Phil H. Jones, a cancer biologist at the Wellcome Sanger Institute in Hinxton, England.

Scientists have long known that DNA-copying systems make the occasional blunder — that’s how cancers often start — but only in recent years has technology been sensitive enough to catalog every genetic booboo. And it’s revealed we’re riddled with errors. Every human being is a vast mosaic of cells that are mostly identical, but different here or there, from one cell or group of cells to the next.

Cellular genomes might differ by a single genetic letter in one spot, by a larger lost chromosome chunk in another. By middle age, each body cell probably has about a thousand genetic typos, estimates Michael Lodato, a molecular biologist at the University of Massachusetts Chan Medical School in Worcester.

These mutations — whether in blood, skin or brain — rack up even though the cell’s DNA-copying machinery is exceptionally accurate, and even though cells possess excellent repair mechanisms. Since the adult body contains around 30 trillion cells, with some 4 million of them dividing every second, even rare mistakes build up over time. (Errors are far fewer in cells that give rise to eggs and sperm; the body appears to expend more effort and energy in keeping mutations out of reproductive tissues so that pristine DNA is passed to future generations.)
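A rough back-of-envelope calculation, using only the figures above, gives a sense of the scale:

$$4 \times 10^{6}\ \text{divisions/second} \times 86{,}400\ \text{seconds/day} \approx 3.5 \times 10^{11}\ \text{divisions per day},$$

so even an error rate as low as one new mutation per million cell divisions (a purely illustrative figure, not one from the article) would still mean hundreds of thousands of new mutations arising across the body every day.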

“The minor miracle is, we all keep going so well,” Jones says.

Scientists are still in the earliest stages of investigating the causes and consequences of these mutations. The National Institutes of Health is investing $140 million to catalog them, on top of tens of millions spent by the National Institute of Mental Health to study mutations in the brain. Though many changes are probably harmless, some have implications for cancers and for neurological diseases. More fundamentally, some researchers suspect that a lifetime’s worth of random genomic mistakes might underlie much of the aging process.

“We’ve known about this for less than a decade, and it’s like discovering a new continent,” says Jones. “We haven’t even scratched the surface of what this all means.”

Suspicious from the start

Scientists had suspected since the discovery of DNA’s structure in the 1950s that genetic misspellings and other mutations accruing in non-reproductive, or somatic, tissues could help explain disease and aging.

By the 1970s, researchers knew that growth-promoting mutations in a fraction of cells were the genesis of cancers.

“The assumption was that the frequency of this event was very, very low,” says Jan Vijg, a geneticist at the Albert Einstein College of Medicine in New York.

But it was extremely difficult to detect and study these mutations. Standard DNA sequencing could only analyze large quantities of genetic material, extracted from vast groups of cells, to reveal only the most common sequences. Rare mutations flew under the radar. That started to change around 2008 or so, says stem cell biologist Siddhartha Jaiswal of Stanford University in California. New techniques are so sensitive that mutations present in a tiny fraction of cells — even a single cell — can be uncovered.

In the early 2010s, Jaiswal was interested in how mutations might accumulate in people’s blood cells before they develop blood cancers. From the blood of more than 17,000 people, he and colleagues found what they’d predicted: Cancer-linked mutations were rare in people under 40, but occurred in higher amounts with age, making up about 10 percent or more of blood cells after the 70th birthday.

But the team also saw that the cells with mutations were often genetically identical to one another: They were clones. The cause, Jaiswal figures, is that one of the body’s thousands of blood cell-making stem cells picks up mutations that make it a little bit better at growing and dividing. Over decades, it begins to win out over normally growing stem cells, generating a large group of genetically matched cells.

Not surprisingly, these efficiently dividing mutated blood cell clones were linked to risk for blood cancer. But they were also associated with increased risk for heart disease, stroke and death by any cause, perhaps because they promote inflammation. And unexpectedly, they were associated with about a one-third lower risk of Alzheimer’s dementia. Jaiswal, who coauthored an article on the health impacts of blood cell clones in the 2023 Annual Review of Medicine, speculates that some clones might be better at populating brain tissue or clearing away toxic proteins.

As Jaiswal and colleagues were pursuing the blood clones they reported in 2014, researchers at the Wellcome Sanger Institute commenced investigations of body mutations in other tissues, starting with eyelid skin. With age, some people get droopy eyelids and have a bit of skin surgically removed to fix it. The researchers acquired these bits from four individuals and cut out circles 1 or 2 millimeters across for genetic sequencing. “It was full of surprises,” says Inigo Martincorena, a geneticist at Wellcome Sanger. Though the patients did not have skin cancer, their skin was riddled with thousands of clones, and one-fifth to one-third of the eyelid skin cells contained cancer-linked mutations.

The findings, that so many skin cells in people without skin cancer had mutations, made a splash. “I was blown away,” says James DeGregori, a cancer biologist at the University of Colorado Anschutz Medical Campus in Aurora, who was not involved in the study.

Wellcome Sanger researchers went on to identify clusters of identical, mutated cells in a variety of other tissues, including the esophagus, bladder and colon. For example, they examined colonic crypts, indentations in the intestinal wall; there are some 10 million of these per person, each inhabited by about 2,000 cells, all arising from a handful of stem cells confined to that crypt. In a study of more than 2,000 crypts from 42 people, the researchers found hundreds of genetic variations in crypts from people in their 50s.

About 1 percent of otherwise normal crypts in that age group contained cancer-linked mutations, some of which can suppress proliferation of nearby cells, allowing mutant cells to take over a crypt faster. This alone is not necessarily sufficient to create colorectal cancer, but on rare occasions, cells can acquire additional cancer-causing mutations, overflow crypt boundaries and cause malignancies.

“Everywhere people have looked for these somatic mutations, in every organ, we find them,” says Jones. He’s come to see the body as a kind of evolutionary battleground. As cells accumulate mutations, they can become more (or less) able to grow and divide. With time, some cells that reproduce more readily can overtake others and create large clones.

“And yet,” notes DeGregori, “we don’t turn lumpy.” Our tissues must have ways to stop clones from becoming cancer, he suggests. Indeed, overgrowing mutant clones in mice have been seen to revert to normal growth, as Jones and a coauthor describe in the 2023 Annual Review of Cancer Biology.

Jones and colleagues found one example of protection in the human oesophagus. By middle age, many oesophagus clones — often making up the bulk of oesophagus tissue — have mutations disrupting a gene called NOTCH1. This doesn’t affect the ability of the oesophagus to move food along, but cancers seem to need NOTCH1 to grow. Bad mutations may accumulate in oesophageal cells, but if NOTCH1 is absent, they appear less likely to become tumors.

In other words, some of the bodily mutations aren’t bad or neutral, but even beneficial. And, lucky for us, these good mutations prevail a lot of the time.

Mutant clones increase in size as people age. In this image, each panel represents one square centimetre of tissue from a subject’s esophagus. The youngest subject (top panel) was a moderate smoker; the other two were non-smokers. The size and color of each circle represents a clone with mutations in a particular gene (see key, at top). Sometimes, clones contain multiple mutations, represented by overlapping circles. Mutations in some genes, such as TP53 (orange) promote cancer, while mutations in other genes such as NOTCH1 (purple) suppress it.

Getting inside the brain

Our DNA-copying machinery has plenty of opportunity to make errors in cells of the esophagus, colon and blood because they divide constantly. But neurons in the brain stop dividing before or soon after birth, so scientists originally assumed they would remain genetically pristine, says Christopher Walsh, a neurogeneticist at Boston Children’s Hospital.

Yet there were hints that mutations accruing through life could cause problems in the brain. Back in 2004, researchers reported on a patient who had Alzheimer’s disease due to a mutation present in only some brain cells. The mutation was new — it had not been inherited from either parent.

And in 2012, Walsh’s group reported an analysis of brain tissue that had been removed during surgery to correct brain overgrowth that was causing seizures. Three out of eight samples had mutations affecting a gene that regulates brain size, but these mutations were not consistently present in the blood, suggesting they arose in only part of the body.

There are a couple of ways that brain cells could pick up mutations, says Lodato. A mutation could occur early in development, before the brain was completed and its cells had stopped dividing. Or, in a mature brain cell, DNA could be damaged and not repaired properly.

By 2012, interest in non-inherited brain mutations was heating up. Thomas Insel, director of the National Institute of Mental Health at the time, proposed that these kinds of mutations might underlie many psychiatric conditions. Non-inherited mutations in the brain could explain a longstanding puzzle in neurological diseases: why identical twins often don’t share psychiatric diagnoses (for example, if one twin develops schizophrenia, the other has only about a 50 percent chance of getting it).

Mosaicism provides “a very compelling answer,” says neuroscientist Mike McConnell, scientific director for the Lennox-Gastaut Syndrome Foundation in San Diego, a nonprofit that supports families and research into a severe type of epilepsy.

Starting in the early 2010s, McConnell, Walsh, Lodato and others began to catalog mutations large and small sprinkled across the brains of people who had died. They tallied deletions and duplications of individual genes, multiple genes or entire chromosomes; they spotted entire chromosome segments moved to new spots in the genome. And, eventually, Walsh, Lodato and colleagues found a thousand or more single-letter mutations in the genetic code within every nerve cell of people aged 50 or so. That last finding “seemed completely impossible to us,” recalls Walsh. “We doubted ourselves.”

In the face of such stunning results, the researchers investigated further. They looked at 159 neurons from 15 people who had died between four months and 82 years of age. They reported that the numbers of mutations increased with age, indicating that errors accumulated over time, just as in other body parts. “The brain is a mosaic, in a profound and deep way,” says Lodato.

To further explore that mosaicism, the National Institute of Mental Health funded a series of projects from 2015 to 2019 investigating brain tissue mosaicism in samples, mainly collected after death and deposited in tissue banks, from more than 1,000 people who were neurotypical or had conditions such as Tourette syndrome and autism spectrum disorder.

Single-letter mutations were most common, says McConnell, who co-led the project. Researchers accumulated more than 400 terabytes of DNA sequences and other data, and built analytical tools, creating a powerful platform on which to build the next round of brain mosaicism studies. From this work and other studies, scientists have linked brain mosaicism to neurological diseases including autism, epilepsy and schizophrenia.

In Lodato’s lab, graduate students Cesar Bautista Sotelo and Sushmita Nayak are now investigating how accumulated mutations might cause amyotrophic lateral sclerosis, a paralysing condition also known as Lou Gehrig’s disease. Geneticists can identify a known mutation in only about 10 percent of non-inherited cases. But the new data on mosaicism suggest that many more people may have mutations in ALS genes in their brains or spinal cords, even if they don’t have them in the rest of their body.

That matters, because scientists are working on therapies targeting some of the 40-plus genes that, when mutated, cause ALS. In 2023, the Food and Drug Administration approved the first such treatment, which shuts down a commonly mutated ALS gene. For patients to be eligible for such therapies, they will need to know their mutations.

Thus, says Nayak, “we strongly advocate for a change in the current practice of diagnosing ALS.” Instead of just looking at DNA in a blood sample, other tissues such as saliva, hair or skin could be examined too, in case an ALS mutation arose during development in cells that didn’t give rise to blood but did give rise to other tissues in the body.

Clues to how we age

For now, the health implications of our body’s mosaicism are mostly too fuzzy to warrant action, especially in cases like the blood clones where there is no relevant treatment to offer. “We don’t really advocate that people should worry about this,” says Jaiswal. “At this point in time, there’s no rationale to be testing people who are well.”

But many scientists do see the findings as evidence for a longstanding theory: that a lifetime’s worth of mutations leads to the inevitable condition we call aging.

Martincorena and colleagues tested an element of that theory in a 2022 study. If mutation buildup contributes to aging, they reasoned, then short-lived critters like mice should build up mutations fast, while longer-lived species like people should accumulate mutations more slowly, perhaps due to better repair mechanisms.

To investigate this idea, the researchers embarked on a five-year odyssey studying colon crypt samples from eight people plus a menagerie of creatures: 19 lab mice and rats; 15 domestic animals such as cats, dogs, cows and rabbits; and 14 more exotic creatures that included tigers, lemurs, a harbor porpoise and four naked mole rats, which are famed for their outsized rodent lifespan of 30-plus years. As predicted, the longer-lived the species, the slower its accumulation of mutations.

“This does not demonstrate that somatic mutations cause aging, but is consistent with the possibility that they play at least some role,” says Martincorena. There are two factors at play here: Accumulating mutations contribute to shorter lifespan, but then the shortened lifespan makes mutation protection less crucial, so short-lived species invest less in DNA repair.

The idea that mutations could contribute to aging is tantalizing, as it suggests vanquishing them would be a genetic fountain of youth. “If, tomorrow, I figure out a way to stop these mutations from accumulating, I think I would be a bajillionaire,” says Bautista Sotelo. Already, at least one biotech startup, Matter Bio in New York City, has raised funds with the aim of repairing the human genome. (Whether such a plan would ever be feasible across broad swaths of cells is another matter: “I don’t think you can get rid of the mutations,” says DeGregori.)

The story of body mutations is far from over. “Judging by the discoveries that we are making at the moment, the journey has only just started,” says Martincorena. “I expect many surprises in the next few years.”

This article first appeared on Knowable Magazine. Read the original here.

Western Ghats Among World’s 4 Regions Where Freshwater Species Are at Highest Risk of Extinction

The study recommends targeted action to prevent further extinctions and calls for governments and industry to use this data in water management and policy measures.

New Delhi: The Western Ghats mountain range, one of India’s four biodiversity hotspots, is among the four regions in the world where freshwater species are most threatened with extinction, as per a recent study published in the journal Nature on January 8.

The study, which is the largest global assessment of freshwater animals on the International Union for Conservation of Nature’s (IUCN’s) Red List of Threatened Species so far, shows that 24% of the world’s freshwater fish, dragonfly, damselfly, crab, crayfish and shrimp species are at high risk of extinction. Established in 1964, the IUCN Red List assesses the global conservation status of animal, fungus and plant species and sorts them into nine categories based on criteria such as population declines and restricted ranges. These categories are “Extinct”, “Extinct in the Wild”, “Critically Endangered”, “Endangered” and “Vulnerable” (the latter three covering species threatened with global extinction, in decreasing order of threat), “Near Threatened” (species that will become threatened without ongoing conservation measures), “Least Concern” (species with a lower risk of extinction), “Data Deficient” (species whose conservation status cannot be assessed because of insufficient data) and “Not Evaluated”.

Key findings: Regions and species most threatened

The recent study found that at least 4,294 species out of 23,496 freshwater animals on the IUCN Red List are at high risk of extinction. Crabs, crayfishes and shrimps are at the highest risk of extinction among the groups studied, with 30% of these species threatened, followed by 26% of freshwater fishes and 16% of dragonflies and damselflies. And the greatest numbers of threatened species dwell in Lake Victoria (shared by the African countries of Tanzania, Uganda and Kenya), Lake Titicaca (in the Andes mountain range, on the border of Peru and Bolivia), Sri Lanka’s Wet Zone (in the central and southwestern region of the island nation) and the Western Ghats of India. The threatened species in the Western Ghats include the Saffron reedtail (Indosticta deccanensis), a dragonfly found only in a few localities in the mountain range and considered “Vulnerable” by the Red List, and the Dwarf Malabar Puffer (Carinotetraodon travancoricus), a freshwater puffer fish found in some streams of the Ghats and listed as “Data Deficient”. However, species like the Kani maranjandu, a spider-like tree crab discovered in the southern Western Ghats in Kerala in 2017, have not even been assessed for the Red List yet – there is no data on their conservation status at all.

The reasons freshwater species are most threatened in these regions, including the Western Ghats, are familiar ones – pollution, mainly from agriculture and forestry, impacts over half of all threatened freshwater animals, according to the study. Add to this land conversion for agricultural use, water extraction and the construction of dams, which also block fish migration routes. Other threats include overfishing and the introduction of invasive alien species.

Case study: The hump-backed mahseer

The study also found that although the threatened freshwater animals studied tend to live in the same areas as threatened amphibians, birds, mammals and reptiles, they face different threats due to their specific habitats. Conservation action must therefore be targeted to these species, the study recommends. Take the case of the “Critically Endangered” hump-backed mahseer (Tor remadevii) that is found only in the river system of the Cauvery and its tributaries in south India, for instance. The fish, once thought to be widespread across the entire river (as per historical records dating back to the late 19th century), is now found in just five fragmented river and tributary stretches of the Cauvery, which is a shocking decline of around 90% in its distribution range, according to the IUCN Red List assessment.

“Although they live side by side in the Western Ghats, conservation action for tigers and elephants will not help the ‘Critically Endangered’ hump-backed mahseer, which is threatened by habitat loss due to river engineering projects and sand and boulder mining, poaching and invasive alien species. Active protection of the river and tributaries where the hump-backed mahseer lives is essential to its survival, in addition to fishing regulations and banning the introduction of further invasive alien species,” noted Rajeev Raghavan, South Asia chair of the IUCN Species Survival Commission Freshwater Fish Specialist Group and one of the co-authors of the study, in a press release.

The study also found that water stress and eutrophication are poor “surrogates,” or indicators, to be used in conservation planning for threatened freshwater species; areas with high water stress, where there is high demand and low supply, and areas with more eutrophication, where an excess of nutrients in the water leads to overgrowth of algae and plants, are home to fewer numbers of threatened species than areas with lower water stress and less eutrophication.

The study recommends targeted action to prevent further extinctions and calls for governments and industry to use this data in water management and policy measures. “Lack of data on the status and distribution of freshwater biodiversity can no longer be used as an excuse for inaction,” the study read.

The hump-backed mahseer, for example, badly needs a systematic conservation plan, Raghavan told The Wire. Despite being a “Critically Endangered” species (tigers across the world, in comparison, are only “Endangered” as per the IUCN Red List) and one of India’s mega fish as well as a transboundary species (found in tributaries of the Cauvery in Kerala, Karnataka and Tamil Nadu), there have been no efforts to develop a conservation plan for the species yet, Raghavan said.

“Protection of critical habitats is the most important strategy. There is also need for some more research, as very little information is available on the ecology, movement and early life history of the species,” he added. “This could be a nice example of a flagship species that can bring all three states together (especially as a positive side to the Cauvery water dispute)… securing the future of this species requires an effort from all three states.”

Global implications and call to action

“Freshwater landscapes are home to 10% of all known species on Earth and key for billions of people’s safe drinking water, livelihoods, flood control and climate change mitigation, and must be protected for nature and people alike,” stated Catherine Sayer, IUCN’s freshwater biodiversity lead and lead author of the paper, in a press release. “The IUCN World Conservation Congress this October will guide conservation for the next four years, as the world works to achieve the Sustainable Development Goals and the Kunming-Montreal Global Biodiversity Framework targets by 2030. This information will enable policy makers and actors on the ground to plan freshwater conservation measures where they are most needed,” she added.

“This report really drives home just how under threat freshwater species are globally as a result of human activities,” noted co-author Matthew Gollock, Zoological Society of London’s programme lead for aquatic species and policy and chair of the IUCN Anguillid Eel Specialist Group, in a press release. “The good news is, it’s not too late for us to tackle threats such as habitat loss, pollution and invasive species, to ensure our rivers and lakes are in good condition for the species that call them home.”

Note: This article, first published at 9.32 am on January 13, 2025, was republished at 8.20 am on January 14, 2025.

2024 Hottest Recorded Year, Crossed Global Warming Limit: World Meteorological Organization

The last two years saw average global temperatures exceed a critical warming limit for the first time, Europe’s climate monitor said Friday, as the UN demanded ‘trail-blazing’ climate action.

While this does not mean the internationally-agreed 1.5°C warming threshold has been permanently breached, the United Nations warned it was in “grave danger”.

“Today’s assessment from the World Meteorological Organization (WMO) is clear,” UN chief Antonio Guterres said. “Global heating is a cold, hard fact.”

He added: “Blazing temperatures in 2024 require trail-blazing climate action in 2025. There’s still time to avoid the worst of climate catastrophe. But leaders must act – now.”

The WMO said six international datasets all confirmed that 2024 was the hottest year on record, extending a decade-long “extraordinary streak of record-breaking temperatures”.

The United States became the latest country to report its heat record had been shattered, capping a year marked by devastating tornadoes and hurricanes.

The announcement came just days before President-elect Donald Trump, who has pledged to double down on fossil fuel production, was set to take office.

Excess heat is supercharging extreme weather, and 2024 saw countries from Spain to Kenya, the United States and Nepal suffer disasters that cost more than $300 billion by some estimates.

Los Angeles is currently battling deadly wildfires that have destroyed thousands of buildings and forced tens of thousands to flee their homes.

‘Stark warning’

Another record-breaking year is not anticipated in 2025, even as a UN deadline looms for nations to commit to curbing greenhouse gas emissions.

“My prediction is it will be the third-warmest year,” said NASA’s top climate scientist Gavin Schmidt, citing the US determination that the year has begun with a weak La Niña, a global weather pattern that is expected to bring slight cooling.

The WMO’s analysis of the six datasets showed global average surface temperatures were 1.55°C above pre-industrial levels.

“This means that we have likely just experienced the first calendar year with a global mean temperature of more than 1.5°C above the 1850-1900 average,” it said.

Europe’s climate monitor Copernicus, which provided one of the datasets, found that both of the past two years had exceeded the warming limit set out in the 2015 Paris Agreement.

Global temperatures had soared “beyond what modern humans have ever experienced”, it said.

Scientists stressed that the 1.5°C threshold in the Paris Agreement refers to a sustained rise over decades, offering a glimmer of hope.

Still, Johan Rockstrom of the Potsdam Institute for Climate Impact Research called the milestone a “stark warning sign.”

“We have now experienced the first taste of a 1.5°C world, which has cost people and the global economy unprecedented suffering and economic costs,” he told AFP.

On the edge

Nearly 200 nations agreed in Paris in 2015 that meeting 1.5°C offered the best chance of preventing the most catastrophic repercussions of climate change.

But the world remains far off track.

While Copernicus records date back to 1940, other climate data from ice cores and tree rings suggest Earth is now likely the warmest it has been in tens of thousands of years.

Scientists say every fraction of a degree above 1.5°C matters – and that beyond a certain point the climate could shift in unpredictable ways.

Human-driven climate change is already making droughts, storms, floods and heat waves more frequent and intense.

The death of 1,300 pilgrims in Saudi Arabia during extreme heat, a barrage of powerful tropical storms in Asia and North America, and historic flooding in Europe and Africa marked grim milestones in 2024.

Record ocean heat

The oceans, which absorb 90% of excess heat from greenhouse gases, warmed to record levels in 2024, straining coral reefs and marine life and stirring violent weather.

Warmer seas drive higher evaporation and atmospheric moisture, leading to heavier rainfall and energising cyclones.

Water vapour in the atmosphere hit fresh highs in 2024, combining with elevated temperatures to trigger floods, heatwaves and “misery for millions of people”, Copernicus climate deputy director Samantha Burgess said.

Scientists attribute some of the record heat to the onset of a warming El Niño in 2023.

But El Niño ended in early 2024, leaving them puzzled by persistently high global temperatures.

“The future is in our hands – swift and decisive action can still alter the trajectory of our future climate,” said Copernicus climate director Carlo Buontempo.

How to Make Electric Car Batteries Without Overwhelming Reliance on China

Building indigenous capacities to produce raw materials for lithium-ion batteries will be crucial for achieving energy transition goals.

The global transition to green technologies has increased the demand for lithium dramatically.

This critical mineral, abundant but distributed unevenly, is essential for energy storage and transport electrification.

According to the International Energy Agency, by 2040, the demand for lithium could be up to 42 times its 2020 levels.

Lithium-ion batteries are used to power electric vehicles and store renewable energy such as wind and solar.

In 2023, the demand for batteries crossed 750 GWh, up 40 percent from 2022.

Owing to their high energy density, long cycle life and efficient discharge capacities, these batteries have become crucial in the field of energy storage and electric mobility.

By 2040, over two-thirds of passenger vehicles are expected to be electric. Lithium-ion batteries are also crucial for grid storage systems, ensuring grid reliability by balancing energy inputs and outputs.

Their efficiency and lightweight nature also make them vital for portable electronics.

They are also used in smartphones – in 2022 alone, around 1.39 billion smartphones, mostly powered by lithium-ion batteries, were sold globally.

However, a demand-supply mismatch, particularly in the components used to manufacture these batteries, poses several challenges for these exponentially growing markets.

Major markets for electric vehicles – and thus, lithium-ion batteries – include the US, Europe and China.

India is one of the largest importers of lithium-ion batteries, and its lithium-ion battery market is estimated at US$4.71 billion in 2024. By 2029, it is expected to reach US$13.11 billion.

The problem lies in an overwhelming reliance on China for refining and producing lithium and lithium-ion batteries, which poses a significant challenge for the sustainability goals of several countries.

Challenges in the lithium supply chain 

The production of lithium-ion batteries relies on a complex global supply chain.

This begins with mining companies extracting the minerals and refining them on site to produce battery-grade raw materials. Raw materials typically contain lithium, cobalt, manganese, nickel and graphite.

Manufacturers buy these raw materials and use them to produce cathode and anode active battery materials.

These active materials are then bought by traders and sold to firms that produce battery cells.

Battery manufacturers assemble the battery cells into modules and then pack and sell them to buyers such as automakers, who place the finished batteries in electric vehicles.

The problem starts with the availability of the prime raw material – lithium – its processing and refining, and finally, the production of active materials.

Nearly 80 percent of the known deposits of lithium are in four countries – the South American lithium triangle of Argentina, Bolivia and Chile, and Australia.

The market, however, is dominated by China – a country with meagre reserves.

Despite holding less than 7 percent of reserves, China is the world’s largest importer, refiner and consumer of lithium.

Sixty percent of the world’s lithium products and 75 percent of all lithium-ion batteries are produced in China. This is primarily fuelling China’s electric vehicle market, which is 60 percent of the world’s total. 

Though the US, Europe and India have begun producing lithium-ion battery packs, the production of the most critical components of the lithium-ion battery value chain – cathode and anode active materials – remains concentrated in China.

Depending on the chemistry of the lithium-ion cells, cathode active material would comprise 35-55 percent of the cell, and anode active material 14-20 percent.

Countries aiming to ramp up lithium-ion battery supply would need to focus on the production of these components.

Today, China represents nearly 90 percent of global cathode active material manufacturing capacity, and over 97 percent of anode active material manufacturing capacity.

The remaining gaps in manufacturing capacity are being filled up by Korea and Japan.

Efforts are underway to zero in on a more sustainable, cost-effective and energy-dense chemistry of the lithium-ion cell.

For instance, there’s the NMC battery cell, where the cathode active material is made from a combination of nickel, manganese, and cobalt. Nickel increases the energy density, and manganese and cobalt are used to improve thermal stability and safety.

Then there’s the NCA cell, or the Nickel Cobalt Aluminium Oxide Cell, where the manganese is replaced with aluminium to increase stability.

One of the more coveted cell chemistry technologies is Lithium Cobalt Oxide. With its high specific energy and long runtimes, it is considered ideal for smartphones, tablets, laptops and cameras.

The star of cell chemistries, however, is LFP – the Lithium Iron Phosphate battery.

With their thermal stability, LFP batteries are safer and have a longer cycle life, suitable particularly for off-grid solar systems and electric vehicles. They also perform well in high-temperature conditions and are environment friendly due to the absence of cobalt.

Today, LFP has graduated from a minor share of battery manufacturing to the rising star of the battery industry.

LFP battery cells powered over 40 percent of electric vehicle demand globally in 2023, more than double their share in 2020.

Efforts to increase the manganese content of both NMC and LFP are also underway. This is being done to boost energy density while keeping costs low for LFP batteries, and reduce cost while maintaining high energy density for NMC cells.

Ramping up domestic production

One alternative for making energy storage cost-effective and decreasing reliance on critical minerals such as lithium is sodium-ion batteries.

Though these batteries still require some critical minerals such as nickel and manganese, they do reduce reliance on lithium.

Sodium-ion batteries, just like LFP, were also initially developed in the US and Europe.

But China has taken the lead here too –  its manufacturing capacity is estimated to be about ten times higher than the rest of the world combined.

Pricing of raw materials is a big factor in whether sodium-ion batteries replace lithium ones; currently, low lithium prices are discouraging investment in sodium-ion batteries and delaying expansion plans.

Then there are supply chain bottlenecks, such as those for the high-quality cathode and anode materials required to manufacture sodium-ion batteries.

Until these issues are resolved, countries will have to build indigenous capacities to ramp up their lithium-ion battery production.

A few companies in India have started their manufacturing projects with support from the government, and many others are planning to do so.

The success of these, and others across the world, however, will depend on the localisation of lithium-ion value chain components such as the cathode and anode active materials, separator and electrolytes.

Separators work by keeping the anode and cathode active materials apart to prevent a short circuit; they also contribute to the overall working of the cell, including its thermal stability and safety.

A few Indian companies are now gearing up to produce lithium-ion cathode and anode active materials as well as separators for the domestic as well as global lithium-ion battery supply chain.

They have also developed the technology for production of active raw materials for sodium-ion and aluminium-based batteries.

Such innovations will be crucial for the energy transition goals of countries such as India which are currently heavily dependent on importing raw materials for batteries.

Abhimanyu Singh Rana is an associate professor and the Director of Research & Development at BML Munjal University, where he heads research on advanced materials and devices for clean energy and sustainability. BML Munjal University is working with the Haryana-based Dawson group of companies for testing of raw materials for lithium-ion batteries.

Amlan Ajay, director, Dawson Group, contributed significant technical content for this article.

Originally published under Creative Commons by 360info™.

Rajagopala Chidambaram Was the Last Flag Bearer of Homi J. Bhabha’s Vision and Legacy

The nation has lost a highly capable and versatile scientist who had the country’s development foremost in his mind.

Rajagopala Chidambaram, the former director of the Bhabha Atomic Research Centre (BARC), the former chairman of the Atomic Energy Commission and the former principal scientific advisor (PSA) to the government of India, passed away in the morning of January 4, 2025.

He was 88, and had served in the above capacities during the years 1990-93, 1993-2000 and 2001-2018 respectively.

Given the key roles that he played in India's nuclear explosion tests – 'Smiling Buddha' on May 18, 1974 (Pokhran-I) and the 'Shakti' series of five explosions, which included a thermonuclear device (hydrogen bomb) test, on May 11 and 13, 1998 (Pokhran-II) – he has been widely described as one of the key architects of the Indian nuclear weapons programme. He was certainly that, but also a lot more. It is probably fitting to describe him as the last of the flag bearers of the legacy and holistic vision of Homi Jehangir Bhabha, the founder of the Indian nuclear programme.

Anil Kakodkar. Photo: Public domain/Wikipedia.

Sometime in 1995, this writer had gone to meet him at the Department of Atomic Energy (DAE) guest house in Kidwai Nagar, New Delhi. As Chidambaram walked into the meeting room, a certain well-built younger person followed in tow. He introduced him to me as 'Dr. Anil Kakodkar, a brilliant nuclear engineer', and added, "You will see and hear more of him in the years to come." And that has indeed turned out to be very true.

Brahm Prakash. Photo: IISc official website.

Clearly, he had acquired from Bhabha the same knack for identifying the right people – just as Bhabha had first picked the great metallurgist Brahm Prakash to be his right-hand man, and later chose Chidambaram, a materials scientist at the Indian Institute of Science (IISc), to work with Brahm Prakash on the various metallurgical issues involved in the early growing phase of the country's nuclear programme.

Indeed, Brahm Prakash's mentorship stood Chidambaram in good stead when he was faced with the metallurgical problems involved in carrying out India's first nuclear test, the Peaceful Nuclear Explosion (PNE) at Pokhran in 1974.

This acute ability of Chidambaram was also in evidence when the Tata Institute of Fundamental Research (TIFR) was faced with a quandary in the selection of a new director after Virendra Singh completed his two-term tenure as the director in 1997. Chidambaram, as the Chairman of the three-man selection committee, wanted someone who knew Bhabha, understood his way of administering the institution and had the capability to strive to keep Bhabha’s legacy alive.

But, to maintain the tradition, there was no one meeting Chidambaram's requirements under the age of 55 who could serve as director for two terms. The seniormost scientist at the TIFR then was Sudhanshu S. Jha, a condensed matter theorist, whom Bhabha had spotted as a highly promising theorist and sent to Stanford University in 1960, when he was barely 20, to work with Felix Bloch. But in 1997 Jha was already 57. Chidambaram, after taking the faculty, researchers and students into confidence, decided to break tradition and appoint Jha, who then served one five-year term as the TIFR director.

The PNE of May 18, 1974, was a watershed moment in the Indian nuclear programme and Chidambaram had a very important role in its execution. Sometime around 1967, when Chidambaram was 31, his senior colleague at the BARC, Raja Ramanna, tasked him with the derivation of the 'equation of state (EOS)' of plutonium, a physico-metallurgical problem that is critical to the design of a nuclear explosive device. But information about the EOS of weapons materials was closely guarded by the nuclear weapon states (NWSs) of the time. The EOS is a thermodynamic equation relating the state variables of a material under a given set of physical conditions, such as pressure, volume, temperature or internal energy. In a nuclear device, fissile material such as plutonium is brought under very high pressure so that the fissile core attains a critical density and sustains a chain reaction, releasing enormous amounts of energy.
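To make the term concrete, here is a minimal illustration – not the classified plutonium EOS, which has never been made public – of what such relations look like: the simplest textbook EOS is the ideal-gas law, while shock-physics work commonly uses forms such as the Mie–Grüneisen relation linking pressure to density and internal energy.

```latex
% Illustrative equations of state only; the actual EOS derived for plutonium is not public.
\[
  pV = nRT \qquad \text{(ideal gas)}
\]
\[
  p(\rho, e) = p_{\mathrm{ref}}(\rho) + \Gamma(\rho)\,\rho\,\bigl(e - e_{\mathrm{ref}}(\rho)\bigr)
  \qquad \text{(generic Mie--Gr\"{u}neisen form used in shock physics)}
\]
```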

Though in his recent biographical book India Rising: Memoir of a Scientist he says, “I was surprised when he asked me to take up this work because this was a completely different field from what I was working on”, the reasons for the choice are abundantly clear. He was already a known expert in materials science, given his outstanding work in Nuclear Magnetic Resonance for his doctoral thesis followed by work in crystallography and condensed matter physics, and with the knowledge in metallurgy gained from his association with Brahm Prakash in the nuclear establishment, he was the best person who could have a go at the problem. His success in deriving the EOS ab initio formed the basis of the design of India’s first nuclear device in 1971, the one that was finally used in the Pokhran-I PNE.

APSARA reactor and plutonium reprocessing facility at BARC as photographed by a US satellite on February 19, 1966. Photo: GODL India/Public Domain/Wikipedia.

Following the Pokhran-I nuclear test, Chidambaram got interested in the field of high-pressure physics and initiated broad ‘open research’ in the field, with its obvious strategic implications as well. A whole range of instrumentation to carry out research in the field was developed and built indigenously under his guidance. He also laid the foundation of theoretical high-pressure research for calculation of EOS and phase stability of materials from first principles and the high-pressure physics group attained international recognition as well. The paper on ‘Omega Phase in Materials’ by Chidambaram and colleagues is now regarded as textbook material in materials science.

The characteristics of the 1974 test, which used a 12-kt plutonium device emplaced in a shale medium at a depth of 107 m, in a chamber at the end of an L-shaped hole, "helped understanding of the explosion phenomenology, fracturing effects in rocks, ground motion, containment of radioactivity etc.", Chidambaram wrote. He and his colleagues had written OCENER, a one-dimensional computer code for the numerical simulation of the mechanical effects of underground nuclear explosions in rock, and had developed a computer simulation model based on it.

“An important aspect of this computer simulation is to delineate the fracture system in the rock medium and to select a depth of emplacement to prevent connection of the ground surface with the cavity containing the hot radioactive gases. In the PNE test of 1974 and the five tests carried out in May 1998, such simulation calculations ensured that there is no residual radioactivity on the surface at the test sites,” Chidambaram (and Surinder Sharma) wrote later in the journal Bulletin of Materials Sciences of the Indian Academy of Sciences.    

Though, technically, it had all the design features of a PNE – and indeed Chidambaram had presented the implications of the 1974 nuclear test (Pokhran-I) for peaceful applications of nuclear explosions at the International Atomic Energy Agency (IAEA) in 1975 – it had been planned as a technology demonstrator and the first step in the development of a strategic nuclear weapons programme.

Given that the Nuclear Non-Proliferation Treaty (NPT) was an 'inherently unfair treaty', Chidambaram did not believe in merely keeping the nuclear option open. Criticising the arbitrary cut-off date of January 1, 1967, set in the treaty for a country to be designated a Nuclear Weapon State (NWS), he wrote: "This is equivalent to saying 'you may have a post-graduate degree but if you got it after January 1, 1967, you will still be presumed to be uneducated'." Like his senior colleague Raja Ramanna, he was clear on the strategic importance of the country having a demonstrated nuclear weapons capability as long as others had it.

Chidambaram as the AEC chairman and A.P.J. Abdul Kalam as the scientific adviser to the defence minister and the head of the Defence Research and Development Organisation (DRDO) were the project coordinators for the Shakti series of tests in May 1998 (Pokhran-II). Unlike the Pokhran-I test, these were openly proclaimed as successful nuclear weapon tests and projected as part of an overt Indian nuclear weapons programme. Following Pokhran-II India declared itself to be a Nuclear Weapon State (NWS).

Chidambaram had then also stated that the data from the Pokhran-II tests were sufficient for building a stockpile of nuclear weapons and no further tests were necessary. He reiterated this in August 1999 when he said that India now had the capability to manufacture a nuclear device “of any size”. An unfortunate and immediate fall-out of his involvement in the Pokhran-II tests was that, even though he was the vice-president of the International Union of Crystallography (IUCr), Chidambaram was denied a visa to attend the executive committee meeting of the IUCr in July 1998. 

Former prime minister Atal Bihari Vajpayee visiting Pokhran after the 1998 nuclear tests. Photo: File photo

Despite the good deal of scientific detail put out following the Pokhran-II tests, doubts were expressed about the yields of the tests – the thermonuclear test in particular – not only by Western observers but by no less a person than Chidambaram's predecessor as AEC chairman, P. K. Iyengar. Iyengar's remarks of 2000 and those of Western commentators were picked up as late as 2009 by others such as Ashok Parthasarathi and K. Santhanam, who disputed the claims made by Chidambaram and Kakodkar and argued that India would need to conduct further tests to establish a credible minimum nuclear deterrent. Perhaps due to political pressure, it was left to Chidambaram to come out in defence of the tests on September 24, 2009, with a public point-by-point rebuttal of the doubts being expressed.

Notwithstanding his key contributions to the strategic aspects of the Indian nuclear programme, he was more concerned about India's energy security, especially in the wake of global warming and climate change, and the important role of nuclear power in improving India's per capita electricity consumption and sustainable development. He criticised the IAEA at its General Conference in 2000 for its growing proliferation misconceptions and its undue emphasis on nuclear safeguards (and the consequent rising expenditure towards them), rather than putting its efforts into promoting nuclear power in underdeveloped and developing countries through well-planned international technical collaboration and cooperation programmes.

In his address at the IAEA General Conference of 2000, he said:

“I would like to emphasise that the IAEA was created with the main objective of accelerating and enlarging the contribution of atomic energy to peace, health and prosperity throughout the world. This is the central pillar on which the Agency should rest while giving due consideration to safeguards measures to prevent the use of Agency assistance for military purposes, and establish safety standards for protection of health and minimisation of danger to life and property. Safety and safeguards are indeed important and necessary supporting activities to enlarging and accelerating the contribution of nuclear energy for peaceful purposes. However, they cannot become activities of the IAEA overshadowing the peaceful uses of atomic energy. Primacy must be accorded to technology. This is the only way we can faithfully interpret the time-tested Statute of the Agency…Our delegation…would like to reiterate that IAEA with its comprehensive in-house expertise, as well as its access to globally available expertise, would do well to pool all resources to facilitate the role of nuclear energy in sustainable development. This is the need of the hour…”

During his time at the helm of the Indian nuclear establishment between 1993 and 2000, the Indian nuclear power programme too achieved several milestones. In June 1994, India won its first commercial heavy water export deal, with the DAE supplying 100 MT of heavy water to South Korea; another consignment of 100 MT was exported to South Korea in 1998. Also in 1994, after France stopped supplying low-enriched uranium (LEU) fuel for the Tarapur plant, the DAE succeeded in negotiating an agreement with China for its supply under IAEA safeguards, the first consignment of which was received in January 1995. Later, in 2000, Russia replaced China in the supply of LEU for Tarapur.

The year 1995 also saw the new Narwapahar uranium mine in Jharkhand begin operations. In March 1996, the second reprocessing plant at Kalpakkam was cold commissioned. Later in the year, India's first U-233 fuelled research reactor, Kamini, of 30 kW capacity, attained criticality, proving the DAE's capability in handling the artificial isotope U-233 – derived by burning thorium in nuclear reactors – as a fissile material for the third stage of the Indian nuclear programme. In 1997, India refused to sign the Comprehensive Test Ban Treaty (CTBT), which, India argued, was discriminatory and hampered the growth of its nuclear programme.

Chidambaram's contributions to the multifarious aspects of the Indian nuclear programme are truly noteworthy. But, at the end of it all, he was basically a scientist deeply interested in materials science, particularly the subject of quasicrystals (crystals with ordered, but aperiodic, structure). His hallmark achievement in the field was the first-ever positron annihilation study, which he carried out with colleagues in 1989 to investigate the gaps and defects in a quasicrystalline aluminium-manganese alloy. Indeed, after the period of his intense involvement in the country's strategic programmes during the mid- to late 1990s, he returned to research in the area, which he continued with younger colleagues till 2004.

He was equally concerned with the Indian higher education system and Indian research and development. A little-known fact is that Chidambaram and, separately, Kalam, heading different committees during the Vajpayee regime, were the earliest to moot the idea of setting up IIT-like institutions dedicated to basic sciences. Their recommendations were later taken up by the then Science Advisory Council to the PM (SAC-PM) headed by C.N.R. Rao, whose presentation of the concept to prime minister Manmohan Singh during the UPA-1 regime resulted in the setting up of a chain of institutions called the Indian Institutes of Science Education and Research (IISERs).

As the PSA to the government of India – in which position Chidambaram served for a long tenure of 17 years working with three different prime ministers – he launched initiatives such as Rural Technology Action Group (RuTAG), which empowered rural communities through innovative technologies, Society for Electronic Transactions and Security (SETS), to contribute towards advancing India’s cybersecurity and hardware security infrastructure, and National Knowledge Network (NKN) to connect educational and research institutions across the country.

Around 2005, Chidambaram grew concerned that the publications-centric quantitative measure of basic research activity then in use was an inappropriate yardstick for analysts comparing India with other countries, particularly China. As the PSA, he initiated projects to evolve other metrics for the progress of S&T in the country – metrics that would also quantify activity in patenting, mission- and industry-oriented research, agricultural and rural development, country-specific innovations, and research and high-technology development.

In Chidambaram, the nation has lost a highly capable and versatile scientist who had the country's development foremost in his mind. He kept science and politics (including the politics within the country's S&T apparatus) apart and was thus acceptable to politicians of different hues. It is because of this non-controversial nature that he could serve with ease under different prime ministers and succeed in doing his bit for the development of the country.

R. Ramachandran is a science writer.

Note: The article was edited to correct the name of the treaty that India refused to sign in 1997. It was the Comprehensive Test Ban Treaty and not the Fissile Material Cut-Off Treaty.

J&K Has 11 ‘High-Risk’ Alpine Lakes at Risk of Catastrophic GLOFs, Finds Study

Three teams of researchers from the University of Kashmir, the Central University of Jammu and the Geological Survey of India have conducted a first-of-its-kind study.

Srinagar: Jammu and Kashmir has 67 'potentially dangerous' alpine lakes, 11 of which are 'high-risk' and require immediate intervention to mitigate catastrophic glacial lake outburst floods (GLOFs) that could affect hundreds of thousands of people living downstream, according to a new study.

The first-of-its-kind study by three teams of researchers from the University of Kashmir, the Central University of Jammu and the Geological Survey of India (GSI) has warned that the stability of some alpine lakes could be “severely compromised” by external factors such as cloudbursts, mass movements, avalanches or earthquakes.

“These triggers could lead to rapid destabilisation, causing catastrophic outflows with potentially devastating impacts on downstream areas,” noted the study, which was commissioned by the J&K administration through the Department of Disaster Management, Relief, Rehabilitation and Reconstruction (DDMRRR) earlier this year to formulate risk mitigation strategies for tackling the looming threat of GLOF events in the Union territory (UT).

Global warming causes number of alpine lakes to increase

The UT of Jammu and Kashmir, which falls in the high-risk seismic zone V, is home to more than 300 alpine lakes. In recent years, these lakes have attracted a large number of alpinists and trekking enthusiasts, raising concerns over the impact of a growing human footprint in the eco-fragile Himalayan region.

Due to the rapid melting of glaciers amid global warming, the number of alpine lakes in Jammu and Kashmir has grown over the last two decades, as has the size of some of the existing ones, according to experts.

A group of tourists from Mumbai on the trail to Greater Lakes of Pir Panjal region taking a break at Sandook Sar. Photo: Jehangir Ali.

The latest study, using advanced remote sensing tools and GIS-based analysis, noted that some alpine lakes in Jammu and Kashmir are dammed by rocks, a phenomenon that occurs when meltwater from retreating glaciers collects in depressions, giving these water bodies ‘a higher degree of stability’.

However, taking Sheshnag Lake in Anantnag as an example, the study found that some lakes have steep slopes that are prone to falling debris, which ‘adds to their vulnerability’. In some cases, such as Sona Sar lake, the researchers said that these lakes are vulnerable because of their ‘steep downstream slopes and deformed glacial feeder tongues’.

Tourist favourites at risk from glacial retreat

The study has also warned that some alpine lakes – like Gangabal on the 'Kashmir Great Lakes' trail, which attracts the highest number of professional mountain climbers and trekking enthusiasts every summer – pose a risk due to glacial retreat, which has resulted in “expanding water volume and unstable terrains nearby”.

Professor Pervez Ahmed, who heads the Department of Geography and Disaster Management at the University of Kashmir, said that an inventory of all glacial lakes in Jammu and Kashmir has been prepared after studying their geological structure, volume of water, risk factors, among other parameters approved by the National Disaster Management Authority and the Central Water Commission.

While the University of Kashmir led the study in Kashmir Valley, the alpine lakes in Jammu region were studied by a team led by Prof Sunil Dhar of the Department of Environmental Sciences, Central University of Jammu.

Bram Sar lake, which is located in the higher reaches of south Kashmir’s Kulgam district, was studied by a team of the Geological Survey of India.

The researchers have classified the lakes into four categories – from mildly dangerous to high risk.

“The findings are based on preliminary field studies backed by scientific data. We need to get funding and proper instrumentation, even if through collaborative projects, to devise effective mitigation strategies,” professor Ahmed said.

A cloud of fog floating above Bhag Sar, the third largest alpine lake in Kashmir Valley. Photo: Jehangir Ali.

The preliminary findings of these studies, accessed by The Wire, noted that there are three high-risk lakes – Mundikeswar, Hangu Lake and an unnamed lake in the Kishtwar district of Jammu – that are ‘highly hazardous’, while the remaining eight such alpine lakes are located in the Kashmir valley.

Kishtwar risk greatest

The study has identified Kishtwar in Chenab valley of Jammu division as the “most vulnerable” district of the Union territory.

“These lakes exhibit characteristics such as unstable moraines, steep downstream gradients, and proximity to unstable glacier tongues, making them critical for targeted mitigation measures,” the study, which is being coordinated by Dr Binay Kumar, Associate Director at the Centre for Development of Advanced Computing Ahmedabad, noted.

According to a 2023 study by the International Centre for Integrated Mountain Development, more than 70% of the 700 GLOFs recorded worldwide since 1833 have taken place in the past 50 years, with 1980 witnessing the highest number of GLOF events (15), followed by 13 in 2015.

A GLOF event struck Chorabari Lake in Uttarakhand’s Kedarnath in June 2013, killing thousands of residents, pilgrims and tourists, some of whom were never found, in what became known as the country’s worst natural disaster since the 2004 tsunami.

In April this year, the J&K administration set up a ‘Focused Glacial Lake Outburst Flood Monitoring Committee’ to study the increasing risk of glacial lake overflows which pose a grave risk to lives and livelihoods in the Union territory.

An official said that the research has thrown up “valuable data on the lake conditions, surrounding environmental factors and potential risks of GLOF events” that will be used to formulate “risk mitigation strategy”, including the deployment of early warning systems.

An official spokesperson of DDMRRR said that the strategy, which has not yet been made public, would be implemented in two phases and a Glacial Lake Outburst Flood Early Warning System (EWS) is also proposed to be established to “enhance preparedness”.

Solving the Renewable Energy Puzzle: The Push for Long-Term Power Storage

As nations push toward 100% renewable energy, challenges like “Dunkelflauten” – periods of low solar and wind power – highlight the need for efficient, long-term energy storage solutions.

When the Sun is blazing and the wind is blowing, Germany’s solar and wind power plants swing into high gear. For nine days in July 2023, renewables produced more than 70% of the electricity generated in the country; there are times when wind turbines even need to be turned off to avoid overloading the grid.

But on other days, clouds mute solar energy down to a flicker and wind turbines languish. For nearly a week in January 2023, renewable energy generation fell to less than 30% of the nation’s total, and gas-, oil- and coal-powered plants revved up to pick up the slack.

Germans call these periods Dunkelflauten, meaning “dark doldrums,” and they can last for a week or longer. They’re a major concern for doldrum-afflicted places like Germany and parts of the United States as nations increasingly push renewable-energy development. Solar and wind combined contribute 40% of overall energy generation in Germany and 15% in the US and, as of December 2024, both countries have goals of becoming 100% clean-energy-powered by 2035.

The challenge: how to avoid blackouts without turning to dependable but planet-warming fossil fuels.

Solving the variability problem of solar and wind energy requires reimagining how to power our world, moving from a grid where fossil fuel plants are turned on and off in step with energy needs to one that converts fluctuating energy sources into a continuous power supply. The solution lies, of course, in storing energy when it’s abundant so it’s available for use during lean times.

But the increasingly popular electricity-storage devices today – lithium-ion batteries – are only cost-effective in bridging daily fluctuations in sun and wind, not multiday doldrums. And a decades-old method that stores electricity by pumping water uphill and recouping the energy when it flows back down through a turbine generator typically works only in mountainous terrain. The more solar and wind plants the world installs to wean grids off fossil fuels, the more urgently it needs mature, cost-effective technologies that can cover many locations and store energy for at least eight hours and up to weeks at a time.

Engineers around the world are busy developing those technologies – from newer kinds of batteries to systems that harness air pressure, spinning wheels, heat or chemicals like hydrogen. It’s unclear what will end up sticking.

“The creative part… is happening now,” says Eric Hittinger, an expert on energy policy and markets at Rochester Institute of Technology who coauthored a 2020 deep dive in the Annual Review of Environment and Resources on the benefits and costs of energy storage systems. “A lot of it is going to get winnowed down as front-runners start to show themselves.”

Finding viable storage solutions will help to shape the overall course of the energy transition in the many countries striving to cut carbon emissions in the coming decades, as well as determine the costs of going renewable – a much-debated issue among experts. Some predictions imply that weaning the grid off fossil fuels will invariably save money, thanks to declining costs of solar panels and wind turbines, but those projections don’t include energy storage costs.

Other experts stress the need to do more than build out new storage, like tweaking humanity’s electricity demand. In general, “we have to be very thoughtful about how we design the grid of the future,” says materials scientist and engineer Shirley Meng of the University of Chicago.

Reinventing the battery

The fastest-growing electricity storage devices today – for grids as well as electric vehicles, phones and laptops – are lithium-ion batteries. Recent years have seen massive installations of these around the globe to help balance electricity supply and demand and, more recently, to offset daily fluctuations in solar and wind. One of the world’s largest battery grid storage facilities, in California’s Monterey County, reached its full capacity in 2023 at a site with a natural-gas-powered plant. It can now store 3,000 megawatt-hours (MWh) and is capable of providing 750 MW – enough to power more than 600,000 homes every hour for up to four hours.

Lithium-ion batteries convert electrical energy into chemical energy by using electricity to fuel chemical reactions at two lithium-containing electrode surfaces, storing and releasing energy. Lithium became the material of choice because it stores a lot of energy relative to its weight. But the batteries have shortcomings, including their fire risk, their need for air-conditioning in hot climates and a finite global supply of lithium.

Importantly, lithium-ion batteries aren’t suitable for long-duration storage, explains Meng. Despite monumental price declines in recent years, they remain costly due to their design and the price of mining and extracting lithium and other metals. The battery cost is above $100 per kWh – meaning that a battery container supplying one MW (enough for about 800 homes) every hour for five hours would cost at least $500,000. Providing electricity for longer would quickly become economically unfeasible, Meng says. “I think four to eight hours is really a sweet spot for balancing cost and performance,” she says.
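As a quick sanity check on that figure, here is a minimal back-of-the-envelope sketch; the $100-per-kWh price, the one-megawatt output and the five-hour window are the numbers quoted above, and the rest is plain unit conversion.

```python
# Back-of-the-envelope check of the storage cost quoted above.
price_per_kwh = 100        # USD per kWh of battery capacity (figure from the text)
power_mw = 1               # container output in MW (from the text)
duration_h = 5             # hours of discharge (from the text)

energy_kwh = power_mw * 1000 * duration_h    # 1 MW for 5 h = 5,000 kWh
cost_usd = energy_kwh * price_per_kwh        # 5,000 kWh x $100/kWh

print(f"Energy delivered: {energy_kwh:,} kWh")   # 5,000 kWh
print(f"Battery cost:     ${cost_usd:,}")        # $500,000, matching the article
```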

For longer durations, “we want energy storage that costs one tenth of what it does today – or maybe, if we could, one hundredth,” Hittinger says. “If you can’t make it extremely cheap, then you don’t have a product.”

One way of cutting costs is to switch to cheaper ingredients. Several companies in the US, Europe and Asia are working to commercialise sodium-ion batteries that replace lithium with sodium, which is more abundant and cheaper to extract and purify. Different battery architectures are also being developed – such as “redox flow” batteries, in which chemical reactions take place not at electrode surfaces but in two fluid-filled tanks that act as electrodes. With this kind of design, capacity can be enlarged by increasing tank size and electrolyte amount, which is much cheaper than increasing the expensive electrode material of lithium-ion batteries. Redox-flow batteries could supply electricity over days or weeks, Meng says.

US-based company Form Energy, meanwhile, just opened a factory in West Virginia to make “iron-air” batteries. These harness the energy released when iron reacts with air and water to form iron hydroxide – rust, in other words. “Recharging the battery is taking rust and unrusting it,” says William Woodford, Form’s chief technical officer.

Because iron and air are cheap, the batteries are inexpensive. The downside with both iron-air and redox-flow batteries is that they give back up to 60% less energy than is put into them, partly because they gradually discharge with no current applied. Meng thinks both battery types have yet to resolve these issues and prove their reliability and cost-effectiveness. But the efficiency loss of iron-air batteries could be dealt with by making them larger. And since long-duration batteries supply energy at times when solar and wind power is scarce and more costly, “there’s more tolerance for a little bit of loss,” Woodford says.
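To see what that loss means in practice, a small illustrative calculation – the "up to 60% less" figure is from the paragraph above, while the one-megawatt-hour delivery target is an arbitrary example:

```python
# Illustration of the round-trip penalty described above.
# Returning "up to 60% less" energy implies a round-trip efficiency as low as ~40%.
round_trip_efficiency = 0.40   # worst case implied by the text
energy_delivered_mwh = 1.0     # arbitrary delivery target for illustration

energy_charged_mwh = energy_delivered_mwh / round_trip_efficiency
print(f"Charging needed to deliver 1 MWh: {energy_charged_mwh:.1f} MWh")  # 2.5 MWh
```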

Spinning wheels and squished air

Other engineers are exploring mechanical storage methods. One device is the flywheel, which employs the same principle that causes a bike wheel to keep spinning once set into motion. Flywheel technology uses electricity to spin large steel discs, and magnetic bearing systems to reduce the friction that causes slowdowns, explains electrical engineering expert Seth Sanders of the University of California, Berkeley. “The energy can be stored for actually a very substantial amount of time,” he says.

Sanders’ company, Amber Kinetics, produces flywheels that can spin for weeks but are most cost-effective when used at least daily. When power is needed, a motor generator turns the movement energy back into electricity. As the wheels can switch quickly from charging to discharging, they’re ideal for covering rapid swings in energy availability, like at sunset or during cloudy periods.

Each flywheel can store 32 kWh of energy, close to the daily electricity demand of an average American household. That’s small for grid applications, but the flywheels are already deployed in many communities, often to balance fluctuations in renewable energy. A municipal utility in Massachusetts, for instance, has installed 16 flywheels next to a solar plant; they supply energy for more than four hours, absorbing electricity during low-demand times and discharging during peak demand, Sanders says.

A different kind of mechanical facility stores electricity by using it to compress air, then stashes the air in caverns. “When the grid needs it, you release that air into an air turbine and it generates electricity again,” explains Jon Norman, president of the Canada-based company Hydrostor, which specialises in compressed-air storage. “It’s just a giant air battery underground.”

Such systems usually require natural caverns, but Hydrostor carves out cavities in hard rock. Compared to batteries or flywheels, these are large infrastructure projects with lengthy permitting and construction processes. But once those hurdles are passed, their capacity can be slowly scaled up by carving the caverns more deeply, at pretty low additional cost, Norman says.

In 2019, Hydrostor launched the first commercial compressed-air storage facility, in Goderich, Ontario, storing around 10 MWh — enough to power some 2,100 homes for more than five hours. The company plans several much larger facilities in California and is building a 200-MW facility in the Australian town Broken Hill that can supply energy for up to eight hours to bridge shortfalls in solar and wind energy.

Storing energy as heat and gas

Around the world, there are efforts afoot to make use of excess renewable electricity by using it to heat up water or other heat-storing materials. This can then provide climate-friendly warmth for buildings or industrial processes, says Katja Esche of the German Energy Storage Association.

Heat can also be used to store energy, though that technology is still being developed. Energy storage and systems expert Zhiwei Ma of Durham University in the United Kingdom recently tested a pumped thermal energy storage system. Here, the main energy-storing process occurs when electricity is used to compress a gas, like argon, to a high pressure, heating it up; electricity is generated when the gas is allowed to expand through a turbine generator. Some experts are skeptical of such thermal storage systems, as they supply up to 60% less electricity than they store – but Ma is optimistic that with more research, such systems could help with daily storage needs.

For even longer-duration storage – over weeks – many experts put their bets on hydrogen gas. Hydrogen exists naturally in the atmosphere but can also be produced using electricity to split water into oxygen and hydrogen. The hydrogen is stored in pressurised tanks and when it reacts with oxygen in a fuel cell or turbine, this generates electricity.

Hydrogen and its derivatives are already being explored as fuel for ships, planes and industrial processes. For long-duration storage, “it looks plausible that that would be the technology of choice,” says energy expert Wolf-Peter Schill of the German Institute for Economic Research who coauthored a 2021 review on the economics of energy storage in the Annual Review of Resource Economics.

The German energy company Enertrag is building a facility that uses hydrogen in both ways. Surplus energy from the company’s 700-MW solar and wind plant near Berlin is used to make hydrogen gas, which is sold to various industries. In the future, about 10% of that hydrogen will be stashed away “as an emergency backup measure” for use during weeks without sun or wind, says mechanical engineer Tobias Bischof-Niemz, who is on Enertrag’s board.

The idea of using hydrogen for electricity storage has many critics. Similar to heat, up to two-thirds of the energy is lost during reconversion into electricity. And storing massive quantities of hydrogen over weeks isn’t cheap, although Enertrag is planning on reducing costs by storing it in natural caverns instead of the customary pressurised steel cylinders.

But Bischof-Niemz argues that these expenses don’t matter much if hydrogen is produced from cheap energy that would otherwise be wasted. And, he adds, hydrogen storage would be used only for Dunkelflauten periods. “Because you only have two or three weeks in the year that are that expensive, it works economically,” he says.

A question of cost

There are many other efforts to develop longer-duration storage methods. Cost is key for all, regardless of how much is paid for by governments or utility companies (the latter typically push such costs onto consumers). All new systems will need to prove that they’re significantly cheaper than lithium-ion batteries, says energy expert Dirk Uwe Sauer of Germany’s RWTH Aachen University. He says he has seen many technologies stall at the demonstration stage because there’s no business case for them.

Developers, for their part, argue that the costs of some systems are already approaching those of lithium-ion batteries when used to store energy for eight hours or more, and that costs will come down substantially for others once they are manufactured in large volumes. Many technologies could perhaps ultimately compete with lithium-ion batteries, but getting there, Sauer says, “is extremely difficult.”

The challenge for developers is that the market for long-duration technologies is only beginning to take shape. Many nations, such as the US, are early in their energy transition journey and still lean heavily on fossil fuels. Most regions still have fossil-fuel-powered plants to cover multiday doldrums.

Indeed, Hittinger estimates that the real economic need for long-duration storage will only emerge once solar and wind account for 80% of total power generation. Right now, it can often be cheaper for utilities to build gas plants – fossil fuels, still – to ensure grid reliability.

One important way to make storage technologies more economical is a carbon tax on fossil fuels, says energy systems researcher Anne Liu of Aurora Energy Research. In European countries like Switzerland, utilities are charged up to about $130 per metric ton of carbon emitted. California grid operators, meanwhile, have spurred storage development by requiring utility companies to ensure adequate energy coverage, and helping to cover the cost.

Market incentives can also help. In the Texas energy market, where electricity prices fluctuate a lot, electricity customers are saving hundreds of millions of dollars from the build-out of lithium-ion batteries, despite their costs, as they can store energy when it’s cheap and sell it for a profit when it’s scarce. “Once those power markets have incentive, then the longer-duration batteries will be more viable,” Liu says.
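The arbitrage logic is simple enough to sketch. The prices and round-trip efficiency below are hypothetical illustrations, not actual Texas market data:

```python
# Hypothetical sketch of storage arbitrage; all numbers are invented for illustration.
buy_price = 20                 # $/MWh when solar and wind are abundant (assumed)
sell_price = 200               # $/MWh during a scarcity spike (assumed)
round_trip_efficiency = 0.85   # ballpark for lithium-ion batteries (assumed)
energy_bought_mwh = 100        # energy bought for charging (assumed)

energy_sold_mwh = energy_bought_mwh * round_trip_efficiency
gross_margin = energy_sold_mwh * sell_price - energy_bought_mwh * buy_price

print(f"Energy resold: {energy_sold_mwh:.0f} MWh")   # 85 MWh
print(f"Gross margin:  ${gross_margin:,.0f}")        # $15,000 despite the losses
```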

But even when incentives are there, the question remains of who will foot the bill for energy storage, which isn’t considered in many cost projections for transitioning the grid off fossil fuels. “I don’t think there’s been enough time spent studying how much these decarbonisation pathways are going to cost,” says Gabe Murtaugh, director of markets and technology at the nonprofit Long Duration Energy Storage Council.

Without interventions, Murtaugh estimates, California customers, for instance, could eventually see a threefold increase in utility bills. “Thinking about how states and federal governments might help pay for some of this,” Murtaugh says, “is going to be really important.”

Saving costs and resources

Cost considerations are prompting experts to also think of ways to reduce the need for storage. One way to strengthen the grid is building more consistently available forms of renewable energy, such as geothermal technologies that draw energy from the Earth’s heat. Another is to connect the grid over larger regions — such as across the US or Europe — to balance local fluctuations in solar and wind. Ensuring that storage technologies are as long-lived as possible can help to save costs and resources.

So can being smarter about when we draw electricity from the grid, says Seth Mullendore, president of the Vermont-based nonprofit Clean Energy Group. What if, rather than charging electric cars when getting home from work, we charged them at midday when the Sun is blazing? What if we adjusted building heating and cooling so the bulk would happen during windy periods?

Mullendore’s nonprofit recently helped to design a program in Massachusetts where electricity customers could sign up to get paid if they responded to signals from their utilities to use less energy – for instance, by turning their air-conditioning down or delaying electric car charging. In a smart grid of the future, such tweaks could be more widespread and fully automatic, while allowing consumers to override them if needed. Governments could encourage programs by rewarding utility companies for designing grids more efficiently, Mullendore says. “It’s much less expensive to have people not use energy than it is to build more infrastructure to deliver more energy.”

It will take careful thought and a worldwide push by engineers, companies and policymakers to adapt the global grid to a solar- and wind-powered future. Tomorrow’s grids may be studded with lithium-ion or sodium-ion batteries for short-term energy needs and newer varieties for longer-term storage. There may be many more flywheels, while underground caverns may be stuffed with compressed air or hydrogen to survive the dreaded Dunkelflauten. Grids may have smart, built-in ways of adjusting demand and making the very most of excess energy, rather than wasting it.

“The grid,” Meng says, “is probably the most complicated machine ever being built.”

This article was originally published on Knowable Magazine.

Google’s Willow: A Quantum Leap (But With Baby Steps)

Willow is a testament to human ingenuity and our relentless pursuit of knowledge. While the hype surrounding quantum computing may sometimes outpace reality, the progress being made is irrefutable.

Google’s unveiling of Willow, its latest quantum computing marvel, has sent ripples of excitement and curiosity throughout the world of science and technology. It not only marks a monumental development in quantum computing but also reignites profound questions about the future of computing, artificial intelligence (AI), and even the nature of reality itself.

Will quantum computers like Willow be the key to unlocking AI with unimaginable capabilities? Will they shatter the foundations of Bitcoin and online security? Do they really prove the existence of a multiverse? Will quantum computing change the world? Let’s cut through the hype and dig deeper to find out what all this really means.

What breakthroughs did Google announce?

Google claims Willow has achieved two major breakthroughs. First, it boasts significantly improved error correction, a crucial step towards building reliable quantum computers. Qubits, the building blocks of quantum computers, are incredibly sensitive to environmental disturbances. Even the slightest noise – a change in temperature, a stray electromagnetic field, or sometimes even cosmic rays – can cause qubits to lose information, introducing errors. Imagine performing a complex calculation on a calculator where the numbers constantly flicker and change – that’s the challenge posed by these fragile qubits.

Now, typically, the more qubits you have, the more errors you encounter. It’s like building a towering and intricate structure with LEGO bricks: the larger it gets, the more unstable it becomes. But Willow defies this trend. Google’s engineers have achieved a remarkable feat – as they build larger and larger grids of qubits, the errors actually decrease exponentially. This is indeed a historic achievement, because it opens up possibilities for building larger and more powerful quantum computers. 
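Google reported that the logical error rate was roughly cut in half each time the encoded grid of qubits was enlarged (from 3×3 to 5×5 to 7×7). A toy sketch of that exponential suppression follows; the starting error rate and the factor-of-two suppression per step are illustrative assumptions, not the exact published figures.

```python
# Toy sketch of exponential error suppression as the encoded qubit grid (code distance) grows.
# The factor-of-two suppression per step and the starting rate are illustrative only.
suppression_per_step = 2.0   # assumed error-rate reduction per increase in code distance
logical_error_rate = 1e-2    # illustrative starting logical error rate at distance 3

for distance in (3, 5, 7, 9, 11):
    print(f"code distance {distance:2d}: logical error rate ~ {logical_error_rate:.1e}")
    logical_error_rate /= suppression_per_step
```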

The second breakthrough lies in Willow’s sheer speed. Google claims it performed a benchmark computation task in under five minutes that would take the world’s fastest supercomputer an unfathomable ten septillion (10^25) years to solve. To put this in perspective, the universe itself is estimated to be around 13.8 billion years old. 
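To put those magnitudes side by side, here is the plain arithmetic behind the comparison, using only the figures quoted above:

```python
# Comparing the quoted benchmark numbers.
classical_years = 1e25            # claimed classical runtime (from the text)
willow_minutes = 5                # Willow's runtime (from the text)
age_of_universe_years = 13.8e9    # ~13.8 billion years

classical_minutes = classical_years * 365.25 * 24 * 60
speedup = classical_minutes / willow_minutes
universe_lifetimes = classical_years / age_of_universe_years

print(f"Speed-up factor:          ~{speedup:.0e}")   # roughly 1e30
print(f"Classical runtime equals: ~{universe_lifetimes:.0e} ages of the universe")
```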

How could quantum computers change the future of computing?

Taking five minutes for something that would need 10 septillion years isn’t just a speed boost. It is a fundamental shift in what’s possible. It’s like comparing the blink of an eye to the entire history of the universe – a fleeting moment versus billions of years. So, what does this mean for the future of computing?

Imagine simulating molecular interactions with incredible accuracy, allowing scientists to design personalised drugs for treating cancer, Alzheimer’s and other complex diseases. Quantum simulations could help discover new materials like room-temperature superconductors, or design stronger, lighter and more durable materials for use in everything from aircraft to medical implants. They could analyse complex financial data and model market behaviour with unmatched precision, enabling far superior risk assessment, portfolio optimisation and fraud detection.

And then there’s AI. Quantum computers could not only train AI models faster using larger datasets, but could also identify patterns invisible to classical computers, leading us to create AI that is far more intelligent, accurate and insightful.

Why are we talking about multiverses?

Google’s announcement of Willow contained a rather intriguing aside – a suggestion that its development “lends credence to the notion that quantum computation occurs in many parallel universes, in line with the idea that we live in a multiverse.”

This seemingly casual remark touches upon a profound idea championed by David Deutsch, an Oxford physicist and quantum computing pioneer. Deutsch subscribes to the “many-worlds interpretation” of quantum mechanics, a theory proposing that every quantum measurement spawns a multitude of universes, with each possible outcome of the measurement unfolding in a separate reality.

In his 1997 book titled The Fabric of Reality, Deutsch issued an interesting challenge to “all those who still cling to a single-universe worldview”. He points to Shor’s algorithm, one of the earliest quantum algorithms to be developed, which theoretically allows for the factorisation of incredibly large numbers. However, factorising a sufficiently large number in Deutsch’s example would require 10^500 times the computational resources present in our entire universe, which has only about 10^80 atoms.

“So if the visible universe were the extent of physical reality,” writes Deutsch, “physical reality would not even remotely contain the resources required to factorise such a large number. Who did factorise it, then? How, and where, was the computation performed?” His answer: across the multiverse with each parallel universe contributing to the solution. 

Google’s offhand remark that Willow lends credence to us living in a multiverse might not be as outlandish as it first appears. Deutsch’s challenge forces us to confront the limitations of our conventional understanding of reality. If our universe simply doesn’t possess the capacity to perform such calculations, then where do they occur? The many-worlds interpretation offers an elegant, albeit radical, solution. While definitive proof of the multiverse remains elusive, the extraordinary capabilities of quantum computers like Willow certainly compel us to consider the possibility that our reality is far stranger than we ever imagined.

A sober look at quantum computing

While it’s easy to get swept up in the excitement surrounding Willow, it’s also important to step back and examine the bigger picture. Back in 2019, Google claimed “quantum supremacy” with their Sycamore processor, stating it solved a problem in 200 seconds that would take a supercomputer 10,000 years. However, this claim was quickly challenged by IBM researchers, who showed a classical computer could solve the same problem in just 2.5 days. Later, Chinese researchers devised a clever technique using multiple GPUs to solve the problem even faster, casting further doubt on Google’s supremacy claim.

IBM, a formidable rival to Google in developing quantum computers, cautions against using terms like “quantum supremacy” or “quantum advantage”. These terms create a misleading narrative of a battle between quantum and classical computers, where one ultimately triumphs over the other. This is simply not the case.

Quantum computers excel at specific types of problems, while classical computers remain superior for many everyday tasks. The future of computing lies in them working together by leveraging their unique strengths. Terms like “supremacy” fuel unrealistic expectations and hype, leading to disappointment and disillusionment when quantum computers don’t immediately revolutionise every aspect of our lives. A more nuanced approach highlighting the specific areas where quantum computers offer a clear advantage and emphasising the collaborative nature of quantum and classical computing can foster a more realistic understanding of this evolving technology.

While Google’s work is undeniably impressive from a scientific and engineering standpoint, its immediate consequences for our everyday life, as physicist Sabine Hossenfelder points out, are still exactly zero. Practical applications like drug discovery and cryptography will require millions of qubits, while Willow has just 105. As we explored in our 2020 article, The Inconvenient Truth About Quantum Computing, there is a chasm between current quantum capabilities and those needed for practical applications. Bridging it demands multiple breakthroughs. While Willow is a significant milestone, it’s crucial to recognise the long road and formidable challenges ahead in building commercially viable quantum computers.

Every new announcement in quantum computing triggers a new wave of obituaries for Bitcoin. But no, Willow is no threat to Bitcoin. Cracking Bitcoin’s encryption will require millions of qubits, far beyond our current capabilities. Furthermore, Bitcoin developers have been anticipating and addressing the potential quantum threat since the cryptocurrency’s earliest days.

Willow is a testament to human ingenuity and our relentless pursuit of knowledge. While the hype surrounding quantum computing may sometimes outpace reality, the progress being made is irrefutable. From revolutionising medicine and materials science to reshaping AI and perhaps even unlocking the secrets of the multiverse, quantum computers hold the key to a future that strains the limits of our imagination. As we continue to push the boundaries of this extraordinary technology, we may find ourselves not just changing the world but redefining our very understanding of reality itself.

Viraj Kulkarni is a quantum computing researcher, amateur historian and science writer. His Twitter handle is @VirajZero.