A PhD in Chemistry Doesn’t Mean You’ll Find a Good Job in India

While public university jobs are hard to come by, other jobs don’t provide the same benefits and stability.

New Delhi: As conversations around unemployment in India have grown, so have headlines like this: ‘Graduates, Post-Graduates Among Candidates in Race to Become Helpers in Railways‘. This is just one example of a slew of news stories about how people with higher education degrees – even PhDs – are applying for relatively unskilled work because a government job still comes with a certain amount of benefits and stability, and there just aren’t enough other good jobs out there.

A recent article in Chemistry World examined the value of a chemistry PhD in India, and found that for a lot of people, it doesn’t quite get them where they want to go. Permanent, well-paying academic jobs are hard to come by, and private universities and firms don’t offer the same incentives. Each opening at a public university now sees about 250 eligible candidates.

According to the author, this has meant that a number of chemistry PhDs from India spend years moving from one post-doctoral fellowship to the other.

Why are qualified professionals finding it so hard to find a job? “Unfortunately, we are not witnessing any major expansion among Indian firms and multi-national corporations, be it pharma, chemicals or personal care. No disruptive trend has emerged in India that could drive the job market, like contract research organisations or drug discovery units,” Shyam Suryanarayan of CDrive, a specialist recruitment firm, told Chemistry World.

Also read: Let’s Leave Research to the Researchers

In addition, it doesn’t look like things are getting better anytime soon. “For example, IIT Madras alone will be producing several hundred [chemistry] PhDs by 2024. The number will be massive when one takes into account all the IISERs, IITs and central universities. But there are no jobs at such institutions,” Thalappil Pradeep, a professor of chemistry at IIT Madras, told the publication.

As expected, this means young PhD scholars often look abroad for the right opportunities. To prevent this ‘brain drain’, the Department of Science and Technology launched the ‘Innovation in Science Pursuit for Inspired Research’ or INSPIRE programme, which provides fellowships for research. But as The Wire has reported before, INSPIRE hasn’t been achieving its goals:

The recipients search for a host research institute or university department to conduct their research in. The term “assured opportunity” has led to expectations that they would eventually be absorbed by the institute or department. But about 35% of the initial batches of INSPIRE faculty fellows now find themselves at the end of the road, with neither a job in hand nor any encouraging prospects.

The Prime Minister’s Research Fellowship encourages students to enrol in PhD programmes by providing monetary support during the degree, but it says nothing about employment once they finish. And that fellowship too has left students feeling underwhelmed (and underpaid).

Also read: Research Has to Be Nudged Into ‘National Interest’ Areas – Not Sledgehammered

Some people argue that the recruitment process at universities is flawed – which is why positions remain vacant even when qualified candidates are available. Abhishek Dey, from the Kolkata-based Indian Association for the Cultivation of Science, told Chemistry World that a central agency, with experts and scientists, should be set up and tasked with hiring for all positions.

One claim the author makes is that “Currently, Indian policymakers are trying to change how PhDs are viewed, repositioning them as evidence of a skillset to solve complex problems rather than a passport to an academic job.” However, he doesn’t quite substantiate what he means by that – and his claim has triggered a conversation on Twitter.

Scientists and post-doctoral fellows commenting on the article have said that if the government has such plans, they are yet to make them public. Funding for higher education remains a problem, and key issues that are leading to this job crisis remain unaddressed. And if that continues, so will the dilemma for many researchers – in chemistry and other subjects – who want to put their training to good use.

Lise Meitner – the Forgotten Woman of Nuclear Physics Who Deserved a Nobel Prize

Left off publications due to Nazi prejudice, this Jewish woman lost her rightful place in the scientific pantheon as the discoverer of nuclear fission.

Nuclear fission – the physical process by which very large atoms like uranium split into pairs of smaller atoms – is what makes nuclear bombs and nuclear power plants possible. But for many years, physicists believed it energetically impossible for atoms as large as uranium (atomic mass = 235 or 238) to be split into two.

That all changed on February 11, 1939, with a letter to the editor of Nature – a premier international scientific journal – that described exactly how such a thing could occur and even named it fission. In that letter, physicist Lise Meitner, with the assistance of her young nephew Otto Frisch, provided a physical explanation of how nuclear fission could happen.

It was a massive leap forward in nuclear physics, but today Lise Meitner remains obscure and largely forgotten. She was excluded from the victory celebration because she was a Jewish woman. Her story is a sad one.

What happens when you split an atom

Meitner based her fission argument on the “liquid droplet model” of nuclear structure – a model that likened the forces that hold the atomic nucleus together to the surface tension that gives a water droplet its structure.

She noted that the surface tension of an atomic nucleus weakens as the charge of the nucleus increases, and could even approach zero tension if the nuclear charge was very high, as is the case for uranium (charge = 92+). The lack of sufficient nuclear surface tension would then allow the nucleus to split into two fragments when struck by a neutron – a chargeless subatomic particle – with each fragment carrying away very high levels of kinetic energy. Meitner remarked: “The whole ‘fission’ process can thus be described in an essentially classical [physics] way.” Just that simple, right?
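For readers who want to see the arithmetic, a back-of-the-envelope version of the Meitner–Frisch energy estimate runs as follows (the figures below are illustrative; their 1939 letter arrived at roughly 200 MeV per fission): the two fragments together weigh about one-fifth of a proton mass less than the original uranium nucleus, and that missing mass reappears as kinetic energy via Einstein’s mass–energy relation.

```latex
% Rough Meitner-Frisch estimate of the energy released per fission
% (illustrative numbers; the published letter quotes ~200 MeV)
\[
  \Delta m \approx \tfrac{1}{5}\, m_p \approx 0.2 \times 938\ \mathrm{MeV}/c^{2}
  \quad\Longrightarrow\quad
  E = \Delta m\, c^{2} \approx 200\ \mathrm{MeV}
\]
```

Reassuringly, the same figure matches the electrostatic repulsion energy of the two positively charged fragments as they fly apart.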

Meitner went further to explain how her scientific colleagues had gotten it wrong. When scientists bombarded uranium with neutrons, they believed the uranium nucleus, rather than splitting, captured some neutrons. These captured neutrons were then converted into positively charged protons and thus transformed the uranium into the incrementally larger elements on the periodic table of elements – the so-called “transuranium,” or beyond uranium, elements.

Also read: What the Nobel Prizes Are Not

Some people were skeptical that neutron bombardment could produce transuranium elements, including Irene Joliot-Curie – Marie Curie’s daughter – and Meitner. Joliot-Curie had found that one of these new alleged transuranium elements actually behaved chemically just like radium, the element her mother had discovered. Joliot-Curie suggested that it might be just radium (atomic mass = 226) – an element somewhat smaller than uranium – that was coming from the neutron-bombarded uranium.

Meitner had an alternative explanation. She thought that, rather than radium, the element in question might actually be barium – an element with a chemistry very similar to radium. The issue of radium versus barium was very important to Meitner because barium (atomic mass = 139) was a possible fission product according to her split uranium theory, but radium was not – it was too big (atomic mass = 226).

When a neutron bombards a uranium atom, the uranium nucleus splits into two different smaller nuclei. Credit: Stefan-Xp/Wikimedia Commons, CC BY-SA

Meitner urged her chemist colleague Otto Hahn to try to further purify the uranium bombardment samples and assess whether they were, in fact, made up of radium or its chemical cousin barium. Hahn complied, and he found that Meitner was correct: the element in the sample was indeed barium, not radium. Hahn’s finding suggested that the uranium nucleus had split into pieces – becoming two different elements with smaller nuclei – just as Meitner had suspected.

As a Jewish woman, Meitner was left behind

Meitner should have been the hero of the day, and the physicists and chemists should have jointly published their findings and waited to receive the world’s accolades for their discovery of nuclear fission. But unfortunately, that’s not what happened.

Meitner had two difficulties: She was a Jew living as an exile in Sweden because of the Jewish persecution going on in Nazi Germany, and she was a woman. She might have overcome either one of these obstacles to scientific success, but both proved insurmountable.

Lise Meitner and Otto Hahn in Berlin, 1913.

Meitner had been working as Hahn’s academic equal when they were on the faculty of the Kaiser Wilhelm Institute in Berlin together. By all accounts they were close colleagues and friends for many years. When the Nazis took over, however, Meitner was forced to leave Germany. She took a position in Stockholm, and continued to work on nuclear issues with Hahn and his junior colleague Fritz Strassmann through regular correspondence. This working relationship, though not ideal, was still highly productive. The barium discovery was the latest fruit of that collaboration.

Yet when it came time to publish, Hahn knew that including a Jewish woman on the paper would cost him his career in Germany. So he published without her, falsely claiming that the discovery was based solely on insights gleaned from his own chemical purification work, and that any physical insight contributed by Meitner played an insignificant role. All this despite the fact he wouldn’t have even thought to isolate barium from his samples had Meitner not directed him to do so.

Hahn had trouble explaining his own findings, though. In his paper, he put forth no plausible mechanism as to how uranium atoms had split into barium atoms. But Meitner had the explanation. So a few weeks later, Meitner wrote her famous fission letter to the editor, ironically explaining the mechanism of “Hahn’s discovery.”

Even that didn’t help her situation. The Nobel Committee awarded the 1944 Nobel Prize in Chemistry “for the discovery of the fission of heavy nuclei” to Hahn alone. Paradoxically, the word “fission” never appeared in Hahn’s original publication, as Meitner had been the first to coin the term in the letter published afterward.

Also read: Chemistry Nobel Prize Goes to Frances Arnold, George Smith and Gregory Winter

A controversy has raged about the discovery of nuclear fission ever since, with critics claiming it represents one of the worst examples of blatant racism and sexism by the Nobel committee. Unlike Marie Curie, the prominent female nuclear scientist whose career preceded hers, Meitner never had her contributions to nuclear physics recognised by the Nobel committee. She has been totally left out in the cold, and remains unknown to most of the public.

Meitner received the Enrico Fermi Award in 1966. Her nephew Otto Frisch is on the left. Credit: IAEA, CC BY-SA

After the war, Meitner remained in Stockholm and became a Swedish citizen. Later in life, she decided to let bygones be bygones. She reconnected with Hahn, and the two octogenarians resumed their friendship. Although the Nobel committee never acknowledged its mistake, the slight to Meitner was partly mitigated in 1966 when the US Department of Energy jointly awarded her, Hahn and Strassmann its prestigious Enrico Fermi Award “for pioneering research in the naturally occurring radioactivities and extensive experimental studies leading to the discovery of fission.”

The two-decade late recognition came just in time for Meitner. She and Hahn died within months of each other in 1968; they were both 89 years old.

Timothy J. Jorgensen, Director of the Health Physics and Radiation Protection Graduate Program and Associate Professor of Radiation Medicine, Georgetown University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

On the 150th Anniversary of the Periodic Table: What It Could’ve Looked Like

Contrary to popular belief, the periodic table didn’t actually start with Dimitri Mendeleev. Many had tinkered with arranging the elements.

The periodic table stares down from the walls of just about every chemistry lab. The credit for its creation generally goes to Dimitri Mendeleev, a Russian chemist who in 1869 wrote out the known elements (of which there were 63 at the time) on cards and then arranged them in columns and rows according to their chemical and physical properties. To celebrate the 150th anniversary of this pivotal moment in science, the UN has proclaimed 2019 to be the International Year of the Periodic Table.

John Dalton’s element list Credit: Wikimedia Commons

But the periodic table didn’t actually start with Mendeleev. Many had tinkered with arranging the elements. Decades before, chemist John Dalton tried to create a table as well as some rather interesting symbols for the elements (they didn’t catch on). And just a few years before Mendeleev sat down with his deck of homemade cards, John Newlands also created a table sorting the elements by their properties.

Mendeleev’s genius was in what he left out of his table. He recognised that certain elements were missing, yet to be discovered. So where Dalton, Newlands and others had laid out what was known, Mendeleev left space for the unknown. Even more amazingly, he accurately predicted the properties of the missing elements.

Dimitry Mendeleev’s table complete with missing elements Credit: Wikimedia Commons

Notice the question marks in his table above? For example, next to Al (aluminium) there’s space for an unknown metal. Mendeleev foretold it would have an atomic mass of 68, a density of six grams per cubic centimetre and a very low melting point. Six years later, Paul Émile Lecoq de Boisbaudran isolated gallium and, sure enough, it slotted right into the gap with an atomic mass of 69.7, a density of 5.9 g/cm³ and a melting point so low that it becomes liquid in your hand. Mendeleev did the same for scandium, germanium and technetium (which wasn’t discovered until 1937, 30 years after his death).
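Putting those numbers side by side shows just how close the prediction was (the ≈30°C melting point of gallium is added here for reference; the other figures are the ones quoted above):

```latex
% Mendeleev's predicted "eka-aluminium" versus gallium as isolated by Lecoq de Boisbaudran
\begin{tabular}{lll}
  Property      & Predicted by Mendeleev & Gallium, six years later \\
  Atomic mass   & $\approx 68$           & $69.7$ \\
  Density       & $6\ \mathrm{g/cm^{3}}$ & $5.9\ \mathrm{g/cm^{3}}$ \\
  Melting point & very low               & $\approx 30\,^{\circ}\mathrm{C}$ (melts in the hand) \\
\end{tabular}
```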

At first glance Mendeleev’s table doesn’t look much like the one we are familiar with. For one thing, the modern table has a bunch of elements that Mendeleev overlooked (and failed to leave room for), most notably the noble gases (such as helium, neon, argon). And the table is oriented differently to our modern version, with elements we now place together in columns arranged in rows.

Today’s periodic table Credit: Offnfopt/Wikipedia

But once you give Mendeleev’s table a 90-degree turn, the similarity to the modern version becomes apparent. For example, the halogens – fluorine (F), chlorine (Cl), bromine (Br) and iodine (I) (the J symbol in Mendeleev’s table) – all appear next to one another. Today they are arranged in the table’s 17th column (or group 17, as chemists prefer to call it).

Period of experimentation

It may seem a small leap from this to the familiar diagram but, years after Mendeleev’s publications, there was plenty of experimentation with alternative layouts for the elements. Even before the table got its permanent right-angle flip, folks suggested some weird and wonderful twists.

Heinrich Baumhauer’s spiral. Reprinted (adapted) with permission from Types of graphic classifications of the elements. III. Spiral, helical, and miscellaneous charts, G. N. Quam, Mary Battell Quam. Credit: American Chemical Society.

One particularly striking example is Heinrich Baumhauer’s spiral, published in 1870, with hydrogen at its centre and elements with increasing atomic mass spiralling outwards. The elements that fall on each of the wheel’s spokes share common properties just as those in a column (group) do so in today’s table. There was also Henry Basset’s rather odd “dumb-bell” formulation of 1892.

Nevertheless, by the beginning of the 20th century, the table had settled down into a familiar horizontal format with the strikingly modern looking version from Heinrich Werner in 1905. For the first time, the noble gases appeared in their now familiar position on the far right of the table. Werner also tried to take a leaf out of Mendeleev’s book by leaving gaps, although he rather overdid the guess work with suggestions for elements lighter than hydrogen and another sitting between hydrogen and helium (none of which exist).

Heinrich Werner’s modern incarnation.
Reprinted (adapted) with permission from Types of graphic classifications of the elements. I. Introduction and short tables, G. N. Quam, Mary Battell Quam. Copyright (1934) American Chemical Society.

Despite this rather modern looking table, there was still a bit of rearranging to be done. Particularly influential was Charles Janet’s version. He took a physicist’s approach to the table and used a newly discovered quantum theory to create a layout based on electron configurations. The resulting “left step” table is still preferred by many physicists. Interestingly, Janet also provided space for elements right up to number 120 despite only 92 being known at the time (we’re only at 118 now).

Charles Janet’s left-step table. Credit: Wikipedia

Settling on a design

The modern table is actually a direct evolution of Janet’s version. The alkali metals (the group topped by lithium) and the alkaline earth metals (topped by beryllium) got shifted from far right to the far left to create a very wide looking (long form) periodic table. The problem with this format is that it doesn’t fit nicely on a page or poster, so largely for aesthetic reasons the f-block elements are usually cut out and deposited below the main table. That’s how we arrived at the table we recognise today.

That’s not to say folks haven’t tinkered with layouts, often as an attempt to highlight correlations between elements that aren’t readily apparent in the conventional table. There are literally hundreds of variations (check out Mark Leach’s database) with spirals and 3D versions being particularly popular, not to mention more tongue-in-cheek variants.

3D ‘Mendeleev flower’ version of the table. Credit: Тимохова Ольга/Wikipedia

How about my own fusion of two iconic graphics, Mendeleev’s table and Henry Beck’s London Underground map below?

The author’s underground map of the elements. Credit: Mark Lorch

Or the dizzy array of imitations that aim to give a science feel to categorising everything from beer to Disney characters, and my particular favourite, “irrational nonsense”. All of which goes to show how the periodic table of elements has become the iconic symbol of science.

This article is republished from The Conversation. Read the original article here.

Some Onions Make Us Cry, and Some Don’t. Here’s Why

The chemical formula behind your tears.

Why do some onions have more of an eye-stinging effect than others? Credit: Pixabay

Mark Antony in Shakespeare’s Antony and Cleopatra may have referred to “the tears that live in the onion”. But why do onions actually make us cry? And why do only some onions make us blub in this way when others, including related “allium” plants such as garlic, barely ever draw a tear when chopped?

When any vegetable is damaged, its cells are ripped open. The plant often then tries to defend itself by releasing bitter-tasting chemicals called polyphenols that can be off-putting to hungry animals trying to eat it. But an onion’s defence mechanism goes further, producing an even more irritating chemical, propanthial s-oxide, meant to stop the plant being consumed by pests.

This volatile chemical is what’s known as a lachrymatory factor. Its volatility means that, once it’s released, it quickly evaporates and finds its way into our eyes. There it dissolves in the water covering the surface of our eyes to form sulphenic acid. This irritates the lacrimal gland, also known as the tear gland – hence the rather grand name of lachrymatory factor. Because the amount of acid produced is so small, its effect is only irritating and not harmful.

The release of propanthial s-oxide was originally thought to be down to one enzyme in the onion known as allicinase, a biological catalyst that speeds up the production of the eye-irritating compound. But some research has suggested two enzymes could be needed to produce these eye-watering effects.

This more complex explanation starts with the sulphur the onion absorbs from the ground and holds in a compound called PRENCSO 1 (1-propenyl-L-cysteine sulphoxide). When the onion is damaged it releases the allicinase, which reacts with the PRENCSO to produce ammonia and another chemical called 1-propenylsulphenic acid. The second enzyme, known as a lachrymatory-factor synthase, then turns this into the troublesome propanthial s-oxide.
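Written out as a simplified scheme (using only the compound and enzyme names given above, with ammonia shown as a by-product of the first step), the two-enzyme route looks like this:

```latex
% Two-enzyme route to the onion's lachrymatory factor, as described in the text
\[
  \text{PRENCSO 1}
  \;\xrightarrow{\ \text{allicinase}\ }\;
  \text{1-propenylsulphenic acid} \;(+\ \text{ammonia})
  \;\xrightarrow{\ \text{lachrymatory-factor synthase}\ }\;
  \text{propanthial s-oxide}
\]
```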

So why do some onions have more of an eye-stinging effect than others? There is lots of debate about this. One plausible explanation is that it’s related to the amount of sulphur the onion has absorbed from the ground, which can depend on the soil and the growing conditions. Higher levels of sulphur in the soil help boost both the yield and pungency of onions.

Certainly sweeter onions tend to have less of the sulphur-containing compounds that eventually produce the propanthial s-oxide. But it’s also possible that no two onions from the same bag will have the same effect, so cutting into the vegetable may be the only way to know if it will make you cry.

However, we have a better idea of why the onion’s cousin garlic doesn’t have the same effect. It contains a slightly different compound called alliin or PRENCSO 2, which doesn’t break down further into eye-stinging chemicals. Instead it produces allicin, which has been linked to many of garlic’s health benefits.

Stop the tears

One solution to the crying problem may be to re-engineer the humble onion by selective breeding or genetic modification to suppress the lachrymatory-factor synthase enzyme. This might also have the added benefit of improving how onions taste as less propanthial S-oxide would mean more thiosulphinate, the compound associated with the flavour of fresh onions.

There are also a number of lower-tech solutions that have been suggested to solve the onion-chopping problem. As the reaction involves enzymes, the rate of reaction and amount of irritating chemicals produced can be cut by either damaging the enzymes or slowing them down.

In theory, blanching the onions (scalding them with boiling water then plunging them into freezing cold water) will denature the enzymes involved and so prevent the reaction from happening. This method is used when freezing many vegetables but it may not be practical to boil your onions before chopping them.

Slowing the reaction can be achieved by putting your onions in the fridge or freezer before chopping. But it’s best not to store onions in a fridge in the long term, as they become soggy and soft, lose their flavour and give off an unpleasant smell. It is best to keep onions in a cool, dark, well-ventilated place that is less humid than the fridge.

Other approaches involve drawing the volatile chemicals away from you as you are chopping the onion. This could be done by using a cooker hood or running water, stopping the compounds making their way to your eyes. You can even buy goggles to stop the irritant reaching your eyes. But the ability of evaporated propanthial s-oxide to reach our eyes regardless means that even then you should be prepared to weep as you slice.

Duane Mellor is senior lecturer at Coventry University.

This article was originally published on The Conversation.

Crosstalk: A Toast To Scientific Theories, Laws and Modelling

It’s important to appreciate why a future in biological discovery for a student with no appreciation of mathematics seems somewhat bleak.

All theories can and should be refined by new experiments, and this leads to new, testable predictions. A macro photo of a Romanesco broccoli. Credit: Nicolas Raymond/Flickr, CC BY 2.0

“Essentially all models are wrong, but some are useful.”
– George E.P. Box

This is an old anecdote that always draws a chuckle. A farmer, a biologist and a theoretical physicist walk into a bar, and end up discussing how cows can produce more milk. The farmer ruminates for a moment, and says, “We need to improve the food content, nutrition and living conditions of cows. Then they’ll give more milk!” The biologist pauses just for a moment and declares emphatically that we should make genetically engineered cows that will produce much more milk. Meanwhile the physicist, who has been furiously thinking, suddenly breaks into a victorious smile, and says, “Assume that the cow is a sphere…”

It’s ridiculously easy to heckle physicists and mathematicians with such jibes. After all, who else can reduce complexity to unknown constants and ignore messy realities for an ideal, impossible scenario? And who else grapples for years with abstract, impractical and seemingly useless problems in order to describe a natural phenomenon? But it is exactly this feature of physicists and mathematicians that has helped them come up with theories and models that explain complex phenomena from the natural world, and which have transformed our understanding of nature.

A cornerstone of the scientific method is the scientific theory, and at the heart of scientific theories are scientific models. Theory in science is very different from ‘theory’ in commonspeak. To most of us, unfortunately, theory means an idea or a random thought. This usually means that on a good day, our ‘theories’ might be a reasonable scientific hypothesis. But in science, a theory is a well-established principle that explains some phenomenon of the natural world.

So scientific theory sits at a pinnacle and provides explanations for phenomena that are built upon observations and evidence. Theories are the most substantial form of scientific knowledge. Famous scientific theories include the theory of gravity, evolution by natural selection and many more. There is also some confusion between a scientific law and a theory, and a vague belief that a law is somehow superior to a theory. This is not true, because they are not mutually exclusive. A law only describes how a particular phenomenon will work under certain conditions. A theory aspires to be broader and tries to give us an all-encompassing view of how natural phenomena work. So a theory can contain one or many laws within it, and both laws and theories can be facts.

There are two features of a scientific theory. Theories give us models that explain data and theories provide predictions that are testable. So all theories can and should be refined by new experiments, and this leads to new, testable predictions. When you properly stick to the scientific method, this becomes a cycle of theory driving experiments driving theory, all leading to new knowledge.

Scientists need those “spherical cows” to develop testable models and theories, and developing them requires assumptions. By common definition, an assumption is something that is accepted without evidence. But just as the meaning of a theory changes from commonspeak to science, so does the meaning of an assumption. First, since you need to start somewhere, a scientific theory needs assumptions, but it uses as few of them as possible. Second, assumptions should be of the kind that have to be made (like the assumption that truth exists), and they need to be built on actual evidence.

So the hallmark of scientific theory is this combination of falsifiability and testability, something we have explored before. But the best part of a theory, and models coming from such a theory is the opening up of whole new worlds of testable possibilities and creating new areas of research. This is what has made the past century such a supercharged era for scientific discovery. And this is what allows the process of discovery to “boldly go where no man has gone before”.

Sticking to stereotype, physicists have rightly delighted in theory and models. Yet in biology (outside of the evolutionary sciences), theory has maintained a lower profile and is perhaps under-appreciated. But just as in physics, theory, and the models emerging from theory, have transformed the landscape of biology. There are famous theories and models in biology – the most famous being Darwin’s and Wallace’s theory of evolution by natural selection. That theory relied on a series of observations Darwin and Wallace had independently made, in different parts of the world.

While the ideas suggested by their theory far exceeded what their observations then showed, they were within the framework of testable ideas, and over the coming decades a whole host of findings from disciplines ranging from biology to genomics to geology built on, refined and expanded that framework. There are other famous models in biology that came from limited data but could be proposed because they provided an explanation that fit known rules and could be experimentally tested. They also opened up new possibilities for other research, which could themselves be tested. A famous example is the DNA double helix model of Watson and Crick.

When Watson and Crick got into the race for discovering what DNA looked like and how it worked, many things were known. DNA had been discovered decades earlier, and biochemical giants like Phoebus Levene and Erwin Chargaff had worked out the composition and chemistry of nucleic acids, of which DNA is a type. Oswald Avery and his colleagues had shown that all hereditary units, or ‘genes’, were made up of DNA. So there was great excitement in understanding how DNA managed to code this information. So the search was for a model that could explain all of this.

The only real data they had at hand were spots on an X-ray film, but scientists of the time, notably Linus Pauling and Max Perutz, had come up with a way of deciphering how crystals of proteins diffracted X-rays, and translating that information to bond-angles and structure. But DNA’s structure was much harder to decipher, and here Rosalind Franklin’s data and tragic story played a critical role. Still, the spots of DNA diffraction could not easily be built into an understandable model, and Franklin herself was struggling with it. But Watson and Crick used a combination of a thorough understanding of DNA chemistry, the physical rules by which they worked and building actual cardboard-and-wire models of the four nucleic acids making up DNA, to work out how DNA could assemble. Their physical model, like a jigsaw puzzle, fit perfectly. The experiments needed to test it were self-evident (and proved to be immediately true). The model also explained lots of existing data that had remained inexplicable.

This moment truly transformed biology.

Now, all these models we’ve talked about come from experimental data and are built by using parsimonious explanations of the data to explain broader natural phenomena. They don’t require any special ability in mathematics as such. But even in biology, there is a special place for the type of theory provided by physicists and mathematicians. At its best, mathematical modelling explains what is and is not possible, and decisively helps rule out something very unlikely to be. This is very powerful with deterministic models, which determine outcomes in exactly the same way for a given set of initial conditions. It is also very powerful with stochastic models, which are really statistical models, where randomness is present and the outcomes are probabilistic distributions.
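To make that contrast concrete, here is a minimal sketch in Python, using an invented logistic-growth toy model rather than anything from this essay: the deterministic version returns the same trajectory every time it is run with the same starting conditions, while the stochastic version adds random noise and so yields a different trajectory on each run, from which one can build a probability distribution of outcomes.

```python
import random

def deterministic_growth(x0, r, k, steps):
    """Logistic growth: identical inputs always give the identical trajectory."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + r * x * (1 - x / k))
    return xs

def stochastic_growth(x0, r, k, steps, noise=0.05):
    """The same model with Gaussian noise: repeated runs give a spread of outcomes."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(max(0.0, x + r * x * (1 - x / k) + random.gauss(0, noise * x)))
    return xs

# Deterministic runs are identical; stochastic runs differ and form a distribution.
print(deterministic_growth(1.0, 0.3, 100.0, 5) == deterministic_growth(1.0, 0.3, 100.0, 5))
print(stochastic_growth(1.0, 0.3, 100.0, 5))
print(stochastic_growth(1.0, 0.3, 100.0, 5))
```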

When done rigorously, statistical models make a compelling argument on what is the most statistically likely explanation for a phenomenon or outcome for an event. There are other types of modelling in biology that heavily rely on aspects of mathematical modelling. This includes constructing metabolic or signalling networks in cells or organisms, which tell you how information is transferred within organisms (these could be food, chemical molecules or external stimulus). This also includes cellular modelling –  understanding how proteins fold and function. And concepts from mathematical modelling are heavily used in assembling big genomes as well as making sense of the enormous amounts of information in the metagenomes of multiple organisms.

Theory, in biology, has now almost come full circle, from being prominent a century ago to fading into near obscurity to now coming back into prominence. The importance of theory is only going to increase in the coming years of Big Data. A future in biological discovery for a student with no appreciation of mathematics seems somewhat bleak. So in our current frenetic era of experimental biology, let us raise a small toast of appreciation for theory and models, which help us understand how the natural world works.

Sunil Laxman is a scientist at the Institute for Stem Cell Biology and Regenerative Medicine, studying cellular decision-making. He has a keen interest in the history and process of science and how science influences society.

Makers of World’s Smallest Machines Win Nobel Prize for Chemistry

These machines are a thousand times smaller than the width of a human hair, and have been able to function like motors and elevators, and even mimic muscles.

Jean-Pierre Sauvage, J. Fraser Stoddart and Bernard L. Feringa, the winners of the 2016 Nobel Prize for chemistry. Credit: nobelprize.org

The 2016 Nobel Prize for chemistry has been awarded to Jean-Pierre Sauvage, J. Fraser Stoddart and Bernard L. Feringa “for the design and synthesis of molecular machines”. These machines are a thousand times smaller than the width of a human hair, and have been able to function like motors and elevators, and even mimic muscles.

The first machine built among the laureates was by Sauvage in 1983, when he linked two ring-shaped molecules together like a chain. His invention demonstrated that it was possible to hold molecules together not through a chemical bond, where they share electrons, but through a mechanical bond. The chain he built was called a catenane and set the stage for more complex machines.

Sauvage was followed by Stoddart in 1991, when he built the rotaxane: a ‘rotor’ molecule threaded onto an ‘axle’ molecule such that, together, they could function like the parts of a very small car. Eight years later, Feringa built a molecular motor, with which he embarked on building the world’s first nanocar.

According to nobelprize.org: “In terms of development, the molecular motor is at the same stage as the electric motor was in the 1830s, when scientists displayed various spinning cranks and wheels, unaware that they would lead to electric trains, washing machines, fans and food processors.” The hope is that molecular machines of the future can be all these things – as well as make for new materials and sensors.

Sauvage is French, Stoddart Scottish, and Feringa Dutch, and they’re aged 72, 74 and 65 respectively. As a result, the average age of the laureates of the Nobel Prize for chemistry has slightly increased from 58 years, while the modal demographic remains that of a white male.

What the Photon is the Ig Nobel Prize?

The Ig Nobel Prizes break stereotypes, telling students that everyday things have a research question latent in them, and then provide a lifetime pass to think crazy.

The Ig Nobel Prize ceremony in 2006. Paper planes are visible on the stage. Credit: jdlouhy/Flickr, CC BY 2.0

Picture this: An alarm clock rings. As is the universal wont, you try to snooze/stop the shrieking contraption from hell, but wonder of wonders, it runs away from you. You promptly do what any self-respecting individual does when confronted with the situation: you cuss at the blasted thing, and run after it in a bid to stop it from shrieking any further. The device finally captured, you shut it off and look at it with an expression that can only be described as a weird mix of bewilderment and rage. But in all this din, there’s one good thing that has happened: you have gotten yourself out of the bed. And now that you are up, you think to yourself you might as well jump into the shower, and get set to begin the day.

How would you like to possess one such alarm clock? Well, the thing exists, and the series of events described as it rings are more or less correct.

Now, answer these: What is the surface area of an Indian elephant? What is the amount of friction between the sole of your shoe and a banana peel as you step on it? What happens in the brains of people who see the face of Jesus in a piece of toast?

How did you fare? No, wait, that’s the incorrect question. Why would one even know the answers to these questions? The answer, it turns out, is the reason why we put a rover on Mars and the reason why we touch a park bench that has a signboard declaring “Freshly painted, don’t touch!”

Curiosity. Wonder. That’s where science begins.

Those questions above and many more of their kind caught the fancy of several groups of researchers, and they doggedly went on to find their answers. A job well done, they thought, and all was well. But then some people thought that was exceedingly more than well and they gave them an award for it. The award is the Ig Nobel Prize.

Not to be confused with the Nobel Prizes, the Ig Nobel Prizes were instituted by the science humour magazine Annals of Improbable Research. These prizes have been given away every year since 1991 to honour achievements that “make people laugh, and then think”. The prizes celebrate the unusual in the scientific imagination in 10 disciplines, one of which is ‘interdisciplinary’.

The awards are presented by Nobel laureates at a grand, rather quirky, ceremony in the Sanders Theatre at Harvard University. The awardees have exactly 60 seconds to present their acceptance speech. A forbidding eight-year-old girl appointed for the purpose makes her displeasure known when the 60-second rule is violated. She simply gets on the stage and coos: “Please stop, I am bored.” The trick works. Watched by a thousand-odd spectators in the theatre, who are also given the chance to shoot paper aeroplanes on to the stage, the event is telecast live on the internet. This year’s Ig Nobel Prizes will be announced on September 22.

The Ig Nobels are a powerful tool in the storytelling of science. Your story losing steam and faltering and stumbling? No problem! Just nonchalantly segue to an Ig Nobel research question: How do reindeer react to seeing humans disguised as polar bears? Chances are you now have a better story because the all-important reindeer are smack-dab at the centre of the universe, as your previous story blissfully sorties the unimaginable expanses of an unheard of multiverse. Your readers are happy. And thankful.

The postmodern science student wants more, more than what an iPhone can deliver. And so the postmodern science teacher brings in the Ig Nobels when she teaches electrochemistry. To wit: why in certain houses in the town of Anderslov, Sweden, did people’s hair turn green?  The electrochemical series is never more fun and the students love it.

The Ig Nobels enthuse people to science, technology and medicine. While on medicine, here’s an Ig Nobel winning piece of research: a scientist from Papua New Guinea investigated injuries due to falling coconuts. In his paper, he describes four patients with head injuries resulting from this. The last two sentences of his paper’s abstract read: “Two required craniotomy. Two others died instantly in the village after being struck by dropping nuts.”

No, the Ig Nobels do not ridicule science. On the contrary, they make it evident that science is fun. The Ig Nobels break stereotypes. They enable students to realise that a scientist isn’t a weird looking, bespectacled person with unkempt hair attired in a lab coat embellished with a pocket protector. These are the awards that tell students that everyday things have a research question latent in them. They give students the lifetime pass to think crazy, and come up with zany ideas. And then use the scientific method to whittle and sieve the idea through.

Back in the classroom, as we discuss another Ig Nobel story, I ask my students what they think of the prizes. All love them and some boldly claim that they want an Ig Nobel as well. I am heartened. But for a split-second I wonder if I am falling short in making my students think big. But then the thought disappears, and I know the students will be good. They are after all the postmodern science students that want more. They want the Ig Nobel. And I can live knowing that, very happily so.

Sangeetha Balakrishnan teaches chemistry at the Women’s Christian College, Chennai.

Here’s the Clever Chemistry That Can Stop Your Food Rotting

Anyone for a 2,516-day-old burger?

A 2,516 day old burger. Source: Screenshot of the live feed of the burger from the Bus Hostel, Reykjavik.

A hotel in Reykjavík has on display a McDonald’s burger and fries, seemingly un-decomposed after 2,512 days – and counting. It was bought on October 30, 2009, the day that the last McDonald’s in Iceland closed. But you don’t have to go to Reykjavík to see it: it has its own webcam so you can watch it from your armchair.

What makes this meal so long-lived? Well, I haven’t examined this particular burger myself, but chemical reactions cause food to decay – and understanding them can help us to keep food better and for longer.

Let’s start with uncooked rice – in many people’s minds it’s a foodstuff that will keep for a long while. Experts reckon that polished white rice will keep for 30 years when properly sealed and stored in a cool, dry place. This means in an airtight container with oxygen absorbers that remove the gas that can oxidise molecules in the rice.

Hotter food goes off faster; as you may remember from school science lessons, chemical reactions are faster at higher temperatures because hotter molecules have more energy and so are more likely to react when they collide. It’s one reason we have fridges. But there is a limit. Above a certain temperature (approximately 50-100°C), the enzymes in a bacterium become denatured: their ‘active sites’, where their catalytic activity happens and where they bind to the molecules they act on, lose their shape and can no longer carry out reactions.
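The temperature dependence described here is usually summed up by the Arrhenius equation, a standard piece of chemical kinetics rather than anything specific to this article:

```latex
% Arrhenius equation: the rate constant k grows rapidly with temperature T
\[
  k = A\, e^{-E_a/(RT)}
\]
% A: pre-exponential factor, E_a: activation energy,
% R: gas constant, T: absolute temperature.
% Even a modest rise in T multiplies k considerably, which is why warm food
% spoils faster and why refrigeration slows, but does not stop, microbial chemistry.
```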

Back in the 19th century, Louis Pasteur invented the process that bears his name. Pasteurisation kills the bacteria that make food go off and today this is applied mainly to milk. Milk that has been pasteurised by heating to just over 70°C will keep for two to three weeks when refrigerated, while UHT milk, made by heating to 140°C, will keep in airtight, sterile containers for up to nine months. Raw milk left in the fridge would last only a few days.

Living off the land

The short life of food was the reason that medieval armies ‘lived off the land’ by scavenging, but in 1809 a Frenchman named Nicholas Appert won a prize offered by his government for a process for preserving food. He showed that food sealed inside a container to exclude air and then cooked to a high enough temperature to kill microbes such as Clostridium botulinum kept for a long time.

He’d invented canning, which came into widespread use and not just for feeding armies and expeditions – it was immediately taken up by the civilian sector, too. Tinned food certainly works. Sir William Edward Parry, for example, took 26 tons of canned pea soup, beef and mutton with him in 1824 on his expedition to find the Northwest Passage. One of these mutton cans was opened in 1939 and found to be edible, if not very palatable.

Conversely, cold slows germ growth. Keeping food at around 5°C in a fridge slows microbial growth – but it doesn’t stop it. People living in very cold areas like the Arctic discovered this sooner, of course, without the need for fridges. And watching the Inuit fish under thick ice gave Clarence Birdseye the idea of fast-freezing food; this creates smaller ice crystals than ordinary freezing, resulting in less damage to cell walls, so the food not only keeps for longer but also tastes better.

Sugar and spice and all things nice

Beginning with communities in hotter regions like the Middle East, dried food has been around for thousands of years – the earliest cases are thought to date back to 12,000 BC. Drying food, whether using the sun (and wind) or modern factory processes, removes water from the cells of the microbes that break down food. This stops them reproducing and ultimately kills them.

An extension of this is the use of salt (or sugar) to preserve food. While salt beef and pork may conjure up thoughts of the Royal Navy in the days of Jack Aubrey and Stephen Maturin – heroes of Patrick O’Brian’s Napoleonic novels – the process goes back much further than that.

Master and Commander: Aubrey and Maturin.

In the Middle Ages, salted fish like herring and cod were widely eaten in northern Europe and fish was of course essential during Lent. The cells of microorganisms have walls that are permeable to water but not to salt. When the cell is in contact with salt, osmosis takes place, so water moves out of the cell in order to try to equalise the salt concentration inside and outside the cell and eventually so much water is removed from the cell that it dies. No more bacteria.

Sugar has a similar effect: just think of fruit preserves, jam or jellies. Smoking also dries out food. Some of the molecules formed when wood is burned, like vanillin, will add flavour, while others, including formaldehyde and organic acids, have preservative properties.

Freeze-drying is a more modern way of removing water from food; perhaps this is the kind of coffee that you use. Modern manufacturers are tapping into something that the Incas in the High Andes developed 2,000 years ago to prepare freeze-dried potatoes, known as chuño, a practice that continues today. Potatoes are left out overnight, when freezing temperatures are guaranteed, and farmers then trample on them, bare-footed, to mash them up. The blistering sun completes the job, leaving a food that will keep for months – food either for the Inca armies or the peasants of Bolivia and Peru.

How about spices? Well, both onion and garlic have antimicrobial properties. There is evidence that the use of spices in warmer climates is linked with their antimicrobial properties, so adding them to food can help preserve it.

The antibacterial activity of some spices, notably cinnamon and coriander, is probably due to the aldehydes they contain – reactive molecules containing a –CHO group, formed by oxidising alcohols and including hexenal, the molecule we smell when grass is freshly cut.

The spice that has got most attention is turmeric, made from the roots of a plant in the ginger family, Curcuma longa, and particularly a molecule it contains, called curcumin. Turmeric was used in food in the Indus valley over 4,000 years ago, as well as in medicine. Today, it may be a useful lead molecule against Alzheimer’s disease, as well as possibly interfering with various signalling pathways implicated in cancers.

So there is sound science behind the processes used to preserve food, and some of these substances may have hidden benefits to our health. That hamburger in Iceland, however, remains a mystery. There have certainly been plenty of media stories trying to get to the bottom of its apparent immortality – but the only way to be sure would be to subject it to rigorous scientific enquiry. Perhaps I’ll book my flight.

Simon Cotton is a senior lecturer in chemistry at the University of Birmingham. 

This article was originally published on The Conversation. Read the original article.

Five Chemistry Inventions That Enabled the Modern World

It turns out most people just don’t have a good idea of what it is chemists do, or how chemistry contributes to the modern world.

Did you know that the discovery of a way to make ammonia was the single most important reason for the world’s population explosion from 1.6 billion in 1900 to 7 billion today? Or that polythene, the world’s most common plastic, was accidentally invented twice?

The chances are you didn’t, as chemistry tends to get overlooked compared to the other sciences. Not a single chemist made it into Science magazine’s Top 50 Science stars on Twitter. Chemistry news just doesn’t get the same coverage as physics projects, even when the project was all about landing a chemistry lab on a comet.

So the Royal Society of Chemistry decided to look into what people really think of chemistry, chemists and chemicals. It turns out most people just don’t have a good idea of what it is chemists do, or how chemistry contributes to the modern world.

Chemistry hall of fame. Credit: Andy Brunning/[Compound Interest], Author provided

This is a real shame, because the world as we know it wouldn’t exist without chemistry. Here’s my top five chemistry inventions that make the world you live in.

1. Penicillin

Not a cowshed, but a wartime penicillin production plant. Credit: Wellcome Images

There’s a good chance that penicillin has saved your life. Without it, a prick from a thorn or a sore throat can easily turn fatal. Alexander Fleming generally gets the credit for penicillin: in 1928, he famously observed how a mould growing on his petri dishes suppressed the growth of nearby bacteria. But, despite his best efforts, he failed to extract any usable penicillin. Fleming gave up and the story of penicillin took a 10-year hiatus, until in 1939 Australian pharmacologist Howard Florey and his team of chemists figured out a way of purifying penicillin in usable quantities.

No, I’d rather not say cheese / Howard Florey. Credit: Wikimedia Commons

However, as World War II was raging at the time, scientific equipment was in short supply. The team therefore cobbled together a fully functional penicillin production plant from bath tubs, milk churns and book shelves. Not surprisingly, the media were extremely excited about this new wonder drug, but Florey and his colleagues were rather shy of publicity. Instead, Fleming took the limelight.

Full-scale production of penicillin took off in 1944 when the chemical engineer Margaret Hutchinson Rousseau took Florey’s Heath Robinson-esque design and converted it into a full-scale production plant.

2. The Haber-Bosch process

Ammonia revolutionised agriculture. Credit: eutrophication&hypoxia/Flickr, CC BY-SA

Nitrogen plays a critical role in the biochemistry of every living thing. It is also the most common gas in our atmosphere. But nitrogen gas doesn’t like reacting with very much, which means that plants and animals can’t extract it from the air. Consequently a major limiting factor in agriculture has been the availability of nitrogen.

In 1910, German chemists Fritz Haber and Carl Bosch changed all this when they combined atmospheric nitrogen and hydrogen into ammonia. This in turn can be used as crop fertiliser, eventually filtering up the food chain to us.
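The reaction at the heart of the process is simple to write down; the operating conditions in the comment below are the usual textbook figures, not something taken from this article:

```latex
% Haber-Bosch ammonia synthesis (exothermic; run industrially over an iron
% catalyst at roughly 400-500 degrees C and 150-300 atmospheres)
\[
  \mathrm{N_2 + 3\,H_2 \;\rightleftharpoons\; 2\,NH_3}
  \qquad \Delta H \approx -92\ \mathrm{kJ\ mol^{-1}}
\]
```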

Today about 80% of the nitrogen in our bodies comes from the Haber-Bosch process, making this single chemical reaction probably the most important factor in the population explosion of the past 100 years.

3. Polythene – the accidental invention

They may be plastic but they are vintage and very valuable. Credit: Davidd/Flickr, CC BY-SA

Most common plastic objects, from water pipes to food packaging and hardhats, are forms of polythene. The 80m tonnes of the stuff made each year are the result of two accidental discoveries.

The first occurred in 1898 when German chemist Hans von Pechmann, while investigating something quite different, noticed a waxy substance at the bottom of his tubes. Along with his colleagues he investigated and discovered that it was made up of very long molecular chains which they termed polymethylene. The method they used to make their plastic wasn’t particularly practical, so much like the penicillin story, no progress was made for some considerable time.

Then in 1933 an entirely different method for making the plastic was discovered by chemists at ICI, the now defunct chemical company. They were working on high-pressure reactions and noticed the same waxy substance as von Pechmann. At first they failed to reproduce the effect, until they noticed that in the original reaction oxygen had leaked into the system. Two years later ICI had turned this serendipitous discovery into a practical method for producing the common plastic that’s almost certainly within easy reach of you now.

4. The Pill and the Mexican yam

Yum – Mexican yam! Credit: Katja Schulz/Flickr, CC BY-SA

In the 1930s, physicians understood the potential for hormone-based therapies to treat cancers and menstrual disorders and, of course, for contraception. But research and treatments were held back by massively time-consuming and inefficient methods for synthesising hormones. Back then progesterone cost the equivalent (in today’s prices) of $1,000 per gram, while now the same amount can be bought for just a few dollars. Russell Marker, a professor of organic chemistry at Pennsylvania State University, slashed the costs of producing progesterone by discovering a simple shortcut in the synthetic pathway. He went scavenging for plants with progesterone-like molecules and stumbled upon a Mexican yam. From this root vegetable he isolated a compound that took just one simple step to convert into progesterone for the first contraceptive pill.

5. The screen you are reading on

LCD screens rock. Credit: Ian T. McFarland/Flickr, CC BY-SA

Incredibly, plans for flat-screen colour displays date back to the late 1960s, when the British Ministry of Defence decided it wanted flat screens to replace the bulky and expensive cathode ray tubes in its military vehicles. It settled on an idea based on liquid crystals. It was already known that liquid crystal displays (LCDs) were possible; the problem was that they only really worked at high temperatures. So not much good unless you are sitting in an oven.

In 1970 the MoD commissioned George Gray at the University of Hull to work on a way to make LCDs function at more pleasant (and useful) temperatures. He did just that when he invented a molecule known as 5CB. By the late 1970s and early 1980s, 90% of the LCD devices in the world contained 5CB, and you’ll still find it in the likes of cheap watches and calculators. Meanwhile, derivatives of 5CB make phones, computers and TVs possible.

Mark Lorch tweets as @sci_ents

Mark Lorch is Senior Lecturer in Biological Chemistry at University of Hull.

This article was originally published on The Conversation. Read the original article.