The Dangerous Salesman of Science 

Oppenheimer tells himself a lie, says my scholar friend. ‘That the bomb has a moral end.’

In his 1987 Nobel lecture, Joseph Brodsky said that, anthropologically speaking, a human being is primarily an aesthetic creature, and only after that an ethical one.

This assertion rings true in the case of J. Robert Oppenheimer. Fascinated by the leaps quantum physics was making, he was driven to follow the path of Niels Bohr and Werner Heisenberg. Returning from Cambridge to expand his research at Berkeley, he fell into the arms of the American state and became part of the Manhattan Project to develop the atomic bomb.

It is comic irony that Lewis Strauss, who secretly plotted against Oppenheimer, was forced to work as a shoe salesman during the recession, while Oppenheimer earned the distinction of being called “the great salesman of science” by Edward Teller. The epithet marks the moral turn in Oppenheimer’s life. Christopher Nolan likened his character to the titan Prometheus, though midway he seemed to metamorphose into Frankenstein. The hamartia of Oppenheimer’s life, to use Aristotle’s term for the Greek tragic hero’s fatal flaw, turned it into a modern horror story.

The poet Joseph Brodsky’s distinction becomes relevant at this point: Oppenheimer abandoned the moral for the aesthetic. My scholar friend (who wishes to remain unnamed) shared the opinion that Oppenheimer, initially lost in the beauty of pure theory, transforms that aesthetic obsession into a monstrous one. She added the sharp insight: “Oppenheimer tells himself a lie. That the bomb has a moral end.” The act of lying to oneself produced a psychic wound within Oppenheimer. He lost sight of the moral aspect within his aesthetic pursuit. The lie made the transformation possible. The sublime beauty of studying quantum physics was ruined the moment Oppenheimer decided to use his expertise for a detrimental cause.

A still from the film ‘Oppenheimer’.

The sale of his scientific skills to the American state for making the bomb had a clear political objective for Oppenheimer: to finish off Hitler. This logic let him overcome the moral dilemma behind his job: any force that can destroy evil is legitimate. The destructive power of science was a seductive option to nullify the power of fascism. The Jewish Oppenheimer did not get his revenge on the Nazis (who were already defeated by the time the bomb was ready). The American state used it against a weakened Japan to declare its omnipotence.

Young Oppenheimer’s interest in T.S. Eliot’s ‘The Waste Land’ and the Gita has a deep connection: Eliot’s poem ends by evoking the Upanishad, “Shantih shantih shantih”, a peace of the grave that fell upon a world torn apart by the end of World War I and the flu epidemic. The line Oppenheimer translated from the Gita, “I am become Death, destroyer of worlds”, is what Krishna says of his divinity: that he is time itself, which destroys the world at will. It was meant to exhort a weak-kneed Arjuna (who did not want to kill his cousins, elders and kinsmen), reminding him of his duty as a warrior and preparing him for battle. The figures of divine incarnation and warrior-prince got fused in the scientist who invented a weapon that could kill millions.

Oppenheimer’s interest in the evocative moments in the two texts shows a certain death wish he carried within himself. When you are hell-bent on destroying the enemy, you are also out to kill a part of yourself through the act of retributive justice.

Oppenheimer was not able to see the ethical difference between annihilating a system of power and annihilating people. This failure, however, is an intimate part of the modern West’s history. It produced ideas of the state – fascism, communism and imperial democracies – in which the other, within and outside one’s ideological fold, was demonised as the absolute enemy and marked for extermination. In making the bomb for use in war, Oppenheimer not only used science as a tool of destruction but created an ideology of science as a divine power that could kill uncountable numbers of people as readily as it could heal the world.

It has been acknowledged that Nolan did not glorify war, in that he did not show the bomb being dropped on the two Japanese cities. Still, as my scholar friend pointed out, Nolan could not prevent himself from indulging Hollywood’s fetish for spectacle. There was a clear lack of self-restraint. The slow-motion explosion of the bomb that filled the screen numbed the audience and engulfed it in the terror of its silence.

Contrast this with Abbas Kiarostami, who in his Koker trilogy did not show the earthquake that rocked Iran, choosing instead to portray its psycho-social repercussions on the lives of the residents who suffered its impact. Kiarostami’s art of filmmaking is deeply informed by his ethical hesitation.

A still from Abbas Kiarostami’s ‘And Life Goes on’.

Nolan had more reasons to hold back from depicting the technological grandeur of an instrument of death. The temptation to recreate the spectacle is not simply an aesthetic flaw. 

The euphoria of the scientific feat was viscerally exhibited by the bodies of people stomping the floor of the hall celebrating Oppenheimer. It announced the coming of a new crowd in world history that took nationalist pride in the mass destruction of other people. Oppenheimer looked conflicted, remorseful and eaten by guilt. But there was nothing to suggest he completely regretted his success. Truman, embodying the masculine pragmatism of the American state, lampooned Oppenheimer as a “crybaby”. No one cared about the real babies in Hiroshima and Nagasaki. Such is the moral indifference of war. It causes deafness of the soul.

Manash Firaq Bhattacharjee is an author. His latest book is Nehru and the Spirit of India.

‘Barbie’, ‘Oppenheimer’, and Why We Shouldn’t Avert Our Eyes from Hiroshima

Yoshito Matsushige was a photographer for the local newspaper and his are the only photographs from the city that day. He remembered asking them for forgiveness, wiping away his tears, and saying “I just took a picture of you as you are suffering, but this is my duty.”

Today, August 6, marks the 78th anniversary of the destruction of Hiroshima by an atomic bomb dropped on the city by the United States during the Second World War.

Tokyo: America’s pathologies are, in my experience, more apparent (though no less troubling) from afar. The simultaneous release of the films Barbie and Oppenheimer resulting in the distasteful but hardly surprising “Barbenheimer” meme is a case in point: America’s twin obsessions of how it looks in the mirror and how it’s remembered in the history books have collided head on, leaving a twisted mess of wreckage – however harmless at this point – revealing more about our culture and ourselves than we care to admit.

As someone who has yet to see either film, I’ll withhold judgment on the filmmakers’ vision and their success or failure in realising it on the big screen. Like last year, I’m happy to report that I’m spending much of this summer in Japan visiting family, meaning my 11-year-old son’s grandparents and a whole host of welcoming uncles, aunts, cousins, nephews, nieces, and neighbours.

These, it should be said, are exactly the kind of ordinary Japanese folks that director Christopher Nolan chose to leave out of his film about the “mastermind” of the atomic bomb, and precisely the people who suffered and died in the hundreds of thousands when the US dropped atomic bombs on Hiroshima and Nagasaki.

I respect Nolan as a filmmaker and, once again, will withhold judgment until I see the film. Being in Japan – which has yet to set a release date for Oppenheimer, but is expected to do so later this year, after the August 6 and August 9 anniversaries of the 1945 atomic bombings – I haven’t had the opportunity to see his film, which I certainly will. I have, however, had the opportunity to visit Hiroshima on a number of occasions, first as a young journalist nearly 30 years ago, and last summer with my wife and son.

On that first visit in 1995, not long before the 50th anniversary of the bombing of Hiroshima, I had the great honour of interviewing Yoshito Matsushige – a photographer for the local newspaper who was just 2.7 kilometres from the hypocentre when the blast occurred at 8:15 that August morning.

As I wrote last August in Hiroshima’s Message, “His immediate reaction was to grab his camera and head toward the fire. But when he saw ‘the hellish state of things’ he couldn’t bring himself to take pictures. ‘It was great weather that morning,’ he said, ‘without a single cloud. But under that blue sky, people were exposed directly to heat rays. They were burned all over, on the face, back, arms, legs—their skin burst, hanging. There were people lying on the asphalt, their burnt bodies sticking to it, people squatting down, their faces burnt and blackened. I struggled to push the shutter button.’”

After what seemed like an eternity, Matsushige said, he finally brought himself to take two pictures of people, suffering horribly, who had gathered on Miyuki Bridge, about 2.3 kilometres from the hypocentre. Many were middle-school children, their bodies terribly burned. Someone was applying cooking oil to their wounds. He remembered asking them for forgiveness, wiping away his tears, and saying “I just took a picture of you as you are suffering, but this is my duty.”

In all, Matsushige snapped his shutter just seven times – the only photos taken in Hiroshima on August 6, 1945, that survive to this day. He died in 2005, at the age of 92, a dedicated peace activist who shared his story with people around the globe, including before the UN General Assembly.

West end of Miyuki Bridge in Hiroshima, morning of Aug. 6, 1945. This was taken moving in closer after the photo above, explained Yoshito Matsushige. That evening the injured were taken by truck to Ujina and Ninoshima Island. Photo: Yoshito Matsushige, Chugoku Shimbun.

Christopher Nolan has said his film – which was inspired by Kai Bird and Martin J. Sherwin’s Pulitzer Prize-winning biography of Oppenheimer, American Prometheus – is focused more on the moral dilemmas facing the scientist tasked with making a bomb that could end World War II than on making a war “documentary.”

“He [Oppenheimer] learned about the bombings of Hiroshima and Nagasaki on the radio—the same as the rest of the world,” Nolan told MSNBC’s Chuck Todd. “That, to me, was a shock… Everything is his experience, or my interpretation of his experience. Because as I keep reminding everyone, it’s not a documentary. It is an interpretation. That’s my job.”

Fair enough. But I can’t help thinking of the photographer Matsushige and what he told me over a quarter-century ago, while taking pictures of children whose clothes and skin were charred and hanging from their bodies when only a few moments prior they were walking to school on a clear August morning: “I just took a picture of you as you are suffering, but this is my duty.”

Below is an interview, also from 1995, with the then-mayor of Hiroshima, Takashi Hiraoka, who, coincidentally, was a journalist before entering politics and worked for the same newspaper as Matsushige, the Chugoku Shimbun. I remember him as a true gentleman in his mid-60s, at ease with his role in local politics and passionate about sharing Hiroshima’s “Never Again” message with the world.

Now 95, Hiraoka served eight years as mayor of Hiroshima before retiring in 1998. Since our interview, two more countries – Pakistan and North Korea – have joined the nine-member “nuclear club.” According to the Stockholm International Peace Research Institute, the US, the UK, Russia, France, China, India, Pakistan, Israel, and North Korea have among them nearly 16,000 nuclear weapons, all of which are many times more powerful than the two bombs dropped on Japan in August 1945.

§

Excerpts from the 1995 interview of the then-mayor of Hiroshima, Takashi Hiraoka, as published by The Japan Times Weekly, August 5, 1995. Used with the author’s permission.

MJ: As mayor of Hiroshima, what is your message to the world on the 50th anniversary of the atomic bombing?

TH: The atomic bombings of Hiroshima and Nagasaki marked not only the end of World War II but the beginning of the nuclear age. In this respect, the bombings were a tragedy for all of humanity. The people of Hiroshima have chosen to see their experience as a lesson for humanity. The 50th anniversary of the atomic bombing of Hiroshima is an excellent opportunity for us to look back on our past and think about our future.

Our message has always been that the tragedies of Hiroshima and Nagasaki should never be repeated. Now that we have reached the half-century landmark, this message should be re-emphasized, together with the call for nuclear disarmament. I see the 50th anniversary as an opportunity to come together with the people of the world so that we can work toward the abolition of nuclear weapons.

Japan does not consider the use of nuclear weapons to be against international law. What is Hiroshima’s official stance on the deployment of nuclear weapons?

As the first city to have experienced a nuclear attack, we firmly believe that the use of nuclear weapons violates international law. We believe this for two main reasons. The first is the indiscriminate nature of nuclear weapons. It is extremely difficult, if not impossible, to restrict the destructive power of nuclear weapons. The second reason is the extraordinary cruelty of nuclear weapons. What I mean by this is that there are still many hibakusha (atomic bomb survivors) suffering the effects of radiation exposure.

International law prohibits the deployment of weapons that inflict unnecessary suffering on human beings, such as “dumdum” bullets and chemical weapons. The United Nations General Assembly has passed a number of resolutions prohibiting the use of nuclear weapons for the very same reason.

The Japanese government has three non-nuclear principles: not to produce, possess, or harbour nuclear weapons. We must continue to push the government to uphold these three principles. Unfortunately, the Japanese government does not have a strong stance toward U.S. foreign policy because it wants to maintain good U.S.-Japan relations. But I think the Japanese government should have a stronger stance toward the United States, particularly in regard to its nuclear-weapons policy.

In what ways does the city of Hiroshima influence the governments of other nations? How do you get your message across to the world?

Whenever a foreign country conducts a nuclear-weapons test, we immediately send a telegram protesting the test and calling for an end to further nuclear-weapons testing. We also have a program called the International Conference of Mayors for Peace Through Inter-city Solidarity. Currently, 404 cities in 97 countries are a part of the program and support our call for the total abolition of nuclear weapons. The purpose of the program is to contribute to lasting world peace by strengthening the ties between the cities of the world…

Hiroshima and Nagasaki have long been calling for the total abolition of nuclear weapons. How can this goal be attained when many countries—Iran and North Korea, for example—see having a nuclear-weapons program as the key to gaining respect on the world stage?

This is a very difficult problem, and one the whole world will have to work on together to solve. We must continue to tell the citizens and leaders of the world that possessing nuclear weapons will never be a positive thing. Governments justify their nuclear arsenals with language like “national security.” But what about global security?

Not only does nuclear war mean the annihilation of humans, but every time a nuclear weapon is tested, the environment is irreparably damaged. What we have to do is raise public awareness of the dangers. We can push for the ratification of the Comprehensive Test Ban Treaty as soon as the negotiations are completed next year…

[Apart from the Comprehensive Test Ban Treaty] … we must also enact a law or treaty that ensures nations which possess nuclear weapons will never use them against nations that do not. Such a treaty will ease the concern of nations that do not have nuclear weapons and, hopefully, lessen the incentive to initiate a nuclear-weapons program.

We must also have strict control over the materials required to produce nuclear weapons. Those countries currently trying to develop nuclear weapons feel they are not given equal consideration in international politics. So, on the one hand, we need strict control over nuclear materials, and on the other we have to address the needs and concerns of all the nations of the world in equal measure.

In the United States and Japan, there was a great deal of controversy over a commemorative stamp that was to be issued by the U.S. Postal Service. The stamp, which was never issued, featured a painting of an atomic mushroom cloud accompanied by the caption “Atomic bombs hasten war’s end, August 1945.” What kind of message do you think it sends to people around the world?

I have many reasons to believe that the statement “Atomic bombs hasten war’s end” is simply not true. By August 1945, Japan had neither the ability nor the will to continue waging war. The Japanese government was trying to find a path to peace as early as the spring of 1945. I believe the U.S. government was aware of this when it decided to drop the atomic bomb.

If the United States had wanted only to end the war, it did not have to use nuclear weapons. The United States possessed more than enough conventional weapons to destroy Hiroshima and Nagasaki and to end the war. There are many different opinions as to why the U.S. government decided to drop atomic bombs on the cities of Hiroshima and Nagasaki.

I would like to leave the answer to the scholars, but I do have a question. In 1945, President Harry Truman said that dropping the atomic bombs saved 250,000 to 500,000 lives. In 1985, President Ronald Reagan said dropping the atomic bombs saved one million American soldiers’ lives. In 1991, President George Bush said that several million lives were saved as a result of the atomic bombings. I wonder what this change means? I understand that the U.S. government uses these figures to justify the bombings, and that once a government has committed itself to a certain policy or decision, it does not want to change its stand.

But why do the numbers keep rising? Before the atomic bombs were dropped, many U.S. officials, including military personnel, argued that the bombings were not needed to end the war.

What is your reaction to the Smithsonian Institution’s decision to scale back its controversial Enola Gay exhibit at the National Air and Space Museum in Washington D.C.?

A couple of years ago the Smithsonian Institution had an extensive exhibition on World War II that included an exhibit on the plight of the Japanese Americans who were interned during the war. Due in part to this exhibit, the U.S. government admitted that its policy was a mistake and compensated the surviving Japanese Americans who had been interned. This led me to believe that the people at the Smithsonian Institution were committed to historical accuracy.

Now, the Smithsonian has yielded to political pressure and has missed an opportunity to thoroughly examine the history surrounding the bombings. I am disappointed with the Smithsonian’s decision, as are many people of conscience in this world. We could spend hours talking about whether the atomic bombings of Hiroshima and Nagasaki were justifiable or not, but such discussion is futile—it happened 50 years ago. What we have to do now is learn from the experience and make sure it never happens again. We are not asking for an apology.

Some Americans say that we are trying to make ourselves look like innocent victims, that we are indulging in our grief in an attempt to diminish the atrocities committed by the Japanese military during the war.

Nothing could be further from the truth. As the mayor of Hiroshima I acknowledge that the Japanese military carried out a war of aggression and committed many atrocities. I have personally done a lot of soul-searching on this subject and have publicly apologized to those who suffered at the hands of the Japanese military. I would also like to add that if nuclear weapons are not abolished, the horrors that Hiroshima and Nagasaki experienced will be experienced by others. The question is not if but when it will happen. The nuclear weapons that exist today are tens of thousands of times more powerful than the atomic bombs dropped on Hiroshima and Nagasaki. Obviously, a tragedy brought about by a nuclear war today would be far greater than the tragedies of Hiroshima and Nagasaki.

Why do you think the Smithsonian Institution decided against displaying photographs of and items belonging to the victims of the bombings?

It seems many Americans are not willing to face the reality of what happened after the bomb was released, but for humanity that is where the lesson begins. Hiroshima’s mission is to let the people of the world know what happened on the ground—what happened to the people of our city. I first came to Hiroshima in September 1945, so I remember very well the devastation and the initial rebuilding of the city. What moved me most was the strength of the survivors. That strength has evolved into a determination to prevent others from experiencing the horrors of nuclear war.

They feel it is their duty to make a constant appeal for world peace and nuclear disarmament. This is their mission, and with this mission they have overcome their tragedy. They do not harbor any hatred toward the American people. Instead, they have chosen to work for peace. The people of Hiroshima have come to understand what peace means for the world. And as the mayor of Hiroshima, I am very proud of them.

A version of this article was originally published on the author’s Substack newsletter, ‘The First Person’.


Oppenheimer Did Not Stop at Building a Nuclear Bomb. He Also Pushed for its Use in Japan.

Christopher Nolan’s movie is skilfully made, but its effect is to place a glossy veneer on the ugly reality of the US imperial project and the complicity of those who contribute to it.

Today, August 6, marks the 78th anniversary of the destruction of Hiroshima by an atomic bomb dropped on the city by the United States during the Second World War.

Oppenheimer is a well-made movie. It portrays Oppenheimer’s personal struggles, and draws attention to the dangers of a nuclear arms race. However, the movie fails to emphasise that during the Second World War, Oppenheimer did not restrict himself to the problem of building a nuclear bomb. He advocated for its use in Japan, over other possible options, and played a role in planning its delivery so that it would take as many lives as possible.

In May 1945, just days after Germany’s surrender, a committee comprising a number of scientists and some military officials convened to discuss possible targets for the bomb. The movie alludes to Oppenheimer’s involvement with this process but, contrary to what is suggested there, Oppenheimer was not a marginal participant; the committee met in his office and he was the one who set out the agenda. The meeting’s summary reveals how the committee calmly considered the most effective possible destruction of various cities.

Kyoto was placed on top of the list and ranked as an “AA target”. The committee noted that it had “a population of 1,000,000 … and many people and industries are now being moved there as other areas are being destroyed”. It emphasised that “Kyoto has the advantage of the people being more highly intelligent and hence better able to appreciate the significance of the weapon.” Even by Orientalist standards, this argument was extraordinary: did the committee seriously believe that those in other parts of Japan were too dull to feel the terror of a nuclear bomb?

As the movie notes, Kyoto was spared because of the intervention of the US secretary of war, Henry Stimson. Next on the target committee’s list was Hiroshima, which was also rated as an “AA target”. The committee observed that “it is such a size that a large part of the city could be extensively damaged. There are adjacent hills, which are likely to produce a focusing effect which would considerably increase the blast damage.”

A month later, a group of scientists led by James Franck, and including the physicist Leo Szilard, compiled a prescient report that analysed the dangers of an arms race, and the possibility of an international agreement to control nuclear weapons. It advised the US government not to “be the first to release this new means of indiscriminate destruction upon mankind”. Instead it recommended that “nuclear bombs…[be]…first revealed to the world by a demonstration in an appropriately selected uninhabited area”.

Within a few days, the scientific advisory panel to the “interim committee” — the apex wartime body on nuclear issues — dismissed the Franck report, stating that “we see no acceptable alternative to direct military use”. Oppenheimer signed the memo, titled “recommendations on the immediate use of nuclear weapons”, on behalf of the four-member scientific panel.

Szilard then drafted a petition to the US president. The petition urged the president “to rule that the United States shall not resort to the use of atomic bombs in this war unless the terms which will be imposed upon Japan have been made public in detail” and consider “all the other moral responsibilities which are involved”. This was so reasonable that even the hawkish physicist Edward Teller agreed with it. Oppenheimer not only refused to sign the petition — as a scene in the movie shows — he prevailed on others at Los Alamos, including Teller, to withhold their signature.

Soon after the war, in 1949, Oppenheimer testified before the House Un-American Activities Committee. This was separate from his later hearings before the US Atomic Energy Commission (AEC) that provide the setting for the movie. In 1949, the committee was deferential to Oppenheimer and he, in turn, freely denounced a number of his associates, including his former student Bernard Peters. Peters was eventually forced from the United States, and spent several years in Mumbai at the Tata Institute of Fundamental Research before moving to Denmark.

Oppenheimer did argue for a more rational nuclear policy after the war. His perspective was not rooted in a principled opposition to US hegemony or in a desire for a more equitable world order. His argument was simpler: “looking ten years ahead, it is likely to be small comfort that the Soviet Union is four years behind us” in developing an atomic arsenal. “Our twenty-thousandth bomb…will not in any deep strategic sense offset their two-thousandth”. Even this was unacceptable to sections of the US establishment and eventually led to his political downfall.

The AEC’s decision to revoke Oppenheimer’s security clearance in 1954 meant that he ceased to be a formal government advisor. But Oppenheimer remained the director of the Institute for Advanced Study at Princeton and a privileged member of US society.

Although this author is not an expert on Oppenheimer’s life, it is clear that he possessed a complex personality. Perhaps Oppenheimer’s actions should be viewed within the framework of the “banality of evil”. As an ambitious individual seeking advancement within the US system, he made repeated compromises and lost sight of the true nature of the US military establishment.

For this reason, the movie’s most problematic aspect is not that it is overly sympathetic to Oppenheimer. Rather, by glorifying the Manhattan project and the US victory in the technical race to build the bomb, it obscures the enormity of the crime committed by the Truman administration when it bombed Hiroshima and Nagasaki in August 1945.

The bombs killed hundreds of thousands of people, the vast majority of whom were innocent civilians. A grim indicator of how many children lost their lives is that when scientists sought to estimate the toll of the bombings, they relied on school records to calculate a statistical mortality rate in the general population.

US apologists have consistently sought to justify these acts by arguing that a land invasion of Japan might have been even more brutal. Some historians have questioned the veracity of such claims. But this is a false dichotomy, since these were not the only two options before the US government. And accepting these terms of discourse leads to a blind alley where one is forced to debate a counterfactual scenario and rely on internal US military sources that are not neutral.

A simpler question can be used to form an ethical judgment: “Did the Truman administration do everything possible to save lives and seek less violent alternatives to the atomic bomb?” Even the limited discussion presented above, which forms a small part of the voluminous historical record, shows that the answer is negative.

A different perspective is provided by Szilard’s recollection of his meeting with James Byrnes, who was Truman’s secretary of state at the time of the bombings. When Szilard tried to caution Byrnes against using the bomb, Byrnes explained that “Russia might be more manageable if impressed by American military might, and that a demonstration of the bomb might impress Russia”. This implies that the bomb was used by the United States, not out of necessity, but to establish its geopolitical dominance.

An examination of postwar US policy bolsters this view, since it shows how readily the US government is prepared to use violence and terror in pursuit of its strategic objectives. When the United States attacked Southeast Asia, it dropped millions of tons of bombs and this intervention led to the loss of millions of lives. The invasion of Iraq began with a “shock-and-awe” campaign that explicitly sought to achieve the “non-nuclear equivalent of the impact that the atomic weapons dropped on Hiroshima and Nagasaki had on the Japanese”. This invasion led to hundreds of thousands of deaths in Iraq.

Hollywood is well known for glorifying US wars. Oppenheimer is directed more skilfully than most movies, and its message is more subtle. But, ultimately, its effect is to place a glossy veneer on the ugly reality of the US imperial project and the complicity of those who contribute to it.

Suvrat Raju is a theoretical physicist with the International Centre for Theoretical Sciences (Bengaluru). The views expressed are personal and do not reflect those of his institution.

The article provides links to and details of online and offline resources, wherever possible, and the author encourages readers to follow them. 


Anurag Thakur’s Fuming Reaction to Oppenheimer Is Simply Absurd

Self-proclaimed saviours of Hinduism are unclear about what they are fighting against. Just loudly proclaiming ‘insult’ is vague and, indeed, self-defeating.

Did Julius Oppenheimer quote from the Bhagavad Gita while having sex? And if he did, is that an ‘insult’ to Hindus? And if it indeed is, should the Central Board of Film Certification, which has passed the film as fit to be screened, cut those scenes after it is in the theatres? These are the vexed questions on which some trolls and a few persons in high positions have given their considered views which can be summed up simply as Yes, yes and yes.

Among these personages is Anurag Thakur, minister for information and broadcasting, under whom the CBFC functions. The minister surely knows that once the censor, as the CBFC is called, has permitted the screening of a film, he should stay away. Instead, he is threatening the organisation with stringent action. He hasn’t explained, however, what exact action he will take ― perhaps he will have the ‘traitors’ shot? Whatever the case, he is out of order here.

While the trolls and Thakur say this scene is anti-Hindu, they have not explained the reasons. What exactly is it that they find objectionable? Is it the fact of sex, that Oppenheimer and his lover Jean Tatlock are in bed and soon proceed to make love? Which, by the way, has not been shown. Is it that he reads from the Gita? Or that he actually picks up the book while they are in bed, which is somehow sacrilegious?

Because, that would raise further questions – is the Gita (or indeed Hinduism) anti-sex? Is the act of sex dirty and impure? Or is the conjunction of the two somehow anti-Hinduism?

This, then, is the problem. All these self-proclaimed saviours of Hinduism are unclear about what they are fighting against. Just loudly proclaiming ‘insult’ is vague and, indeed, self-defeating. Laughable, really. To any sensible person, it shows a perverse mind at work, which finds anything to do with sex filthy.

This is not just a problem with those who have objected to Oppenheimer. The producers have already blacked out scenes in the film which are remotely nude or ‘sexual’ to secure a release here. Our own censors have added that ridiculous disclaimer about smoking. Plus, our government is actively considering censoring ‘vulgar’ content in films and shows on OTT platforms. Why this prudishness, at a time when everyone has access to all kinds of stuff on the Internet? Unless there is a plan to censor that, too.

That Oppenheimer had studied Sanskrit and had read the Gita deeply is well documented. One would have thought that the Hindutva types, who never resist an opportunity to proclaim from the rooftops that India (i.e., Hindu India) was superior to all others, would have been overjoyed at this. Add to that the fact that he invented the atom bomb (though it was, of course, first made in India several centuries ago), a destructive device that would have fed right into their masculinist and militaristic fantasies. They would then have rushed to see the film in large numbers. Instead, they are carping about the use of the Gita in a sex scene?

Those interested in good cinema are heading to theatres to see this film and coming back impressed — at the scale, the story and most of all, the lessons the film holds. That the father of the most destructive weapon created till then, which killed lakhs of Japanese, is filled with doubt about his invention. Nuclear devices were supposed to create peace for all time — they didn’t, as we have seen in the war-filled decades since then.

Oppenheimer’s self-doubts turned him into an object of suspicion in the American security establishment. He was suspected of being a communist, the worst crime in their eyes in the years just before the Cold War began — the FBI kept tabs on him and his top security Q clearance as the man working on the bomb was revoked in 1954; the decision was nullified only in 2022, long after he had died.

The so-called insult has not caused any comment from sensible viewers, but only a handful of trolls who seem to have nothing better to do have got hot and bothered. A bit of advice to Thakur et al ― go and see the film, take in the spectacle that Christopher Nolan has created, see how well everyone has acted, and think deeply about the message of the film, instead of trying to burnish your Hindutva credentials by threatening a statutory body like the CBFC. It just makes you look absurd and silly.

The Many Sides and Dilemmas of ‘Oppenheimer’, Father of the Atomic Bomb

Director Christopher Nolan’s thoughtful film is full of familiar stars, but it is Cillian Murphy as the protagonist who gives it depth.

That Oppenheimer is director Christopher Nolan’s most ‘grown-up’ work in years is something we sense from the opening scene, when the protagonist – an astoundingly good Cillian Murphy – stares at ripples in a puddle. The ‘father of the atomic bomb’ is probably thinking about legacy. And yet, nothing quite prepares us for a scene when an unclothed Oppenheimer is slinkily seated on a sofa with his legs crossed, across from his lover Jean (Florence Pugh) in a hotel room.

It’s a flashback linked to a hearing, where the pioneering scientist is being grilled about his past ‘transgressions’ while being vetted for security clearance. Oppenheimer was married to Kitty (Emily Blunt) at the time, and Jean was known to be a card-carrying communist – a damning association on the brink of the Cold War. The hearing seems determined to discredit the man behind the Manhattan Project.

The camera slowly pans from behind a character seated at the hearing in a crisp suit, and we see a stark-naked Oppenheimer in front of the committee. We soon realise we’re watching the scene from the point of view of Kitty, who is seeing her husband stripped of all dignity. She imagines Jean seated on his lap, staring directly at her. It’s an ingenious and confident swing by a director who hasn’t filmed a sex scene or nudity in nearly two decades.

Nolan – one of the most hotly debated filmmakers among cinephiles, revered for his big-canvas ideas and simultaneously reviled for writing himself into corners – is on solid ground in this biopic of one of the most important figures of the 20th century. Adapted from Kai Bird and Martin J. Sherwin’s Pulitzer winner, American Prometheus: The Triumph & Tragedy of J. Robert Oppenheimer, this is arguably Nolan at his most socially conscious. Like most of his heroes, obsessive men each of them, Oppenheimer is mined for the haunted man he was, long before he gave humans the power to become destroyers of the world. When he’s asked about his time studying among the world’s greatest scientific minds in Europe, he testifies to being homesick, terrible in the laboratory, and unable to sleep because of visions of ‘another world’. Nolan’s filmmaking is dynamic in this part – filling the screen with wondrous images.

He tells the story of Oppenheimer primarily through two sets of hearings – one where the scientist is asked about his alleged links to the Communist party, while his colleagues and friends are made to testify to his patriotism; the other a congressional hearing in which Lewis Strauss (Robert Downey Jr.) is vetted for a Cabinet position and asked about his dynamic with Oppenheimer over the years. Nolan differentiates the two timelines by showing Oppenheimer’s hearing and flashbacks in radiant colour, while Strauss’s flashbacks and hearing appear in sharp black-and-white. It’s an efficient way to distinguish the two, and it gives the visuals a rhythm.

Oppenheimer is a busy film with familiar faces in abundance. Downey Jr., who spent the better part of the last decade playing the smartest man in the room in most of his films, plays a slimy politician with a chip on his shoulder. Matt Damon, who had a sensational special appearance in Interstellar (2014), gets a sizable role in this one as Lt. General Groves, who recruits Oppenheimer to be the director of the Manhattan Project. Damon’s portly physique and self-assured tone are nicely at odds with Cillian Murphy’s frail build and high-powered rebuttals. As Jean, Florence Pugh burns bright like a shooting star for the few minutes she is on screen. Emily Blunt as Kitty is ferociously no-nonsense, which reminded me of Claire Foy’s turn as Janet Armstrong in First Man (2018). Casey Affleck is chilling in a cameo as Boris Pash – in charge of security for the Manhattan Project – sneaking up on his subjects with his boyish, unassuming manner before quietly trapping them.

The film, however, belongs to Cillian Murphy, who showcases the many sides of Oppenheimer’s personality. A brilliant scientist, he is equally immersed in social justice. We’re told he contributes a portion of his paycheck to help German colleagues escape the Nazi regime. He’s the “mayor” and “sheriff” of the town built around the Los Alamos laboratory in New Mexico, but sobs like a teenager after being delivered bad news about a loved one. He’s curious and ambitious about accomplishing something dangerous, because, he says, it’s better that he discovers it than the fascists in Germany, or Russia. However, he’s also torn about where this game of one-upmanship stops. A colleague accuses him of having become a ‘politician, who has left science far behind’, while trying to convince him to be their voice in the ears of the bigwig politicians in D.C., who haven’t fully grasped the direction in which they’re taking the world.

As much as Oppenheimer is a showcase for its lead actor and director, the film also benefits tremendously from Ludwig Göransson’s terrific score and Richard King’s excellent sound design – especially in three of the biggest scenes that Nolan sets up: the Trinity test, the scene in which Kitty is summoned to testify about her husband’s character, and the ‘victory’ speech that Oppenheimer has to find his way through after the bombings of Hiroshima and Nagasaki are announced as a ‘success’. Göransson’s score is equal parts delicate and muscular, creeping up on the visuals or otherwise enhancing them in every which way. King’s impeccably designed silence conveys the magnitude of the Trinity test without our having to hear it.

It’s probably for the best that Nolan fell out with his long-time producers, Warner Bros., after the botched experience of releasing Tenet (2020) in theatres. It resulted in every major studio and streaming service lining up outside the writer-director’s house for Oppenheimer. This has given us Nolan’s most unambiguously personal film in a long time, in which he takes the life of a scientist who was treated like a prophet during a World War, and later demoted to a mortal as soon as he began casting doubts on his own breakthrough. Gary Oldman, as President Harry Truman, is outstanding in his one scene, becoming the face of an establishment that winces at Oppenheimer’s “cry-baby” attitude when he asks it to scale back nuclear weapons.

One of the most stirring things about this excellently dense and jumpy biopic is that Nolan never tries to reconcile his subject’s contradictions. If anything, he realises the futility of trying to know the unknowable. Who was J. Robert Oppenheimer? A film can make an educated guess based on recorded actions and anecdotes. There will always be gaps, and Nolan doesn’t pretend otherwise. Instead, he chooses to grapple with a larger question: what does Oppenheimer’s predicament mean for the generation of today? Only our lives depend on it.

The Joker’s Origin Story Comes at a Perfect Moment: Clowns Define Our Times

With the rise of pompous leaders who gain support through boisterous antics and lofty ideals, the Joker’s plot is but a reflection of the society that we live in today.

The joker, the trickster, the jester, the provocateur – there is a rich cultural history of these roles going back at least as far as Greek mythology’s Hermes.

One of the most famous jester figures of the modern age is the Joker, who made his debut in the first issue of Batman comics in 1940.

Joker’s first comic book appearance can be dated to this April 25, 1940 issue of Batman #1. Photo: Wikimedia Commons

The Joker is funny, cool, and refreshingly intelligent. He is also back in theatres next month in the aptly named Joker, which this week won Best Film at the Venice Film Festival. As Batman’s arch-nemesis, the Joker offers a reprieve from the less interesting narcissistic, angst-ridden histrionics of the hero. The Joker’s punishment of society is often comical, and his relentlessly ironic spirit of rebellion contrasts with Batman’s dour moral self-righteousness.

The cultural provocateur

In a deck of cards the joker is (most of the time) formally useless. The two joker cards are omitted from most games, yet the deck is incomplete without them.

The joker is a necessary non-card, the exception that glues together the rest of the pack. A card of shifting rank and use, the joker offers a spark of improvisation within a rigidly hierarchical order.

Culturally, the joker reaffirms the social order through his lampooning of it, turning socially significant places into spaces of carnival and clowning, revealing the comical and absurd cracks in a spirit of anarchic play.

There are many of these self-styled “maverick” figures in global politics today, who strategically position themselves as somehow outside of the power structures they in fact serve to reproduce.

The card offers ‘a spark of improvisation’. Photo: Wikimedia Commons, CC BY

Yet this role has always been intimately tied up with the institutions it appears to subvert. The court jester, for example, functioned in part to legitimise the social order. He maintained a performative relationship with the people, but his acts of subversion of power reaffirmed its very boundaries in the first place.

The words and actions of such provocateurs flirting with the boundaries of social good taste and etiquette should always be taken with a grain of salt. Power can reproduce itself in multiple ways – including through its apparent critique.

1989: Wackiness with a nasty edge

Within the Batman franchise, the most effective characterisations of the Joker have him tottering dangerously between comedic whimsy and psychopathic sadism – that liminal space in which, arguably, all great comedy occurs.

Perhaps the greatest actor to portray the role is Jack Nicholson in Tim Burton’s Batman (1989). Nicholson’s Joker embraces the wackiness of Cesar Romero’s earlier interpretation in the 1960s TV series but adds a genuinely nasty edge, and this combination of colourful zaniness with lethal brutality makes for a disturbing experience for the viewer.

Also read: The Psychology Behind Why Clowns Creep Us out

“I make art until someone dies,” Nicholson’s Joker says to journalist Vicki Vale (Kim Basinger) in an art museum after he and his goons have defaced several pieces whilst bopping along to Prince.

“See, I am the world’s first fully functioning homicidal artist.”

Screen grab from the movie Batman (1989) shows Jack Nicholson as the Joker pulling out his iconic long-barrelled revolver.

By the late 1980s, Nicholson, appearing as the perfect sleazeball in films like The Witches of Eastwick (1987), was the man behind some of the most hated characters in cinema. He was, thus, perfectly cast as the Joker – it helps that the Joker’s demonically twisted face isn’t that far from his own.

Nicholson received first billing in Batman and, as Roger Ebert commented, the viewer’s tendency is to root for the Joker over Batman. It is this ambiguity that makes Burton’s film so compelling.

2008: Why so serious?

Heath Ledger’s Joker from The Dark Knight (2008), for which he received a posthumous Best Supporting Actor Oscar, was virtuosically full-bodied. Ledger is eerily, vitally intense. Yet the famous question he asks in the film – “Why so serious?” – could easily be turned back on Ledger’s own performance.

Screen grab of Heath Ledger’s Joker in The Dark Knight (2008). The movie provided a fresh and realistic take on the infamous villain and went down as one of the best comic book movies to date.

Ledger endows the role with a psychological realism that, paradoxically, makes for a less interesting (and less complex) experience for the viewer than more ambiguous portrayals.

The uncomfortable mixture of the comical and the sadistic is what makes the character perennially appealing – we never know which Joker we will be getting at any time. Ledger, by making the character “real”, turns him into, merely, a rather humourless creep.

2017: Caught in a bad bromance

The symbiotic nature of the relationship between Batman and the Joker usually remains unexplored. Wonderfully, The Lego Batman Movie (2017) puts this relationship centre stage.

The film follows the Joker (Zach Galifianakis) as he tries to get Batman (Will Arnett) to admit that he needs the Joker as much as the Joker needs him. Batman refuses to acknowledge the bond the two share throughout most of the film; when he finally does, their bromance can fully mature.

Screen grab from a Turkish Airlines in-flight safety video featuring Batman and the Joker from The Lego Batman Movie. Photo: YouTube

2019: A mental deterioration

The latest version of the Joker is played by Joaquin Phoenix, an actor whose career has oscillated between the absurdly intense (Walk the Line) and the disarmingly clownish (I’m Still Here). Todd Phillips’ film promises to revitalise the character in an origin story following down-on-his-luck comedian/clown Arthur Fleck, who transforms into the Joker as his mental health deteriorates.

Early reviews have praised the film’s representation of the current political landscape. Time Out calls it a “nightmarish vision of late-era capitalism”, and IndieWire suggests it is “about the dehumanising effects of a capitalistic system that greases the economic ladder”.

In the context of the incel movement – in which men rally around the perception of their own unjust victimhood – a narrative of a violent folk hero forming through the failure of his dreams of celebrity glory seems strikingly poignant.

Also read: ‘Joker’ Wins Best Film at Venice Film Festival

The frequency with which mass shootings now occur in America (in 2012 James Holmes killed 12 people at a screening of The Dark Knight Rises in Aurora, Colorado) has also led to concerns about how the story will be read. The same IndieWire review criticised the film as “a toxic rallying cry for self-pitying incels”.

Given the necessity for a law and order stalwart against which the Joker can launch his antics, it is notable that there is no Batman in this film. Will the Joker be able to sustain a feature-length narrative on his own?

Screen grab from the trailer of the movie Joker. Courtesy: YouTube

Send in the clowns

Clownish figures seem to be becoming the new normal in professional politics. In April, comedian Volodymyr Zelensky was elected president of Ukraine. The UK’s new prime minister, Boris Johnson, has been dubbed “BoJo” by the press – and they’re not just alluding to his name.

Much of the popularity of Trump has emerged from his presentation of himself as an outsider to the elite willing to lampoon and ridicule power – never mind that, as a rich New York City businessman, he is power personified.

The broader significance of this phenomenon is a little trickier to diagnose. It makes sense that, in an age when everything is valued in terms of its entertainment function (and when most people are aware of the common sleights of hand of the mainstream media they consume), clownish reality TV stars, provocateur comedians and gregariously sleazy entrepreneurs would amass unprecedented levels of power in the public domain.

Politicians entertain us by donning the outfit of the jester and making fun of politicians.

Perhaps this reflects a more widespread public cynicism regarding professional politics, or perhaps it is simply a reflection of a desire to be perpetually distracted by entertaining clowns.

At any rate, the film should be a hoot to watch.

Joker will be released in India on October 4.

Ari Mattes, Lecturer in Media Studies, University of Notre Dame Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Is Acting Hazardous? On The Risks of Immersing Oneself in a Role

One particular myth that attached itself to Heath Ledger was that his death was somehow a result of immersing himself in the character of the Joker in Christopher Nolan’s ‘The Dark Knight’.

In 2009, Heath Ledger posthumously received an Academy Award for his performance as the Joker in Christopher Nolan’s film The Dark Knight (2008). To say that Ledger earned the recognition of his peers is to vastly understate his accomplishment. Ledger’s unflinching and disquieting performance as an anarchic sociopath – ostensibly, he played a comic-book villain, but his performance far transcended the source material – earned near-universal praise from critics and audiences alike.

By the time filming wrapped up, Ledger had completed his professional transition from ingénu to serious actor. As his final director, Terry Gilliam, remarked – “I think we all thought that this was somebody, without a doubt, who was going to be the greatest actor of his generation.”

During post-production, Ledger, who reportedly suffered from insomnia, accidentally overdosed on sleeping pills and died, aged 28.

In the wake of Ledger’s untimely death, his performance – and the events leading up to it – were voyeuristically scrutinised. His dedication to the craft of acting was well-known, as were rumours of his ill-health during filming.

He prepared obsessively for the role of the Joker, isolating himself from public life to ‘galvanise’ the character in his own mind. And he said that his work took its toll on his sleep. So perhaps it’s unsurprising that his performance was mythologised and his cause of death psychologised. To put it cynically – people like a good tragedy.

One particular myth that attached itself to Ledger was that his death was somehow a result of immersing himself in the character of the Joker. The idea is that Ledger’s battle with insomnia was rooted in some sort of existential angst – an angst borne of ‘becoming’ an abhorrent character. Film critics stoked various versions of this narrative.

David Denby of The New Yorker wrote:

“As you’re watching [Ledger], you can’t help wondering … how badly he messed himself up in order to play the role this way. His performance is a heroic, unsettling final act: this young actor looked into the abyss.”

Christopher Orr of The New Republic added:

“Even without Ledger’s death, this would be a deeply discomfiting performance; as it is, it’s hard not to view it as sign or symptom of the subsequent tragedy.”

And, on the day of Ledger’s death, The New Yorker’s Richard Brody mused –

“As we remember Ledger, it’s worth recalling the agonies that actors, from amateurs to stars, have to pull from their guts.”

Comments like these seriously misconstrue the nature of character immersion – a misunderstanding that begins with the idea that actors ‘lose themselves’ in character or ‘forget’ who they are. Supposedly, this is especially true of method actors, who are trained to become ‘at one’ with their role.

Also Read: Christopher Nolan’s Vision of ‘Future of Film’ May Not Hold True for India

There’s a grain of truth to this talk, but merely a grain. To see why, consider a theoretical model developed by cognitive scientists Shaun Nichols and Stephen Stich designed to help make sense of the act of pretending. Nichols and Stich invite us to think of our minds as collections of boxes. Each box represents a different type of propositional attitude toward a sentence.

For example, if you believe that Bigfoot exists, your Belief Box contains ‘Bigfoot exists’; if you desire that your crush likes you back, your Desire Box contains ‘my crush likes me back’; and so on. Nichols and Stich add a ‘Possible World Box’, which contains things you neither believe nor desire, but simply think. Thus, if you think that grass is blue, your Possible World Box contains ‘grass is blue’; and if you pretend that you are a hermit crab, your Possible World Box contains ‘I am a hermit crab.’

I recently extended this model by looking at situations where character immersion comes into play. When you’re fully immersed in a character, you cognitively attend exclusively to statements your character would endorse. Your attention is fixed exclusively on your Possible World Box, and your Possible World Box contains only the beliefs and desires of your character.

For example, if and when Ledger was fully immersed in the character of the Joker, he consciously thought things such as ‘Chaos is beautiful’ or ‘Chance alone is fair,’ and he did not consciously think ‘I am Heath Ledger’ or ‘I am acting on a soundstage.’ In other words, Ledger attended only to his Possible World Box, paying no attention to his Belief and Desire boxes.

That’s the way that method actors ‘lose themselves’ or ‘forget who they are’. They don’t literally forget who they are, since their actual beliefs and desires remain the same. (Put in terms of the model: their Belief and Desire boxes retain their original contents.) However, fully immersed actors ‘forget themselves’ in the sense that they actively ignore facts about who they are, temporarily subordinating their own thoughts and feelings to those of their character. Actors forget their identities like stoners forget the quadratic formula. The information isn’t gone – just temporarily offline.
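To make the box-talk concrete, here is a minimal sketch in Python. This is my illustration, not Nichols and Stich’s formalism or anything from the article; the class and method names are invented for the example. It simply shows immersion as a shift of attention to the Possible World Box while the Belief and Desire boxes keep their contents.

```python
# Toy sketch of the "box" picture of the mind, extended to character
# immersion. Names and structure are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class Mind:
    beliefs: set = field(default_factory=set)         # Belief Box
    desires: set = field(default_factory=set)         # Desire Box
    possible_world: set = field(default_factory=set)  # Possible World Box
    attending_to: str = "beliefs"                      # where conscious attention sits

    def immerse(self, character_attitudes):
        """Full immersion: load the character's attitudes into the Possible
        World Box and shift attention there. The Belief and Desire boxes
        keep their contents -- nothing is literally forgotten."""
        self.possible_world = set(character_attitudes)
        self.attending_to = "possible_world"

    def fall_out_of_character(self):
        """Attention returns to the actor's own beliefs; the originals were
        never erased, only ignored."""
        self.attending_to = "beliefs"


# Example: Ledger-as-Joker, on this toy picture.
actor = Mind(beliefs={"I am Heath Ledger", "I am acting on a soundstage"},
             desires={"give a good performance"})
actor.immerse({"Chaos is beautiful", "Chance alone is fair"})
print(actor.attending_to)                      # -> possible_world
actor.fall_out_of_character()
print("I am Heath Ledger" in actor.beliefs)    # -> True: beliefs persisted throughout
```

On this toy picture, “falling out of character” is just attention snapping back to the Belief Box; nothing ever had to be erased or relearned.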

This way of thinking about character immersion has several advantages: it distinguishes immersion from delusion at the level of cognitive architecture; it countenances the phenomenon of falling out of character; and it explains how preparatory research can facilitate immersion. A similar model can be found in the works of Konstantin Stanislavski, creator of the ‘system’ that ultimately inspired method acting. But the model described here has a particular advantage: it accommodates actors’ talk of ‘getting lost in character’ without taking such talk too literally.

Misplaced fear about ‘staring into the abyss’ belies an oft-forgotten truth about acting: it’s fun. Even the most serious roles can be enacted with childlike joy; it is play, after all. Ledger himself said that portraying the Joker was ‘the most fun I’ve ever had, or probably ever will have, playing a character’. In our eagerness to honour the ‘serious actor’, let us not forget that Ledger, like all truly serious actors, played his part with joy, and graciously invited us to watch.

This article was originally published at Aeon and has been republished under Creative Commons.

First Light From the M87 Black Hole: What Are We Looking At?

How does a picture of a luminous ring around a black region square with the popular idea that black holes trap everything including light?

As everyone knows by now, the Event Horizon Telescope (EHT) has imaged a black hole in the galaxy M87. While astronomers already knew that black holes were real objects in the Universe, the picture provides direct ocular proof of their existence.

But what exactly are we looking at? How does a picture of a luminous ring around a black region square with the popular idea that black holes trap everything including light?

To understand this, let us step back a bit and ask what we mean by an image. It is true that black holes do not emit light. In this, they are no different from any other non-luminous object, like the reader and writer of this article. When you take a selfie, you use a flash to illuminate yourself. Selfies taken in total darkness are not the kind of image you can put up on Instagram. With a flash, the light bounces off your body and is received by the camera lens. More technically, we can say that light is scattered by your body into the camera lens and this is what produces the image you post on Instagram.

The same is true for black holes. Imagine that we transport ourselves magically to the vicinity of an isolated non-rotating black hole – taking care not to fall in – and remembering to take a camera along. If we snap a picture of the hole, it would be sheer black, nothing to write home about. This fits with the popular belief that you cannot “see” a black hole, precisely because it is black.

Also read: Stephen Hawking, the Cosmic Bard

If you use the flash on your camera, the picture changes dramatically. Light from the flash is emitted in all directions. Imagine a line going from your camera to the centre of the black hole. Light going along this central line of sight to the hole will be absorbed and not return to your camera. This also applies to rays that make a small angle with the line of sight to the centre.

Thus we expect a central dark region in the picture, the “shadow” of the black hole. Similarly, rays that are emitted at large angles to the line of sight will go nowhere near the black hole and disappear into space.

The rays in between these extremes are more interesting. Rays going at angles closer to the line of sight will be bent by the gravitational field of the black hole. If the angle is small enough, they will bend all the way around the black hole and return to the camera! Since all rays making the same angle with the line of sight will behave the same way, we can expect to see a ring of light around the black hole. As the rays get closer and closer in, one would find an angle at which the bent ray goes twice around the black hole and returns to the camera.

Thus, we get a series of concentric rings as shown in the image.

A black hole selfie taken with a flash. Credit: Joseph Samuel

The rings pile up near the innermost one, and for still smaller angles, we see only blackness. This is how a black hole scatters light. This is what a non-rotating black hole’s selfie would look like, if black holes were into selfies! Definitely worth posting on Instagram.
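A back-of-the-envelope calculation gives the size of this shadow as seen from Earth. The sketch below assumes the commonly quoted figures for the M87 black hole – roughly 6.5 billion solar masses, about 53 million lightyears away – and the textbook result that, for a non-rotating hole, rays with impact parameter below √27 GM/c² are captured.

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

M = 6.5e9 * M_sun            # assumed mass of the M87 black hole
D = 53e6 * 9.461e15          # assumed distance, in metres

# Rays with impact parameter below sqrt(27)*GM/c^2 fall in; this sets the shadow's edge
b_crit = math.sqrt(27) * G * M / c**2
shadow_angle = 2 * b_crit / D            # angular diameter of the shadow, in radians
print(shadow_angle * 206265 * 1e6)       # ~41 microarcseconds – the same scale as the
                                         # ~42-microarcsecond ring in the EHT image
```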

Astrophysical black holes are not isolated, but accrete matter from neighbouring stars. This matter falls into the gravitational field of a black hole, and – if it has angular momentum – circles around the black hole just the way the planets circle the Sun.

The gravity of the black hole pulls the gaseous stellar matter in, causing it to accelerate to near-light – or relativistic – speeds. Friction causes the gas to heat up and radiate energy. The hot disc radiates at all frequencies across the electromagnetic spectrum and this is the source of illumination that enables us to “see” the black hole.

Now, suppose we have a non-rotating black hole with an accretion disc in its equatorial plane and we view the hole from slightly above the plane. As we learned, we need only concern ourselves with rays of light that go from the light source (the accretion disc) to our eyes. In the absence of strong gravity and relativistic effects, we would expect to see a disc rather like the rings of Saturn. However, the relativistic speed of the swirling matter causes the radiation to be ‘beamed’: matter moving towards us radiates more strongly along our line of sight and appears brighter, while receding matter appears dimmer.
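To get a feel for how strong this beaming can be, here is a toy calculation of the relativistic Doppler factor. The orbital speed of half the speed of light is an assumed, illustrative number, and the cubic scaling of brightness is the standard approximation for a small emitting element.

```python
import math

def doppler_factor(beta, theta_deg):
    """Doppler factor for matter moving at speed beta*c, at angle theta to the line of sight."""
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    return 1.0 / (gamma * (1.0 - beta * math.cos(math.radians(theta_deg))))

beta = 0.5                                  # assumed orbital speed: half the speed of light
approaching = doppler_factor(beta, 0.0)     # matter heading straight towards us
receding = doppler_factor(beta, 180.0)      # matter heading straight away

# Observed intensity of a small emitting element scales roughly as the cube of the factor
print((approaching / receding) ** 3)        # ~27: the approaching side far outshines the receding side
```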

Further, some of the light emitted by the disc on the far side of the black hole would be bent toward us and we would see it as apparently coming from “above” the black hole. The final picture we see looks something like in the one below.

An artist’s impression of a black hole accretion disc (based on a simulation by Jean-Pierre Luminet). The brightness on the left is due to motion towards the observer. Credit: Roshni Rebecca Samuel

Armed with the understanding gained from these idealised situations, we can begin to understand the real image taken by the EHT. The black hole in M87 is rotating and, as a result, the picture is somewhat more complicated. But our qualitative understanding can still be brought to bear on the problem. If we trace the light that enters our eye backwards towards its source, it will go to the black hole, bend in the gravitational field and eventually end up at the source of light, in this case the accretion disc. Even if the disc is uniform, those parts that appear to be moving towards us will appear brighter due to beaming effects. We expect to find a ring of light with a dark spot in the middle, representing those rays that fall into the black hole.

The EHT collaboration performed a detailed simulation of a magnetised accretion disc around a rotating black hole. The researchers found its results agreed with their image.

Now, let’s clear up a few points that we glossed over. We talked of “seeing”, which colloquially means visible light is involved. In fact, the EHT works with radio waves whose wavelength is 1.3 mm. Eyes and cameras are replaced by a combination of telescopes spread all over Earth.

Also read: Look Behind the Low-Res Black Hole

The essential principle is still the same. The EHT uses a technique called very-long-baseline interferometry, or VLBI, to image the distant black hole with an angular resolution of a few tens of microarcseconds – a feat of both technology and science. It involves spectacularly accurate time-keeping developed over the last few decades, and a fundamental understanding of the wave-nature of light developed over the last few centuries.

So, are we looking at a black hole? Or is it an accretion disc seen through the distorting gravitational lens of the black hole? It is both, actually. When you take a selfie, you are looking at light from a flash scattered by the atoms of your body. To a physicist or an astronomer, there is no fundamental difference between light scattered from a black hole or from an atom. The disc is like the flash and the black hole is the scatterer.

Take a good look at the dark patch in the middle of the image. You are staring into the desolate darkness of the event horizon. Wow!

Joseph Samuel is a professor at the Raman Research Institute, Bengaluru.

A 200-Year-Old Experiment Has Helped Us See a Black Hole’s Shadow

With one fuzzy image, history has been made.

Note: At 6:30 pm (IST) on April 10, members of the Event Horizon Telescope (EHT) published the first direct image of a black hole, specifically the shadow of its event horizon. The EHT is a globally coordinated network of telescopes using which scientists achieved this feat. The black hole in question lies at the centre of a galaxy called M87, about 53 million lightyears away (towards the constellation Virgo in the night sky).

The EHT collaboration is also observing the black hole at the centre of the Milky Way, located at a point astronomers call Sagittarius A*. The separate location and local context of the two black holes aside, the underlying principles concerning their observation are the same. They are delineated below, first published on April 9, 2019.

§

While the black hole at Sagittarius A* is over 20 million km wide and weighs 3.5-4 million solar masses, it is also extremely far away: 26,000 lightyears. So astrophysicists who wanted to study it had a challenge: to find a way to view something the size of an idli on the Moon’s surface from Earth. They responded by developing the EHT. And the EHT solved their problem using a technique called VLBI, described below.
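The idli comparison holds up to rough arithmetic: using the figures quoted above for Sagittarius A*, and assuming an idli about 10 cm across, the two angular sizes come out nearly the same (the √27 factor is the standard lensed-shadow size for a non-rotating hole).

```python
G, c = 6.674e-11, 2.998e8
sgr_a_mass = 4.0e6 * 1.989e30              # ~4 million solar masses, as above, in kg
distance = 26_000 * 9.461e15               # 26,000 lightyears, in metres

shadow_diameter = 2 * (27 ** 0.5) * G * sgr_a_mass / c ** 2
print(shadow_diameter / distance)          # ~2.5e-10 radians

idli = 0.10                                # an assumed 10-cm idli
moon_distance = 3.844e8                    # metres
print(idli / moon_distance)                # ~2.6e-10 radians – about the same angular size
```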

On June 25, 2014, scientists announced the discovery of a trio of supermassive black holes at the centre of a galaxy 4.2 billion light years away. The find was credited to the European VLBI Network. A Space.com report said that this network “could see details 50-times finer than is possible with the Hubble Space Telescope”. How was this achieved?

VLBI stands for very-long-baseline interferometry. It is a technique used in astronomy to obtain high resolution images of the sky using a network of telescopes across the planet that can – with the aid of high-tech computing – come close to mimicking the sharpness of a hypothetical telescope nearly as large as the planet. It is commonly used to image distant cosmic radio sources, such as quasars, although it is also sometimes used to study stars.

The concept has its roots in Thomas Young’s famous double-slit experiment, which he conducted in 1801. When Young placed a screen with two extremely narrow slits in front of a light source, such as a burning candle, the pattern cast on the other side was not simply two bright patches. It was actually an alternating patchwork of bright and dark bands, as if the candle light had passed through multiple slits. This was the interference pattern. Young’s experiment was important in establishing that light travels as a wave, overturning Newton’s conviction that light was composed of particles.

The interference pattern

An illustration showing the double-slit experiment with electrons instead of light, although the principles are the same. Credit: Wikimedia Commons

When light passes through each slit, it diffracts, i.e. starts to spread out. In front of the slits, the diffracted waves meet and interfere. Where the crest of one wave meets the crest of another, the combined wave is higher than either, and casts a bright spot on the screen. Where a crest meets a trough, the two cancel each other, leaving a dark band on the screen – a shadow. (Where trough meets trough, the waves again reinforce each other and produce another bright band.) When the position of the slits is changed, the interference pattern also shifts.
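The spacing of these bands follows from the path difference between the two slits. The snippet below works the numbers for an assumed set-up – a 600-nanometre source, slits 0.2 mm apart and a screen one metre away.

```python
import math

wavelength = 600e-9        # metres, assumed red-ish light
slit_separation = 0.2e-3   # metres, assumed
screen_distance = 1.0      # metres, assumed

# Bright bands appear wherever the path difference is a whole number of wavelengths;
# in the small-angle approximation they are evenly spaced across the screen.
fringe_spacing = wavelength * screen_distance / slit_separation
print(fringe_spacing * 1e3)                # ~3.0 mm between bright bands

def intensity(y):
    # relative brightness a distance y (metres) from the centre of the screen
    path_difference = slit_separation * y / screen_distance
    return math.cos(math.pi * path_difference / wavelength) ** 2

print(intensity(0.0))                      # 1.0 – a bright band at the centre
print(intensity(fringe_spacing / 2))       # ~0.0 – a dark band halfway to the next bright one
```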

In VLBI, the candle is replaced by a distant source of radio waves, like a black hole. The slits are replaced by radio antennae on telescopes. Because the antennae sit at different points on the rotating Earth, each receives the same wavefront at a slightly different time. When these signals are allowed to interfere with each other, they produce an interference pattern that is processed at a central location to reconstruct what the source looks like.

Radio waves have longer wavelengths than visible light. So radio telescopes have an inherently poorer angular resolution than optical telescopes of the same size. Angular resolution is defined as the ratio of an emission’s wavelength to the diameter of the telescope receiving it. Qualitatively, it denotes the smallest separation between two points that the telescope can distinguish in the image, and engineers like it to be as low as possible. For example, at a wavelength of 1 cm, a 50-metre-wide radio telescope will have an angular resolution of 0.01/50 radians, i.e. ~41.2 arc-seconds. An optical telescope of the same size, observing at a wavelength of about a micrometre, will have an angular resolution of ~0.004 arc-second.
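These numbers follow directly from the wavelength-to-diameter ratio, as the small sketch below shows; the 1-cm and 1-micrometre wavelengths are the assumed values behind the figures quoted above.

```python
def resolution_arcsec(wavelength_m, diameter_m):
    # diffraction-limited resolution ~ wavelength / aperture, converted to arc-seconds
    return (wavelength_m / diameter_m) * 206265

print(resolution_arcsec(0.01, 50))   # 1 cm radio waves, 50 m dish    -> ~41.2 arc-seconds
print(resolution_arcsec(1e-6, 50))   # ~1 micrometre light, 50 m dish -> ~0.004 arc-second
```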

Also read: The DIY Experiment That Captures ‘All the Mystery of Quantum Physics’

In other words, the optical telescope will be able to view a feature 10,000-times smaller in its image than will a radio telescope of the same size. The question does arise: why don’t we simply view the black hole’s immediate surroundings in visible light then?

This is because the astronomical objects that do emit radio waves encode certain information in them that visible radiation does not carry. Additionally, radio waves of wavelength 1.3 mm – the ones the EHT tracks – are not absorbed or scattered by dust in the Milky Way or in Earth’s atmosphere, allowing antennas on the surface to capture them. But at this wavelength, resolving something as small as the black hole’s immediate surroundings would require a single dish antenna wider than Earth.

Fortunately, astrophysicists discovered in the late 1990s that the black hole’s prodigious gravity could be bending light ‘flowing’ near it towards itself, forming a gravitational lens that magnifies the apparent size of the hole about five times. In turn, this meant a telescope required to ‘look’ at Sagittarius A* would need to have a diameter of a few thousand kilometres. Believe it or not, this was much more manageable.

The baseline, and atomic clocks

The Giant Metre-wave Radio Telescope, Pune. A radio telescope’s antenna is its dish. Credit: NCRA/TIFR

Enter VLBI. Because there are multiple telescopes receiving the radio signals, the angular resolution of a so-called interferometric telescope is defined in a different way. It is no longer the ratio between the wavelength and the diameter of a single telescope. Instead, it is the ratio between the wavelength and the maximum physical separation between two telescopes in the array, called the baseline. If, say, the baseline is 1,000 km, the angular resolution at the same 1-cm wavelength becomes about 0.002 arc-second – already some 20,000-times better.
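The same one-line calculation, with the baseline in place of the dish diameter, reproduces these figures and shows what an Earth-sized baseline buys at the EHT’s observing wavelength; the Earth-diameter value below is an assumed round number.

```python
def resolution_arcsec(wavelength_m, baseline_m):
    # for an interferometer, resolution ~ wavelength / longest baseline, in arc-seconds
    return (wavelength_m / baseline_m) * 206265

print(resolution_arcsec(0.01, 1.0e6))            # 1 cm waves, 1,000 km baseline      -> ~0.002 arc-second
print(resolution_arcsec(1.3e-3, 1.27e7) * 1e6)   # 1.3 mm waves, ~Earth-sized baseline -> ~21 microarcseconds
```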

However, this technique couldn’t be implemented properly until the atomic clock was invented in the 1950s. Before these advanced timekeepers existed, a single shared clock had to be connected to multiple telescopes with cables, which limited the baseline to the amount of cable you had. With atomic clocks, telescopes could be placed on different continents, because each station could keep its own precise time, with the clocks synchronised using international protocols.

Also read: How We’re Probing the Secrets of a Giant Black Hole at Our Galaxy’s Centre

All together now: a telescope receives a radio signal, a computer sticks a timestamp on it and sends it to a central processing facility called a correlator. The correlator collates such data from different telescopes and creates the characteristic interference pattern. Using this pattern, a processor reconstructs the source of all the radio waves at different locations, together with the time at which each signal was received.

There are also many systems in between to stabilise and improve the quality of the signal, to coordinate observations between the telescopes, etc. But the basic principle is the same as in Young’s experiment two centuries ago.
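As a rough illustration of what the central processing step does, the sketch below simulates two stations recording the same noisy sky signal with a relative delay, and recovers that delay by cross-correlating the two timestamped streams. All numbers are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
true_delay = 37                                    # samples; the offset between the two stations

sky = rng.normal(size=n + true_delay)              # the common signal from the source
station_a = sky[:n] + 0.5 * rng.normal(size=n)     # each station adds its own receiver noise
station_b = sky[true_delay:] + 0.5 * rng.normal(size=n)

# The peak of the cross-correlation picks out the relative delay between the recordings
xcorr = np.correlate(station_a, station_b, mode="full")
lags = np.arange(-(n - 1), n)
print(lags[np.argmax(xcorr)])                      # 37 – the injected delay, recovered
```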

VLBI itself has been around since the 1960s. At first, it could detect radio waves with a wavelength of a few centimetres, and gradually moved to shorter and shorter wavelengths – or higher and higher frequencies.

§

A black hole’s shadow

Telescopes participating in the EHT experiment are shown in blue. Credit: ESO/O. Furtak, CC BY 4.0

The EHT itself has over 30 participating telescopes spread over the North and South Americas, Europe, the Pacific Ocean and Antarctica. Because of their need to work together and their varied geographical locations, the EHT can study the Sagittarius A* site only when there are clear skies over all these telescopes at the same time. This is about one week per year – which makes each observation very precious.

It is also notable that for all of its sophistication, the EHT is not capable of producing an image of the shadow of a black hole the way Christopher Nolan and Kip Thorne did for the movie Interstellar (2014). We are likely to see a few pixelated images tomorrow put together from radio data. However, and assuming that is indeed going to be the case, it will still be a landmark achievement and a significant moment in the history of humankind.

This infographic shows a simulation of the outflow (bright red) from a black hole and the accretion disk around it, with simulated images of the three potential shapes of the event horizon’s shadow. Credit: ESO/N. Bartmann/A. Broderick/C.K. Chan/D. Psaltis/F. Ozel

It would not have been possible without the stars and other bodies that died, torn apart by the black hole. The black hole would have accrued the ‘dead’ matter around itself, accelerating it to very high velocities and twisting it around in monstrous magnetic fields. This causes frictional heating that prompts the matter to emit high-energy radiation, such as X-rays. According to the experiment’s website, “The details of accretion mechanisms are still a very active area of research, and we hope that the images the EHT will take of the extreme environment of [Sagittarius A*] will help us understand them.”

Now, because the black hole bends light around itself, radiation from the part of the accretion disk behind the black hole will be visible to telescopes that are looking at its front. Finally, because the material in the accretion disk is swirling around – say, from left to right – an effect called relativistic beaming (a Doppler effect) will cause light from the side of the disk moving towards us, the black hole’s left in the infographic, to appear brighter, and of higher frequency, than light from the receding side. This will make the black hole at Sagittarius A* appear like in the infographic above. And the EHT uses VLBI to capture the black hole’s shadow against this light, this light created by the sacrifice of entire stars.

With thanks to Prajval Shastri, an astrophysicist at the Indian Institute of Astrophysics, Bengaluru, for extended inputs on the article.

Some portions of the text above were originally published as a post on the author’s blog in 2014.

Netflix’s ‘Trotsky’ Is a Sinister Rewriting of History

Watching ‘Trotsky’, one would believe that Trotsky was the shadowy mastermind of the revolution, hiding behind Lenin’s public image, the man who created Stalin as his “golem” and then lost control.

Watching the miniseries Trotsky, made by Russia’s Channel One in 2017, it is hard not to be reminded of Christopher Nolan’s entries into the Batman franchise, both in ideology and aesthetics.

In the first episode, we are treated to seeing Lev Bronstein, an idealistic and naïve revolutionary concerned with human rights, become the cold and devious Leon Trotsky, a man beguiled by power and fame, unconcerned by the amount of blood on his hands.

This transformation is facilitated by the other Trotsky, Nikolai, the chief warden at Odessa Prison, a classical Dostoevsky-styled reactionary who warns the future Trotsky, over a game of chess, that liberating the Russian masses would lead to an untold level of destruction of society, and that power, once claimed, can only be exercised through terror.

Also read: Year One of the October Revolution, One Hundred Years On

Trotsky is haunted by these words in his dark solitary cell and undergoes a terrifying metamorphosis, becoming, in his own words, the “greatest monster”, and putting on the pelt of his jailer by adopting the name Trotsky.

These scenes are highly reminiscent of Bruce Wayne’s transformation into Batman, the master of fear and darkness, under the tuition of the venerable Ra’s al Ghul. Both transformations belong to fiction. In his 1930 autobiography My Life, Trotsky attributes his choice of nom de guerre, written into a forged passport, to a completely random memory. The first volume of Isaac Deutscher’s colossal biography of Trotsky, The Prophet Armed, identifies the source of the name as originally belonging to a jailer, but one who was “obscure” and certainly not the chief warden of the Odessa Prison.

According to Deutscher, Trotsky’s actual relationship with the gendarme in charge of his interrogations while imprisoned in Odessa was one of mockery.

So, who in fact is the Nikolai Trotsky portrayed in the miniseries? Where do his words and “wisdom” come from? The answer can be found in current Russian official history. The personal view of Vladimir Putin on the October Revolution was summed up in a 2017 speech to teachers and students:

“Someone decided to shake Russia from inside, and rocked things so much that the Russian state crumbled. A complete betrayal of national interests! We have such people today as well.”

Officially, modern Russia walks a tightrope between two historiographies. Lenin’s tomb exists alongside shrines to the now canonised Romanov family; the Soviet Union is invoked as a great power, but never as the outcome of a mass uprising.

One of the more sinister parallels between the series and official historical discourse is the introduction of the figure of Alexander Parvus, a Russian-Jewish socialist and writer for exile publications such as Iskra.

While it is undeniable that Parvus was indeed a collaborator with German Military Intelligence, in the hope that the defeat of the Russian Empire in wartime would hasten socialist revolution in his homeland, Trotsky portrays Parvus via the antisemitic trope of Jews as bankrollers and profiteers of revolutionary chaos, in imagery pulled straight from the propaganda of the White Armies in the Russian Civil War.

Whereas Nikolai Trotsky teaches Lev Bronstein the art of ruthlessness and terror, Parvus is portrayed as teaching Trotsky manipulation and deception, creating his image as a professional revolutionary through new clothes, in order to “conceal his demons” and “appeal to the masses.”

In a scene set in 1918, we are shown Trotsky using such showmanship and deception to convince revolutionary soldiers to, as the script puts it, “kill fellow Russians” by gifting a soldier the watch off his wrist, only for it later to be revealed that Trotsky has a drawerful of similar watches. Trotsky then orders a regiment of men to be decimated by firing squad.

Within just the first 45 minutes, wildly antisemitic imagery from a long and terrible tradition of Russian reactionary thought has set the course for the rest of the series. Trotsky ultimately resembles a marriage of the antediluvian politics of aristocratic White Russian emigres with the contemporary populist blockbuster aesthetics of Zack Snyder or Christopher Nolan.

Also read: The October Revolution Is Now a Historical Footnote in Russia

Much as the contemporary far right have gone about rebranding themselves as “populists” opposed to “globalists” backed with “Soros money,” Trotsky has wrapped those politics in a matching populist aesthetic. This isn’t some meticulous historical epic in the Soviet tradition of Eisenstein, Bondarchuk, or Tarkovsky, but a lurid tale of supervillainy, enticing viewers with lashings of sex and violence.

As the series goes on, it becomes clear why the character of Trotsky was chosen as the focal point for a series released on the centenary of the Russian Revolution. Not because of a breaking of Soviet-era taboos, as has been suggested, but to create an image of Trotsky that can serve as a terrible scapegoat for a historical period that still raises uncomfortable questions in modern Russia.

Watching Trotsky, one would believe that Trotsky was the shadowy mastermind of the revolution, hiding behind Lenin’s public image, the man who created Stalin as his “golem” and then lost control, a man who made himself a monster, obsessed with power and control, surrounded by sex and death – and yet at the same time a puppet of an anti-Russian conspiracy.

Quietly, series producer Konstantin Ernst has admitted that the series is intended as a “semi-fictional” dramatisation, “based on” the character of Trotsky. It’s a rather sinister and reactionary fantasy, born out of the harsh political climate of contemporary Russia.

Benjamin Stephens is a historian currently working in education.

This article was published on Jacobin. Read the original here.