Coronavirus: The Lockdown Has Caught Us Between Expertise and Common Sense

Expertise has been humankind’s way to quickly make sense of a world that has only been becoming more confusing – but historically, expertise has also been a reason of state.

On March 27, Johns Hopkins University said an article published on the website of the Centre for Disease Dynamics, Economics and Policy (CDDEP), a Washington-based think tank, had used its logo without permission, and distanced itself from the underlying study. The study had concluded that the number of people in India who could test positive for the new coronavirus could swell into the millions by May 2020. Soon after, a basement of trolls latched onto CDDEP founder-director Ramanan Laxminarayan’s credentials as an economist to dismiss his work as a public-health researcher, denying the study’s conclusions without discussing its scientific merits and demerits.

A lot of issues are wound up in this little controversy. One of them is our seemingly naïve relationship with expertise.

Expertise is supposed to be a straightforward thing: you either have it or you don’t. But just as specialised knowledge is complicated, so too is expertise.

Many of us have heard stories of someone who’s “great at something even though he didn’t go to college” and another someone who’s “a bit of a tubelight despite having been to Oxbridge”. Irrespective of whether they’re exceptions or the rule, there’s a lot of expertise in the world that a deference to degrees would miss.

More importantly, by conflating academic qualifications with expertise, we risk flattening a three-dimensional picture to one dimension. For example, there are more scientists who can speak confidently about statistical regression and the features of exponential growth than there are who can comment on the false vacua of string theory or discuss why protein folding is such a hard problem to solve. These hierarchies arise from differences in complexity. We don’t have to insist that only a virologist or an epidemiologist is allowed to answer questions about whether a clinical trial was done right.

But when we insist someone is not good enough because they have a degree in a different subject, we could be reinforcing the implicit assumption that we don’t want to look beyond expertise, and are content with being told the answers. Granted, this argument is better directed at individuals privileged enough to learn something new every day, but maintaining this chasm – between who in the public consciousness is allowed to provide answers and who isn’t – also keeps power in fewer hands.

Of course, many questions that have arisen during the coronavirus pandemic have stood between life and death, and it is important to stay safe. However, there is a penalty to thinking that the closer we drift towards expertise, the safer we become – because then we may be drifting away from common sense and accruing a different kind of burden, especially when we insist only specialised experts can comment on a far less specialised topic. Such convictions have already created a class of people that believes ad hominem is a legitimate argumentative ploy, and won’t back down from an increasingly acrimonious quarrel until they find the cherry-picked data they have been looking for.

Most people occupy a less radical but still problematic position: even when neither life nor fortune is at stake, they claim to be waiting for expertise before changing their behaviour and/or beliefs. Most of them are really waiting for something that arrived long ago, and are only trying to find new ways to persist with the status quo. The all-or-nothing attitude of the rest – assuming they exist – is, simply put, epistemologically inefficient.

Our deference to the views of experts should be a function of how complex the subject at hand really is, and therefore of the extent to which it can be interrogated. So when the question is whether a clinical trial was done right, or whether the Indian Council of Medical Research is testing enough, the net we cast for independent scientists to speak to can include those who aren’t medical researchers but whose academic or vocational trajectories have familiarised them with some parts of these issues, and who are transparent about their reasoning, methods and opinions.

If we can’t be sure whether the scientist we’re speaking to is making sense, it would obviously be better to go with someone whose word we can simply trust. And if we’re not comfortable having such a negotiated relationship with an expert – sadly, it is always going to be this way. The only way to make matters simpler is to deliberately shut ourselves off: to take what we’re hearing and, instead of questioning it further, run with it.

This said, we all shut ourselves off at one time or another. It’s only important that we do it knowing we’re doing it, instead of harbouring pretensions of superiority. At no point does it become reasonable to dismiss anyone based on their academic qualifications alone the way, say, the Times of India and OpIndia have done (see below).

What’s more, Dr Giridhar Gyani is neither a medical practitioner nor epidemiologist. He is academically an electrical engineer, who later did a PhD in quality management. He is currently director general at Association of Healthcare Providers (India). – Times of India, March 28

Ramanan Laxminarayanan, who was pitched up as an expert on diseases and epidemics by the media outlets of the country, however, in reality, is not an epidemiologist. Dr Ramanan Laxminarayanan is not even a doctor but has a PhD in economics. – OpIndia, March 22

Expertise has been humankind’s way to quickly make sense of a world that has only been becoming more confusing. But historically, expertise has also been a reason of state, used to suppress dissenting voices and concentrate political, industrial and military power in the hands of a few. The former is in many ways a useful feature of society for its liberating potential, while the latter is undesirable because it enslaves. People frequently straddle both tendencies – especially now, with the government in charge of the national anti-coronavirus response.

An immediately viable way to break this tension is to negotiate our relationship with experts themselves.

Slaying the Snark: What Lewis Carroll’s Nonsense Poem Tells Us About Reality

We talk about ‘common sense’ or dismiss things as ‘nonsense’, but we rarely think about what sense itself is until it goes missing.

The English writer Lewis Carroll’s nonsense poem The Hunting of the Snark (1876) is an exceptionally difficult read. In it, a crew of improbable characters boards a ship to hunt a Snark, which might sound like a plot were it not for the fact that nobody knows what a Snark actually is. It doesn’t help that any attempt to describe a Snark turns into a pile-up of increasingly incoherent attributes: it is said to taste ‘meagre and hollow, but crisp: / Like a coat that is rather too tight in the waist’.

The only significant piece of information we have about the Snark’s identity is that it might be a Boojum. Unfortunately, nobody knows what that is either, apart from the fact that anyone who encounters a Boojum will ‘softly and suddenly vanish away’ into nothingness.

Nothingness also characterises the crew’s map: a ‘perfect and absolute blank!’

‘What’s the good of Mercator’s North Poles and Equators,
Tropics, Zones and Meridian Lines?’
So the Bellman would cry: and the crew would reply,
‘They are merely conventional signs!’

Nonsense such as this might get tiresome to read, but it can make for a useful thought-experiment – particularly about language. In the Snark, as in the Alice books of 1865 and 1871, the commonsense assumptions that usually govern language and meaning are turned upside down. It makes us wonder what all of those assumptions are up to, and how they work. How do we know that this sentence is trying to say something serious, or that where we are now is not a dream?

Language can’t always convey meaning on its own – it might need sense, the governing context that frames it. We talk about ‘common sense’, or whether something ‘makes sense’, or dismiss things as ‘nonsense’, but we rarely think about what sense itself is until it goes missing. The German logician Gottlob Frege in 1892 used sense to describe a proposition’s meaning, as something distinct from what it denoted. Sense, therefore, appears to be a mental entity, resistant to fixed definition.

Shortly after Carroll’s death in 1898, a seismic turn took place in both logic and metaphysics. Building on Frege, logicians such as Bertrand Russell sought to deploy logic and mathematics to establish unconditional truths. A logical truth was, like mathematics, true whether or not people changed their minds about it. Realism, the belief in a mind-independent reality, began to assert itself afresh after a long spell in the philosophical wilderness.


Sense and nonsense would therefore become landmines in a battle over logic’s ability to untether truth from thought. If an issue over meaning seeks recourse in sense, it seeks recourse in thought too. Carroll anticipated where logic was headed, and the strangest of his creations was more than a game, an experiment conceived, as the English author G K Chesterton once wrote of his work, ‘in order to study that darkest problem of metaphysics’.

In 1901, the pragmatist philosopher and provocateur F.C.S. Schiller created a parody Christmas edition of the philosophical journal Mind called Mind!. The frontispiece was a ‘Portrait of Its Immanence the Absolute’, which, Schiller noted, was ‘very like the Bellman’s map in the Hunting of the Snark’: completely blank.

The Absolute – or the Infinite or Ultimate Reality, among other grand aliases – was the sum of all experience and being, and inconceivable to the human mind. It was monistic, consuming all into the One. If it sounded like something you’d struggle to get your head around, that was pretty much the point. The Absolute was an emblem of metaphysical idealism, the doctrine that truth could exist only within the domain of thought. Idealism had dominated the academy for the entirety of Carroll’s career, and it was beginning to come under attack. The realist mission, headed by Russell, was to clean up philosophy’s act with the sound application of mathematics and objective facts, and it felt like a breath of fresh air.

Schiller delighted in trolling absolute idealists in general and the English idealist philosopher F. H. Bradley in particular. In Mind!, Schiller claimed that the Snark was a satire on the Absolute, whose notorious ineffability drove its seekers to derangement. But this was disingenuous. Bradley’s major work, Appearance and Reality (1893), mirrors the point, insofar as there is one, of the Snark. When you home in on a thing and try to pin it down by describing its attributes, and then try to pin down what those are too – Bradley uses the example of a lump of sugar – it all begins to crumble, and must be something other instead. What appeared to be there was only ever an idea. Carroll was, contrariwise, in line with idealist thinking.

A passionate logician, Carroll had been working on a three-part book on symbolic logic that remained unfinished at his death. Two logical paradoxes that he posed in Mind and shared privately with friends and colleagues, such as Bradley, hint at a troublemaking sentiment regarding where logic might be headed. ‘A Logical Paradox’ (1894) resulted in two contradictory statements being simultaneously true; ‘What the Tortoise Said to Achilles’ (1895) set up a predicament in which each proposition requires an additional supporting proposition, creating an infinite regress.

A few years after Carroll’s death, Russell began to flex logic as a tool for denoting the world and testing the validity of propositions about it. Carroll’s paradoxes were problematic and demanded a solution. Russell’s response to ‘A Logical Paradox’ was to legislate nonsense away into a ‘null-class’ – a set of nonexistent propositions that, because it had no real members, didn’t exist either.


Russell’s solution to ‘What the Tortoise Said to Achilles’, tucked away in a footnote to the Principles of Mathematics (1903), entailed a recourse to sense in order to determine whether or not a proposition should be asserted in the first place, teetering into the mind-dependent realm of idealism. Mentally determining meaning is a bit like mentally determining reality, and it wasn’t a neat win for logic’s role as objective sword of truth.

In the Snark, the principles of narrative self-immolate, so that the story, rather than describing things and events in the world, undoes them into something other. It ends like this:

In the midst of the word he was trying to say,
In the midst of his laughter and glee,
He had softly and suddenly vanished away –
For the Snark was a Boojum, you see.

Strip the plot down to those eight final words, and it is all there. The thing sought turned out, upon examination, to be something else entirely. Beyond the flimsy veil of appearance, formed from words and riddled with holes, lies an inexpressible reality.

By the late 20th century, when Russell had won the battle of ideas and commonsense realism prevailed, critics such as Martin Gardner, author of The Annotated Hunting of the Snark (2006), were rattled by Carroll’s antirealism. If the reality we perceive is all there is, and it falls apart, we are left with nothing.

Carroll’s attacks on realism might look nihilistic or radical to a postwar mind steeped in atheist scientism, but they were neither. Carroll was a man of his time, taking a philosophically conservative party line on absolute idealism and its theistic implications. But he was also prophetic, seeing conflict at the limits of language, logic and reality, and laying a series of conceptual traps that continue to provoke it.

The Snark is one such trap. Carroll rejected his illustrator Henry Holiday’s image of the Boojum on the basis that it needed to remain unimaginable, for, after all, how can you illustrate the incomprehensible nature of ultimate reality? It is a task as doomed as saying the unsayable – which, paradoxically, was a task Carroll himself couldn’t quite resist.

Nina Lyon is working on a Ph.D. on The Hunting of the Snark at Cardiff University. Her first book, Uprooted (2016), was the recipient of the Roger Deakin Award. 

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article.

Infinite in All Directions: Joanne’s Emails, Alien Religions, the Effects of Awe

Infinite in All Directions is The Wire’s science newsletter. Click here to subscribe and receive a digest of the most interesting science news and analysis from around the web every Monday, 10 am.

Reading science right

Credit: eekim/Flickr, CC BY 2.0

Erik Klemetti gives a quick overview of what ails science communication at the moment – and why it’s important that you, as a reader, be savvy enough to sidestep the effects of these ailments. For example:

So, be careful when you read science in the news. You should take sensational conclusions with a grain of salt the size of a Gerald Ford-class aircraft carrier. Some ways you can be confident:

  • Does the article link back to the original study?
  • Do they speak to scientists that are not part of the original study?
  • Do they present the findings as certainty or hypotheses that need continued work?
  • Is there an indication of the size of the data set from which the findings were made?
  • Does the article just feel like it’s trying to make things bigger than they seem (is it believable)?

I’d like to add one more point to this list: what are the odds that a big finding is coming along in science? A ‘big finding’ of the kind some mediapersons like comes about far less often in science, which advances incrementally. And don’t let those people tell you that such incremental progress is not good enough to cover – it is, and has always been. After all, it’s such progress that’s given us every damned result. If anything, the media has a responsibility to drive home that point.

§

Annotated science writing


My friend, the science writer Shannon Hall, is the managing editor of Storygram, which I recommend you check out for its brilliant, why-didn’t-I-think-of-it-first concept: having successful science writers annotate science articles published at various outlets. The annotations are in-line (i.e. they appear adjacent to the words they’re addressing) and are written in a didactic style. What emerges is a criticism of the piece that readers can learn from and emulate – as readers or as writers.

Here’s an example:

At 2:46 p.m. on March 11, 2011, the Pacific Plate, just off Japan’s northeast coast, suddenly thrust downward, unleashing a monstrous, 9.0-magnitude earthquake that rocked the country for the next six minutes. The massive Tohoku quake and resulting tsunami are believed to have killed at least 16,000 people and injured 6,000 more. Another 2,600 people are still missing and presumed dead. The quake was the most powerful to ever strike Japan, and was the fourth-largest ever recorded. It also was the first earthquake to be heard in outer space, and was the most expensive natural disaster in human history, generating $235 billion in total damage.

[Annotation] Leading with numbers could potentially sound boring; it also breaks a general rule about starting with a specific, human connection to the reader. But I think in no other way could Ghorayshi show you immediately that the scale of this earthquake was inhumanly large. Sentence after sentence, the numbers keep coming and pile up on each other until you think, “Yes, that’s big.”

But there was a silver lining, if you could call it that: Tohoku was also the first time that Japanese citizens were given the precious, if limited, gift of time.

[Annotation] And yay, this story isn’t going where you’d expect it to! After that buildup, you’d expect a story about the heroism of recovery, not a 60-second early-warning system. I think of this as a knight’s-move structure: Here’s where you think this is headed … but whoops! We’re turning left. You can get away with this only if you know exactly where you’re going—otherwise the reader just gets confused—which I’m betting Ghorayshi does. (Note added later: When I wrote this comment, I hadn’t read the subhed, signaling that this is a story about California earthquakes. I never read subheds. If I had, I wouldn’t have been as happily surprised about where Ghorayshi went as I would have been about the unconventional place where she began, with an earthquake not in California but in Japan. These little knight’s-moves give nonfiction stories—which often follow predictable plots—some of the unexpected delight of real life.)

§

Statistical common sense

Source: YouTube

This isn’t a new or contemporaneous piece of news but something I’ve always had to struggle to come to terms with: the Monty Hall problem. If you haven’t heard of it (which, believe me, would have to be a bit strange), this is Alan Bellows:

There is a classic mathematical nuisance known as the Monty Hall problem which can be hard to wrap the mind around. It is named after the classic game show “Let’s Make a Deal,” where a contestant was allowed to choose one of three doors, knowing that a valuable prize waited behind one, and worthless prizes behind the others.

On the show, once the contestant made their choice, Monty Hall (the host) opened one of the other doors, revealing one of the worthless prizes. He would then open the contestant’s chosen door to reveal whether they picked correctly. The Monty Hall problem asks, what if the contestant were allowed to change her door choice after she saw the worthless prize? Would it be to her advantage to switch doors? In other words, if the contestant guesses that the new car lay behind door #1, and Monty opened door #2 to reveal a goat, is the new car more likely to be behind door #1, or door #3?

Common sense dictates that switching shouldn’t make a difference – but the correct answer is that you should switch every time. To get the hang of why, take a look at this simulator put together by the folks at Damn Interesting. It simulates the actions of two players – A (who never switches doors) and B (who always switches doors). As hundreds of scenarios are played out, you realise where you might be getting it wrong.

When you switch doors, your chances of success move up from 33% to 67%. The reason: your first pick is right only a third of the time, and switching wins whenever that first pick was wrong – which is two-thirds of the time. So even if you switched and then found a goat behind the door, the switch-every-time method would still be the right way to go, because it provides better outcomes over multiple attempts.
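If you’d rather convince yourself offline, here’s a minimal sketch of the same experiment in Python – to be clear, my own illustration of the idea, not the code behind the Damn Interesting simulator:

```python
import random

def play(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)   # the winning door
    pick = random.choice(doors)  # the player's initial choice
    # Monty opens a door that is neither the player's pick nor the car.
    monty = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Move to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != monty)
    return pick == car

trials = 100_000
stay = sum(play(switch=False) for _ in range(trials))
swap = sum(play(switch=True) for _ in range(trials))
print(f"Stay:   {stay / trials:.3f}")   # hovers around 0.333
print(f"Switch: {swap / trials:.3f}")   # hovers around 0.667
```

Player A’s strategy (never switch) wins about a third of the time; player B’s (always switch) wins about two-thirds – the same pattern the simulator makes visible.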

§

Recap


Since you’re so into Infinite in All Directions (sign up here), you could also check out the amazing stuff that appeared in The Wire recently.

  • A PSLV launch happened about 48 minutes ago. I wrote about the seven reasons the launch is noteworthy (check out #4).
  • Sangeetha Balakrishnan wrote a quick primer on what the Ig Nobel Prizes are and how they make for an important teaching moment in this day and age. Quick take: “The postmodern science student wants more – and so the postmodern science teacher brings in the Ig Nobels when she teaches electrochemistry.”
  • Janaki Lenin, Meghna Uniyal and Abi Vanak had an important, if provocative, argument to make: if dogs threaten the safety of people on the streets (particularly in India), should we kill the dogs? Because clearly Indian administrators and welfare workers are doing nothing to solve this problem.
  • Curiosity among curiosities, both the states’ and the national health policies in India make no reference to rare genetic disorders. One effect of this has been that, earlier this month, the country’s thalassaemia patients were deprived of a life-saving drug.
  • Divers found human remains in the famous Antikythera shipwreck, which lies off the coast of Greece and is over 2,000 years old. Researchers think the DNA in the remains can be reliably tested. Would the answers provide any clues about whence the enigmatic Antikythera mechanism came?

§

Joanne Cohn’s mailing list

Credit: adikos/Flickr, CC BY 2.0

Chances are you haven’t heard of Joanne Cohn. She’s an astrophysicist at the University of California, Berkeley. In 1989, she started an email list through which she distributed pre-prints of scientific papers to people who were interested in reading them before they got published. As she writes,

Slightly later on, I learned how to use an email exploder. I began to systematically expand the number of names on the list I was sending papers to. I also expanded the role of the mailing list from just a list which received papers I had, to a group of people who both received and contributed papers. In this way, it became a way for people to exchange papers more generally. I started asking people I knew to send me their papers. I also asked people I didn’t know, if I saw their papers and their emails were available, for their papers (and simultaneously invited them to join the list). People also requested to join the list who heard of it via word of mouth; I would add them and also request them to send their papers. Often papers would go to one person who would print it out for the group (although some research groups requested me to send papers to all members individually, which I did). Eventually (by 1991 summer) it was reaching several countries and institutions. I believe it had about 180 people (a small number by today’s standards!) and reached over 20 countries.

In 1991, Paul Ginsparg, a friend of Cohn’s, automated the system by building a network through which registered users could add new papers while others could access or download them. The network – i.e. a less labour-intensive version of what was essentially Cohn’s mailing list – went online on August 14, 1991, under the name ‘arXiv’. Today, the arXiv server contains over a million pre-print papers with over 8,000 submissions a month.

Thank you, Joanne!

§

Saving alien souls

Credit: remix-man/Flickr, CC BY 2.0

Fantastic plot-point right here (Dan Brown, hope you’re listening): if we find aliens, will the Church be obliged to convert them to Christianity?

I’d quote the entire article, by Ian Lovett in the WSJ, if I could, but here’s the proto-problem, as it were:

“It’s just planet Earth that has spiritual beings in need of redemption,” said Hugh Ross, an astrophysicist who founded Reasons to Believe, a ministry that seeks to show that science supports Christian scripture. “That doesn’t rule out dolphins or grass or bacteria on another planet,” he said, but he doesn’t expect to find life anywhere else in the universe. He added, “It’s not Jesus Christ dying on 1,000 planets.”

Some theologians argue that Ms. Vanderwall’s observation—that the Bible “teaches us to be good people”—is precisely why Christians shouldn’t plan to baptize alien life-forms. The Gospel tells of the fall and salvation of humanity, they say, not of other beings on some far-off world. Other theologians posit that intelligent extraterrestrials would have their own relationships with God.

Beyond the theological questions lie serious practical obstacles. “Communication would take a long time, obviously,” said Deborah Haarsma, president of BioLogos, a group that promotes the idea that Christianity and science are in harmony. Even if communication were possible, she added, “Can we communicate about something as profound as God?”

§

See your irony, raise you a gamble

Credit: oliverdodd/Flickr, CC BY 2.0

Companies operating cruise-liners to remote locations, especially where receding glaciers are exposing long-preserved ecosystems, have an easy pitch: the trip, according to one, takes the traveller “through majestic waterways, spectacular glaciers, and towering fjords… where nature is truly wild and landscapes are absolutely breathtaking.” Well-written, but oh the irony.

According to Smithsonian Mag, environmentalists were quick to point out that taking the cruise would add to the carbon footprint, which in turn would drive global warming and melt those glaciers more. A somewhat more touchy peg to have this discussion over would be the annual Conference of the Parties to the UNFCCC, when leaders jet across the world in their private planes to a meeting and talk about polluting the world less.

Clearly, it’s a question of priorities: there’s no way leaders are going to travel together in one big plane with everyone else. But can a person who has never observed the effects of climate change first-hand have a moving enough experience on such a cruise to want to work on a solution afterwards?

Judith Stark, a professor at Seton Hall University who specializes in applied ethics, thinks about these questions all the time. “Going to these really remote places, what does that do to the ecological integrity to the places themselves?” she says. “It’s really a matter of balancing the value of that experience and the educational opportunity of that experience with the inherent value of nature and species that are not simply there for our use and our entertainment. To try and balance those two is difficult.”

For people living in developed countries—especially people that live away from the coast and aren’t familiar with coastal flooding or sea level rise—the consequences of climate change can feel far off and impersonal. Traveling to a place impacted by climate change can bring it home. If a journey has enough of an impact that it causes someone to make changes in their daily life, or gets them talk to friends and family about the dangers of climate change, Stark says, then that trip could be considered “morally acceptable.”

“Traveling to a place impacted by climate change can bring it home.” Can it really? People who visit these places experience a sense of awe that, scientific research has shown, opens them up to being more mindful of their environment as well as to new information. At the same time, repeated doses of awe can blunt the effects of the sensation and make the person experiencing it feel more alienated – at least according to Dacher Keltner, a psychologist at the University of California, Berkeley. Based on Keltner’s review of how humans experience awe and the effect it has on them across various cultures, Michelle Nijhuis concludes for The Atlantic that “Only with the luxury of distance, it seems, can we experience awe as awesome in every sense.”

If you’re going on that cruise, maybe just do it once.

§

Go big but never go homeo

Credit: mikeblogs/Flickr, CC BY 2.0

I cannot recommend this article by Edzard Ernst in the Guardian enough, where he clearly and bluntly describes how the weight of evidence is against homeopathy being anything more than a placebo. The piece was published after “a comprehensive, transparent and evidence-based review from a panel of experts who are competent and free of conflicts of interest” in Australia found nothing to suggest homeopathy could be an effective system of medicine.

Ernst is a British academic specialising in the study of alternative medicine and its practices. He has been writing about the dangers of pseudo-scientific medical treatments, particularly homeopathy, on his popular blog as well. I highly recommend you follow it (as well as Andy Lewis’s Quackometer) for authoritative takedowns of the latest in medical BS.

§

Dyson’s future

Freeman Dyson. Source: YouTube

If you’re going to look at the future of space exploration as a problem of engineering, you’re going to find it boring. At least, that’s the takeaway from Freeman Dyson’s new essay in the New York Review of Books. Dyson is a celebrated theoretical physicist and science communicator. (This newsletter takes its name from one of his books.)

Actually, it’s a review – of three books, dispatched in the dullest fashion in the first three-quarters of the piece, after which Dyson embarks on a speculative adventure. Specifically, he recommends paying more attention to biotechnology: to the things we will do after we’ve built a rocket and gotten from point A to point B. Sample this from the full essay:

From this point on, everything I say is pure speculation, a sketch of a possible future suggested by [Konstantin] Tsiolkovsky’s ideas. Sometime in the next few hundred years, biotechnology will have advanced to the point where we can design and breed entire ecologies of living creatures adapted to survive in remote places away from Earth. I give the name Noah’s Ark culture to this style of space operation. A Noah’s Ark spacecraft is an object about the size and weight of an ostrich egg, containing living seeds with the genetic instructions for growing millions of species of microbes and plants and animals, including males and females of sexual species, adapted to live together and support one another in an alien environment.

After the inevitable mistakes and failures, we will have acquired the knowledge and skill to build such Noah’s Arks and put them gently into suitable places in the sky. Suitable places where life could take root are planets and moons, and also the more numerous cold dark objects far from the sun, where air is absent, water is frozen into ice, and gravity is weak. The purpose is no longer to explore space with unmanned or manned missions, but to expand the domain of life from one small planet to the universe. Each Noah’s Ark will grow into a living world of creatures, as diverse as the creatures of Earth but different. For each world it may be possible to develop genetic and other instructions for growing a protected habitat where humans can live in an Earth-like environment. The expansion of human societies into the universe will be a small part of the expansion of life. After the expansion of life and the expansion of human societies have started, the new ecologies will continue to evolve in ways that we cannot plan or predict. The humans in remote places will then also have the freedom to evolve, so that they can move out of protected habitats and walk freely on the worlds where they have settled.

Like what you read? Subscribe to the newsletter.