Alan Turing and the Limits of Computation

Turing wore many hats during his lifetime. But his fundamental contribution to the development of computing as a mathematical discipline is perhaps where his most significant scientific impact endures.

Note: June 23 is Alan Turing’s birth anniversary.

Alan Turing wore many scientific hats in his lifetime: a code-breaker in World War II, a prophetic figure of artificial intelligence (AI), a pioneer of theoretical biology, and a founding figure of theoretical computer science. While the first two of these roles continue to catch the fancy of popular culture, his fundamental contribution to the development of computing as a mathematical discipline is perhaps where his most significant scientific impact endures.

Turing’s work emerged from a long legacy in the history of ideas and tools. From the time of the Sumerian abacus, we have a recorded history of attempts to create tools that could simulate at least certain aspects of human reasoning. For most of history, these tools remained technologically rudimentary and limited mostly to arithmetical calculations.

Leibniz’s calculator

In 1694, the German polymath Gottfried Wilhelm Leibniz built the Stepped Reckoner, a mechanical calculator that could perform all four arithmetical operations. It is noteworthy that this invention preceded the industrial revolution and the invention of the steam engine, but came during the early phase of the rise of capitalism in Europe, which brought with it both a desire and a demand for technology to aid production.

In the spirit of his times, Leibniz believed that all human reason could be mechanised. As a precondition to this mechanisation, he foresaw a universal formal language in which human thought could be expressed precisely and unambiguously, which he termed the characteristica universalis. Given such a formal language, the validity of any argument could be checked by a calculating machine, which would treat symbols of the language the way a calculator treats arithmetical expressions, and arguments the way it treats calculations. In Leibniz’s words in The Art of Discovery:

“The only way to rectify our reasonings is to make them as tangible as those of the Mathematicians, so that we can find our error at a glance, and when there are disputes among persons, we can simply say: Let us calculate [calculemus], without further ado, to see who is right.”

Leibniz’s Stepped Reckoner. Photo: Kolossos/Wikimedia Commons, CC BY-SA 3.0

While such a machine, or even such a language, remained elusive in Leibniz’s lifetime, the idea sparked lines of enquiry that persisted for centuries. One of its key contributions was the attention it drew to the fact that any attempt to mechanise reason requires a language and a set of rules of reasoning that are precise, formal and unambiguous. Nowhere did the attempt to make the language of expression and the methods of reasoning precise and formal find as much success as it did in mathematics during the late 19th and early 20th centuries, following the development of set theory, logic and the philosophy of mathematics.

In this intellectual context, the mathematicians David Hilbert and Wilhelm Ackermann asked whether there exists an algorithm which takes as input a statement of formal logic and returns ‘True’ or ‘False’, depending on whether the statement is valid, that is, true under every interpretation of its symbols. This problem later came to be famously known as Hilbert’s Entscheidungsproblem (the ‘decision problem’).

To answer such a question in the affirmative would mean producing an algorithm that achieves the task. But to answer it in the negative would mean proving the impossibility of such an algorithm – not merely that such an algorithm does not currently exist, but that it can never exist. A proof of such an impossibility would necessarily have to take the form of a mathematical proof by logical deduction.

There was, however, another bridge to be crossed before attempting either such a proof or such an algorithm. Answering the question would require a definition of an algorithm in the language of mathematics, one that would make it amenable to mathematical reasoning by means of proof. Informally, an algorithm can be described as a finite sequence of rigorous instructions for solving a particular problem, or a precisely stated general method that leads to the solution of a problem.

Algorithms were known long before any mechanical device on which they could be implemented arrived on the scene. Since an algorithm has to be understood as a general method rather than a technological entity, it was for most of history a procedural guide for humans to arrive at the solutions of problems.

For instance, Euclid’s algorithm from the 3rd century BC can be used to find the greatest common divisor (GCD) of two numbers; the Persian mathematician Al-Khwarizmi (whose name, rendered in Latin as Algoritmi, gave us the word ‘algorithm’) described methods to solve algebraic equations; and Indian mathematicians were known for using such methods to solve algebraic problems. Ada Lovelace wrote the first computer programme a century before an actual computer was built.
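To make the idea of an algorithm as a general method concrete, here is a minimal sketch of Euclid’s algorithm in Python; the function name gcd and the example numbers are illustrative and not taken from the original sources.

```python
# A minimal sketch of Euclid's algorithm: find the greatest common divisor of
# two numbers by repeatedly replacing the pair with the smaller number and the
# remainder of the division, until the remainder is zero.

def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b   # the remainder step at the heart of Euclid's method
    return a

print(gcd(1071, 462))  # -> 21
```

The same recipe works whether it is carried out by a person with pen and paper or by a machine, which is precisely what makes it an algorithm in the general sense.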

However, in order to reason about the existence or non-existence of an algorithm, the first critical step was to define it formally, in a way that encompasses all known understandings of an algorithm and yet is precise enough to be expressed as a purely mathematical object. Such a definition had to be independent of any present or future technology on which the algorithm might be implemented. In fact, it had to be independent even of whether the algorithm is carried out by a human or by a machine, so that it would cover all possible algorithms that can be implemented on all possible machines, now or in the future.

Turing’s contribution

This was Turing’s first major contribution, in his seminal paper ‘On Computable Numbers, with an Application to the Entscheidungsproblem’: a formal definition of an algorithm, now called the Turing Machine. His definition drew its key insights from observing how mathematicians perform their calculations. First, any calculation done on a sheet of paper with length and breadth can equally be done on a single tape divided into cells. Second, every computational task in mathematics involves starting with some terms and rewriting them according to prespecified rules, while keeping track of a ‘state’ that depends on the current stage of the calculation; the states themselves change according to prespecified transition rules.

For instance, consider the addition 13 + 27. It can be carried out on a single tape. Adding the numbers involves writing a new number, obtained by first adding the right-most digits of the two numbers (3 + 7 = 10) and then the left-most digits. Adding the left-most digits, however, involves a carry-over: it can be regarded as addition performed in a different state, in which the usual addition of digits is followed by increasing the result by 1. Such information, used while processing the numbers, is what is encoded in the abstraction termed a state.
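As a rough illustration of how a ‘state’ carries the extra information needed during a calculation, the following Python sketch performs digit-by-digit addition with an explicit carry state. The names add_with_states, CARRY and NO_CARRY are purely illustrative and not part of Turing’s construction.

```python
# A minimal sketch of grade-school addition with the carry modelled as an
# explicit state, processing digit pairs from right to left.

def add_with_states(a: str, b: str) -> str:
    """Add two equal-length decimal strings using an explicit carry state."""
    state = "NO_CARRY"                       # the current state of the 'machine'
    result = []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + (1 if state == "CARRY" else 0)
        result.append(str(total % 10))       # digit written on the 'tape'
        state = "CARRY" if total >= 10 else "NO_CARRY"   # state transition
    if state == "CARRY":
        result.append("1")
    return "".join(reversed(result))

print(add_with_states("13", "27"))  # -> "40"
```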

A physical Turing machine model. A true Turing machine would have unlimited tape on both sides; physical models can only have a finite amount of tape. Photo: Rocky Acosta/Wikimedia Commons, CC BY 3.0

A Turing Machine thus consists of a single tape divided into cells, a set of states, a set of rules, and a head which can move left or right and can write, rewrite or erase the entry in a cell, depending on the current entry in the cell and the current state of the machine. While this may sound more like a description of a machine than of a mathematical object, the beauty is that this mechanical device can be described mathematically using the basic vocabulary of sets and functions. Turing realised that any algorithmic or computable process can be simulated using this abstract model.
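The following Python sketch shows how such a machine can be written down concretely as a transition table over states and tape symbols. The example machine, which simply flips every bit of a binary string and halts, is a toy chosen for illustration and is not taken from Turing’s paper.

```python
# A minimal sketch of a Turing machine: a tape (a dictionary of cells), a set
# of states, and a transition table mapping (state, symbol) to
# (symbol to write, head movement, next state).

BLANK = "_"

def run_turing_machine(transitions, tape_input, start, accept):
    tape = dict(enumerate(tape_input))    # cell index -> symbol
    head, state = 0, start
    while state != accept:
        symbol = tape.get(head, BLANK)
        write, move, state = transitions[(state, symbol)]
        tape[head] = write                # write or rewrite the current cell
        head += 1 if move == "R" else -1  # move the head right or left
    return "".join(tape[i] for i in sorted(tape)).strip(BLANK)

# Transition table for a machine that flips every bit and halts on a blank.
flip = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", BLANK): (BLANK, "R", "halt"),
}

print(run_turing_machine(flip, "1011", start="scan", accept="halt"))  # -> "0100"
```

Notice that the machine itself is nothing but the dictionary flip: a finite table of states and rules, which is exactly the kind of object that sets and functions can describe.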

One could argue that this is just one specific model of computation, based on certain abstractions and simplifications. As is often observed in mathematical modelling, a single phenomenon can lend itself to multiple mathematical models, depending on what is retained and what is abstracted away. What lends credibility to this particular model is that Alonzo Church, independently and at roughly the same time, arrived at an entirely different and more abstract mathematical model of computation, the lambda calculus. Remarkably, the two distinct definitions of computation turned out to be equivalent. This led to what is called the Church-Turing thesis, which asserts that our intuitive understanding of computability coincides with the formal definitions laid down by Church and Turing.
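As a small taste of Church’s alternative view, the following sketch expresses numbers and addition purely as functions (Church numerals), written here in Python syntax rather than the lambda calculus proper; the helper to_int exists only to display the result and is not part of the calculus.

```python
# A minimal sketch of Church numerals: a number n is represented as the
# function that applies another function n times. Arithmetic becomes nothing
# but function application, with no machine or tape in sight.

zero = lambda f: lambda x: x                          # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))       # apply f one more time
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Convert a Church numeral to an ordinary integer by counting applications."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))   # -> 5
```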

Using the notion of computation he had defined, Turing went on to prove that there exist problems that cannot be solved by any Turing Machine. Several variants of this result, popularly known as Turing’s halting problem, are widely known, and several excellent references can be found online (Martin Davis’s exposition in an essay is highly recommended). The heart of the argument is that if we assume that every problem lends itself to an algorithmic solution, we arrive at a logical contradiction. A simplified version of the question is whether there exists an algorithm which takes as input any algorithm (say algorithm A) and an input I, and determines whether algorithm A halts on input I. Such a statement allows for self-referentiality, leading to a paradox. This resolved the Entscheidungsproblem in the negative, by showing that there are statements whose truth value cannot be determined by any algorithm.
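The self-referential core of the argument can be sketched in code. Suppose, for the sake of contradiction, that a function halts(program, argument) could always answer correctly; the names below are illustrative, and the point of the sketch is precisely that no such function can in fact be written.

```python
# A sketch of the self-referential argument behind the halting problem.

def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) eventually halts.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError("no such oracle can actually be implemented")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on itself.
    if halts(program, program):
        while True:   # loop forever
            pass
    else:
        return        # halt immediately

# Now ask: does troublemaker(troublemaker) halt? If the oracle says it halts,
# it loops forever; if the oracle says it loops, it halts at once. Either way
# the oracle is wrong, so no such `halts` can exist.
```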

While Turing’s approach to and solution of the Entscheidungsproblem were remarkable in themselves, their consequences have been equally illuminating. An important consequence is that there are theoretical limits to computation. These are limits which do not depend on any physical model of a computer, but on the very idea of computation. Nor are they empirical facts that can change with growing evidence; they will hold true as long as our basic assumptions about what constitute valid rules of logic remain the same. Equally interesting is that the statement discovered by Turing did not remain the only one whose truth value does not lend itself to an algorithmic solution. Problems encoded by such statements are termed ‘undecidable problems’. Many distinct problems have since been proved to be undecidable, though quite often they can be directly or indirectly reduced to Turing’s halting problem.

An interesting example of an undecidable problem is the word problem in string rewriting. Consider a game where the task is to obtain one word from another by replacing, at each step, a part of the word with another sequence of letters, according to a given set of instructions (substitution rules). By words we mean any string of letters, not just words in the dictionary. Let us call any such game an instance of string rewriting. As an example, consider a particular instance of the game where the input consists of the following set of instructions:

  1. Replace bit in a word with bot.
  2. Replace arb in a word with v.
  3. Replace tra in a word with le.

Given these instructions, suppose we have to find out whether the word volery can be derived from the word arbitrary. As it happens, it can: arbitrary -> arbotrary -> votrary -> volery, which employs rules 1, 2 and 3 respectively. If the two given words are arbitrary and zebra, one can show that it is impossible to derive zebra from arbitrary, since there is no instruction in which the letter z occurs on the right-hand side.

A general instance of this problem can be stated as: given a set of substitution rules like the ones above, and two words, does there exist an algorithm that can answer whether one word can be derived from the other through the substitutions allowed by the rules? While prima facie it feels like a problem a computer programme could easily solve, the answer is no. Emil Post and A.A. Markov proved independently in 1947 that this problem belongs to the class of undecidable problems, and it is thus provably impossible to solve by means of an algorithm!
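To see why this does not simply yield to brute force, here is a Python sketch of a breadth-first search over the rules from the example above; the names RULES, rewrites and derive are illustrative. When the target word is derivable, such a search eventually finds a derivation; in general, however, it may run forever when the target is not derivable, which is why the existence of this search does not contradict the undecidability result.

```python
# A brute-force search over string-rewriting rules. It confirms derivability
# when a derivation exists, but offers no general guarantee of terminating
# with a "no" answer.

from collections import deque

RULES = [("bit", "bot"), ("arb", "v"), ("tra", "le")]

def rewrites(word):
    """All words obtainable from `word` by one application of one rule."""
    for old, new in RULES:
        start = word.find(old)
        while start != -1:
            yield word[:start] + new + word[start + len(old):]
            start = word.find(old, start + 1)

def derive(source, target, max_steps=10000):
    """Breadth-first search for a derivation of `target` from `source`."""
    queue, seen = deque([source]), {source}
    for _ in range(max_steps):       # a cap purely for demonstration purposes
        if not queue:
            return False             # every reachable word has been explored
        word = queue.popleft()
        if word == target:
            return True
        for nxt in rewrites(word):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None                      # gave up: undecided within the budget

print(derive("arbitrary", "volery"))  # -> True
print(derive("arbitrary", "zebra"))   # -> False (this rule set happens to have
                                      #    a finite search space)
```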

The class of undecidable problems, new members of which continue to be discovered and which shed light on the limits of computation, is perhaps one of Turing’s most lasting legacies in computer science. At a time when every advance in machine-learning-based AI is greeted with either euphoria or despair, and its scientific significance gets mystified and eclipsed by a combination of marketing jargon and corporate soothsaying, Turing’s work may be the sobering dose we need to remind ourselves that there are indeed fundamental limits to computation. Perhaps, like his work and life, Alan Turing’s eventual death, following the persecution he faced for his sexuality, may also serve as a reminder of all the curious minds lost to science and careers cut short owing to social prejudices, which persist to this day, albeit in different forms.

T.V.H. Prathamesh is an assistant professor of computer science at Krea University, Sri City, Andhra Pradesh.