Alan Turing is considered the father of artificial intelligence, and rightfully so. Marrying mathematics with computer science, Turing was among the first to contend that computers could think like humans, and he pioneered the concept of machines that could perform tasks on par with human experts, a bedrock of modern AI to this day. Given the intense interest in AI in recent years, Turing is more famous now than he was at the time of his death in 1954, 15 days shy of his 42nd birthday.
I’m constantly amazed by Turing’s prescience in laying the theoretical groundwork for what he called thinking computers: machines that exhibit intelligent behavior equal to, or indistinguishable from, that of a human. But Turing was working more than 65 years ago, and (give the guy a break) while several of his predictions are uncannily on the mark, he couldn’t foresee every advance shaping life in 2019.
So it is interesting, now and then, to ponder what Turing got right, what he got not quite right, and what he never imagined at all.
To Turing, the Cloud Was a White, Puffy Thing in the Sky
Today, the cloud provides the data storage capacity and enormous processing power fueling AI’s advance across industries, including automotive, healthcare, finance, education and retail. Turing understood that the “digital computing machines” he envisioned would require more memory and storage than the magnetic tape of his era could offer. In his 1947 lecture at the London Mathematical Society, Turing described the need for “infinite memory,” which would have been impractical with the tape technology of the day.
Turing seemed to think, however, that the best hope for this capability was simply more advanced tape, then on the cutting edge of development, or innovations such as “charge patterns on the screen of a cathode ray tube.” He couldn’t have divined the extent to which an interconnected network of computers would give rise to the cloud decades later. In 2019, 70 percent of companies that have adopted AI obtain their AI capabilities through cloud-based software, according to Deloitte.
(This is certainly not a knock against Turing. Who really saw the cloud coming until the ’90s?)
The ‘Turing Test’ May Have Gotten the Date Wrong, but…
In his 1950 paper, now considered an AI manifesto, Turing predicted that by around the year 2000, a computer would be able to answer a human interrogator’s questions in ways indistinguishable from another human’s. While no machine has passed the strictest version of the Turing test, some of today’s computers come very close, and they unquestionably pass the test of answering questions on par with domain experts; IBM’s “Jeopardy”-playing Watson is the most prominent example. Many machines, though, still fall short.
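For concreteness, here is a toy sketch of the test’s protocol in Python. Everything in it (the function names, the random judge, the canned answers) is my own illustration of the idea, not anything from Turing’s paper:

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Toy imitation game: a judge questions two hidden respondents
    and must guess which one ("A" or "B") is the machine."""
    # Shuffle the machine into slot A or B so ordering gives nothing away.
    if random.random() < 0.5:
        slots = {"A": machine_reply, "B": human_reply}
    else:
        slots = {"A": human_reply, "B": machine_reply}
    transcript = [(q, slots["A"](q), slots["B"](q)) for q in questions]
    guess = judge(transcript)  # the judge returns "A" or "B"
    machine_slot = "A" if slots["A"] is machine_reply else "B"
    return guess == machine_slot  # True means the machine was unmasked

# A judge who can do no better than chance identifies the machine only
# about half the time; at that point the machine is indistinguishable.
rounds = 1000
caught = sum(
    imitation_game(
        judge=lambda transcript: random.choice(["A", "B"]),
        human_reply=lambda q: "a human answer",
        machine_reply=lambda q: "a machine answer",
        questions=["What is 2 + 2?"],
    )
    for _ in range(rounds)
)
print(f"machine identified in {100 * caught / rounds:.0f}% of rounds")
```

Turing’s actual criterion was statistical: a machine passes if an average interrogator has no better than a 70 percent chance of identifying it correctly after five minutes of questioning.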
None of this has stopped computers from getting smarter in ways Turing never imagined. Self-driving cars, advanced robots, image recognition and algorithmic trading may not literally pass the Turing test, but most would agree they are stunning examples of AI’s progress.
Turing doesn’t seem to have envisaged just how sophisticated AI would become, with deep learning giving computers the ability to learn by mimicking the information-processing patterns of the human brain. In today’s advanced AI, machines learn less from humans than from data, and their success depends on the quality of that data, not on how smart the human programmer is. This is a distinctly 21st-century leap in thinking.
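To make that point concrete, here is a minimal sketch using scikit-learn (my choice of tooling, obviously nothing Turing specified). The very same learning algorithm succeeds or fails depending on the quality of its training labels:

```python
# The same algorithm, trained on clean vs. noisy labels, ends up with
# very different accuracy: the model learns from the data, not from the
# programmer's cleverness.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def fit_and_score(label_noise: float) -> float:
    # flip_y randomly corrupts this fraction of labels, simulating
    # low-quality training data.
    X, y = make_classification(n_samples=2000, n_features=20,
                               flip_y=label_noise, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return accuracy_score(y_test, model.predict(X_test))

print(f"clean labels: {fit_and_score(0.0):.2f}")  # strong accuracy
print(f"noisy labels: {fit_and_score(0.4):.2f}")  # markedly worse
```

Nothing about the programmer changes between the two runs; only the data does.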
I believe Turing grasped the concept of AI in broad strokes; for example, he wrote that an intelligent machine “would be like a pupil who had learnt much from his master, but had added much more by his own work.” But he wrote elsewhere that “the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards.” I’m not sure whether the machines of the future will learn from their human masters so much as from the myriad labeled data on the internet.
Will Machines Replace Humans?
I see little sign in Turing’s writings and speeches that he spent much time thinking about whether smart machines might take humans’ jobs. He was a mathematical theorist fascinated by the potential for computers to become far more intelligent, but I’m not sure to what extent he anticipated today’s debate over whether AI is destroying jobs or augmenting them.
One clue, though, might be this: Turing saw AI, as it would come to be called, as an opportunity for “the centre of gravity of human interest (to) be driven further and further into philosophical questions of what in principle can be done.” In other words, let the machines free up human creativity. That’s truly what AI should be about.
While Alan Turing didn’t get everything right to a ‘T’ and couldn’t have fully imagined life in the 21st century, he was a computer science prophet who saw how artificial intelligence could change the world. All of us in the field owe him a huge debt of gratitude.
About the Author
Bob Friday is co-founder and chief technology officer of Mist, which develops self-learning wireless networks using artificial intelligence. His career as a wireless entrepreneur has focused on developing and bringing unlicensed wireless technologies to market.