NEWS

Marvin Minsky, founding father of artificial intelligence, wins the BBVA Foundation Frontiers of Knowledge Award in Information and Communication Technologies

The BBVA Foundation Frontiers of Knowledge Award in the Information and Communication Technologies category goes in this sixth edition to American Marvin Minsky, regarded as a founding father of the artificial intelligence field. Minsky is also the author of key theoretical and practical contributions in mathematics, cognitive science, robotics and philosophy. A co-founder of the prestigious Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, he was also instrumental in establishing the MIT Media Lab.

14 January 2014

Minsky: “Right now one thing is sure: there’s something wrong with any claim to know, today, of any basic differences between the minds of men and those of possible machines”

After expressing his contentment at receiving the award, Minsky declared himself firmly convinced that we will one day make machines as smart as humans. On how long this might take, he is less optimistic: “It depends how many people we have working on the right problems. Right now, there is a shortage of both researchers and funding.” True to his reputation as a scientific iconoclast, he adds ruefully: “Artificial intelligence contributed many ideas and methods between the 1960s and 1980s, and then its influence became smaller. I haven’t seen many advances in recent years, because the money is going more to short-term applications than basic research.”

In the words of the jury’s citation: “His work on machine learning, on systems integrating robotics, language, perception and planning, as well as on frame-based knowledge representation, shaped the field of artificial intelligence.”

Minsky (New York, United States, 1927) is Professor of Electrical Engineering and Computer Science at MIT, as well as Toshiba Professor of Media Arts and Sciences. In the 1950s, he became one of the founders of a whole new scientific field, artificial intelligence (AI), whose goal was to transform the computers of the time – essentially calculating machines – into intelligent devices able to incorporate functions mimicking human capabilities and thought. The impact of this shift was immense, since it marked the start of the conventional computer’s transformation into the first universal machine in history, through the progressive addition of new capabilities and its application to countless areas with a bearing on our daily lives.

The image of the computer as a giant calculator employed by a select group of organizations (corporations, governments, a handful of universities) gave way – thanks to the AI field – to the now familiar machine that is ubiquitous in the technology around us and that we use intuitively every day.

Minsky views the brain as a machine whose functioning can be studied and replicated in a computer; this would teach us, in turn, to better understand the human brain and higher-level mental functions. Some of his most inspirational work centers on the attempt to endow machines with common sense, i.e., the knowledge we humans acquire through experience.

He has also sought to account for diverse phenomena of cognition, language understanding and visual perception using the theory of frames, a now ubiquitous way of representing and storing knowledge through hierarchical relations between objects. Frames function as organized repositories of prior knowledge and experience that enable us to process new information.
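
By way of illustration, here is a minimal Python sketch of how a frame system of this kind can be organized: slots hold fillers, and a frame inherits default fillers from the more general frame above it in the hierarchy. The class, slot names and example frames are invented for this sketch and are not drawn from any of Minsky’s own systems.

```python
# A minimal, illustrative sketch of frame-based knowledge representation.
# All names here (Frame, "room", "kitchen", the slots) are hypothetical.

class Frame:
    """A frame: a named bundle of slots, with a parent for inheritance."""

    def __init__(self, name, parent=None, **slots):
        self.name = name
        self.parent = parent      # hierarchical relation to a more general frame
        self.slots = dict(slots)  # slot -> filler (a value or a default)

    def get(self, slot):
        """Look up a slot, falling back to ancestor frames for defaults."""
        frame = self
        while frame is not None:
            if slot in frame.slots:
                return frame.slots[slot]
            frame = frame.parent
        raise KeyError(f"{self.name} has no filler for slot '{slot}'")

# Prior knowledge: a generic "room" frame supplies expectations (defaults)
# that more specific frames inherit or override.
room = Frame("room", walls=4, has_door=True)
kitchen = Frame("kitchen", parent=room, has_stove=True)
my_kitchen = Frame("my_kitchen", parent=kitchen, walls=5)  # overrides a default

print(my_kitchen.get("walls"))     # 5    (own filler overrides the default)
print(my_kitchen.get("has_door"))  # True (inherited default from "room")
```

The point of the structure is the one the paragraph describes: the specific frame need not store everything it “knows”, because expectations flow down from more general frames until overridden by experience.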

Birth of a new field

Artificial intelligence officially came into being as a discipline at a computer science conference held at Dartmouth College (New Hampshire, United States) back in 1956. The founders of the new field were John McCarthy of Stanford University, Allen Newell and Herbert Simon of Carnegie Mellon University, and Minsky himself, the group’s only surviving member.

Computers were just starting to show their power to handle previously unsuspected tasks, and the field was rich in every kind of promise. Minsky went so far as to affirm that “in one generation, the problem of creating ‘artificial intelligence’ will be essentially solved.” Although it would later become clear that the process was a lot more complex, artificial intelligence research has since yielded innumerable applications ranging from medical diagnosis, unmanned drones and intelligent robotics to a long list of expert systems that solve problems using the same approach as human specialists. The field also shares theoretical roots with the idea that computers should approximate the workings of the human brain, not the other way round, an idea which has led to developments facilitating intuitive communication with machines.

Fascinated since his student days at Harvard University by the workings of the brain and the emergence of its cognitive functions, Minsky has led the quest to endow computers with common sense. For, to borrow his own analogy, if a young child knows not to reuse a building block that is already in the tower, or that to drag an object by its string he has to pull rather than push, how can we teach a computer what seems so easy to the human brain? “We rarely recognize how wonderful it is that a person can traverse an entire lifetime without making a really serious mistake, like putting a fork in one’s eye or using a window instead of a door,” writes Minsky in one of his best-known works, 1985’s The Society of Mind.

In this book, Minsky expounds his mechanistic vision of how the human mind works, describing intelligence as the result of interactions among myriad non-intelligent parts. In its sequel, The Emotion Machine, he extends the theory to the realm of feelings and emotions, which he describes as simply the product of multiple layers of processes. Hence his comment in an interview that “emotions are just a specific way of solving problems. For instance, when you choose to be angry, it’s so you can solve a problem really fast.”
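
As a toy illustration of that thesis (and not Minsky’s own model), the Python sketch below wires together a few trivially simple “agents”, none of which is intelligent on its own, to reproduce the block-stacking behavior from his own analogy above. Every agent name here is invented for the example.

```python
# A toy rendering of the Society of Mind idea: each "agent" is a trivial
# procedure, and the block-stacking behavior emerges only from how they
# are wired together. All agent names below are hypothetical.

def see_blocks(world):          # perceptual agent: reports loose blocks
    return [b for b in world["blocks"] if b not in world["tower"]]

def choose(candidates):         # selection agent: just picks the first option
    return candidates[0] if candidates else None

def stack(world, block):        # motor agent: puts one block on the tower
    if block is not None:
        world["tower"].append(block)

def builder(world):
    """'Builder' merely coordinates simpler agents; no single part is smart."""
    while len(world["tower"]) < len(world["blocks"]):
        stack(world, choose(see_blocks(world)))
    return world["tower"]

print(builder({"blocks": ["a", "b", "c"], "tower": []}))  # ['a', 'b', 'c']
```

None of the four functions does anything resembling thought; the “know-how” lives entirely in their interaction, which is the book’s central claim in miniature.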

Implications and applications

This vision has immediate philosophical implications for computer science: building a human-equivalent intelligence is no utopian dream. After all, Minsky once quipped that the brain is a “meat machine”. In his essay “Why People Think Computers Can’t,” published in 1982, he writes: “when computers first appeared, most of their designers intended them for nothing except to do huge, mindless computations. That’s why the things were called ‘computers’. Yet even then, a few pioneers (…) envisioned what’s now called artificial intelligence or AI. They saw that computers might possibly go beyond arithmetic, and maybe imitate the processes that go on inside human brains. (…) Still ‘computer experts’ say machines will never really think. If so, how could they be so smart, and yet so dumb?”

Despite his initial optimism, the progress of AI has shown that it is easier to get a machine to solve complex operations and apply expert procedures than to make a medical diagnosis or behave with common sense. But in pursuing these goals, computers have become humanity’s first universal machine, with multiple capabilities beyond mere computation and applications in almost every area of our lives.

Minsky is also the mind behind inventions like the first neural network learning machine (SNARC), built in 1951; the first head-mounted graphical display, in 1963; and the confocal scanning microscope (patented in 1957), still widely used in biology for its ability to reconstruct 3D images.

A devotee of science fiction, “where you find really bright authors and great ideas”, Minsky was an advisor to Stanley Kubrick in the making of 2001: A Space Odyssey, and in fact came close during the shoot to being flattened by a falling piece of set. Asked why, forty years later, there are still no computers as intelligent as HAL, he insists that the problem lies in the lack of funds for research.

Minsky believes there is a feedback loop between our understanding of mind and machine: as we find more ways to make machines behave more sensibly, we will also learn more about our own mental processes. The loop will continue, he predicts in an essay, until we face the dilemma of whether or not to create machines more intelligent than ourselves. “We are fortunate to be able to leave that decision to future generations. No one can tell where we will get to, but right now one thing is sure: there’s something wrong with any claim to know, today, of any basic differences between the minds of men and those of possible machines.”

Information and Communication Technologies jury

The jury in this category was chaired by Georg Gottlob, Professor of Computer Science at the University of Oxford (United Kingdom), with Ramón López de Mántaras, Director of the Artificial Intelligence Research Institute of the Spanish National Research Council (CSIC), acting as secretary. Remaining members were Oussama Khatib, Professor in the Artificial Intelligence Laboratory of the Computer Science Department at Stanford University (United States); Rudolf Kruse, Head of the Department of Knowledge Processing and Language Engineering at Otto-von-Guericke-Universität Magdeburg (Germany); Mateo Valero, Director of the Barcelona Supercomputing Center (Spain); and Joos Vandewalle, Head of the SCD Division in the Department of Electrical Engineering at Katholieke Universiteit Leuven (Belgium).