Notes on Rodney Brooks' book
Robot: The Future of Flesh and Machine

Penguin Edition, 2002
Published in the USA under the title:
Flesh and Machines: How Robots Will Change Us

Notes by Pablo Nogueira

Updated: 30 April 2015

Disclaimer

These typewritten notes are personal reminders written to myself; they are not reviews, abridgements, or literary criticism. They may not make sense unless you've read the book.

Theme

The rationale for the title comes from one of the book's themes, namely, whether in the future super-intelligent robots will take over humans. For Brooks this proposition is not speculative science-fiction but a possible reality. His answer: there will be neither pure people nor pure robots. We have already embarked on the technological manipulation of our bodies [we might even embark on its biological and genetic manipulation], and we should expect further merges between human and machine. As a telling illustration, Brooks recalls in the preface his encounter with a colleague, a double leg amputee wearing bulky mechanical legs, whom he describes as half human, half robot.

Throughout the ages, human social and individual life has been affected drastically by technological revolutions, from the agricultural, civilisation, and industrial revolutions to the information revolution of today in which computers have made a big impact. ‘Disruptive technologies are those that fundamentally change some rules of the social games we live with. Napster is an easy example’ [p100].

Brooks predicts that machines and artificial creatures will impact further on civilisation and change the way we see ourselves in what he calls the robotics and biotechnology revolutions: artificial creatures will take over many of the tasks humans carry out today and machines and humans will integrate biologically. [p6-11]

In the first chapters, Brooks presents, with a bit of history, his particular approach to answering the question 'Can machines be intelligent if they can only do what they have been programmed to do?', which is basically the problem of AI.

Subsumption, Situatedness, and Embodiment

When we look at nature we find that animals don't have technology, despite the amazing constructions they are capable of: 'if the situations are not completely stereotyped, then the animals are unable to generalise their evolutionary built-in plan to accommodate the novel circumstances'. In the case of chimpanzees, tool invention and development is difficult, does not accumulate, and can be lost after generations. Finally, animals lack what Brooks calls 'syntax'. [p4]

For Brooks, an artificial creature has to be situated; it has to act in the world. Past artificial creations lacked spontaneity and did not respond to the environment. They were mere deterministic clockworks or automatons. The cause of this failure was not only flawed design but also the lack of proper physical technology. Most programs/machines of today 'do not have an ongoing existence that is tied into the flow of time. They are not situated. Rather, they are procedures that are applied to a current set of data and produce a result', like mathematical functions [p103].

Brooks cites William Grey Walter's Machina Speculatrix as the first case of artificial creatures that showed not only spontaneity but also autonomy and self-regulation. The author explains the source of their success in showing emergent behaviour, that is, behaviour that is not explicitly programmed:

even a seemingly very simple creature can have extremely complex behavior in the physical world because of the way that small variations in what is sensed, and how the actuators interact with the world, can change the actual behavior of the system … with just a few nonlinear elements … multiple elementary behaviours couple with the environment to produce behaviours that are more than simple strings or superpositions of the elementary behaviours. An observer finds it easier to describe the behaviours … in terms usually associated with ‘free will’ … rather than with detailed mechanistic explanations of the particular unknowable details of exactly what its sensors reported when.

This seems to be one way of breaking the straitjacket of deterministic automatism and of solving the (soft) AI problem. The response to the environment need not take place in ‘linear’ (i.e., fixedly pre-programmed) fashion, and complex behaviour may emerge from the interaction with the environment. [How does this relate to a set of fuzzy control rules where the input values match the premises to some degree and the consequents' output may overlap? Here we find a separation of control and data (i.e., there is no algorithm) plus an element of non-determinism if the firing rule is chosen randomly from the set of matching rules; see the sketch below.]
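To make the aside concrete, here is a toy Python sketch of my own (not from the book): rules whose premises match the input to a degree, with the rule to fire drawn at random among those that match. The membership functions and actions are invented for illustration.

    import random

    def near(d): return max(0.0, 1.0 - d)   # degree to which the obstacle is near
    def far(d):  return min(1.0, d)         # degree to which the obstacle is far

    RULES = [(near, "turn"), (far, "forward")]

    def fire(distance):
        # Control (the rule set) is separate from data (the input); several
        # premises may match at once, to different degrees, and the rule to
        # fire is drawn at random, weighted by its degree of match.
        matching = [(p(distance), act) for p, act in RULES if p(distance) > 0.0]
        return random.choices([act for _, act in matching],
                              weights=[deg for deg, _ in matching])[0]

    print(fire(0.4))  # "turn" with probability 0.6, "forward" with 0.4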

Brooks concludes by saying that it is difficult to implement a ‘theoretical physiological or neurological construct on a physically embodied system’, because details that are less relevant in the former become essential in the latter. [p17-21]

Initially, the field of robotics followed a more top-down approach in which the psychological and functional dominated over the physiological and neurological. The stereotypical case is Shakey, a robot that had an internal (and, by necessity, partial) representation of its environment, which it recalculated according to the actions it effected and the data it sensed. ‘Shakey used reasoning [calculation with representations] in situations where real animals have direct links from perception to action’. Shakey sat computing what to do while the world changed around it. [p23]

The lack of sufficient computational power seems not to be the reason for Shakey's failure if we look at the size and, therefore, relative computational power of an ant's brain. For Brooks, we must tackle the problem in bottom-up fashion: intelligence without representation first, ‘responding to the environment rather than contemplating its every move—sensory-motor loops rather than cognition’. In short, AI must escape from the lure of logic and reasoning, where the latter is understood as calculating with representations of the world. [p26]

Thus, what has traditionally been considered intelligence has to be left aside. Most definitions of intelligence—usually concerning terms like knowledge, understanding, learning, successful manipulation of one's environment, and attaining goals—are misleading in the sense that they cannot be pinned down into objective criteria useful in building an artificial creature. In the early days of AI, most researchers looked for intelligence in high-level problem solving that was more easily subject to formalisation, like playing chess or solving mathematical problems. Tasks that we seemingly carry out mechanically in the blink of an eye, such as those related to perception and action or even aesthetics, were considered second-rate. To Brooks, our high-level cognitive abilities are ‘all based on a substrate of the ability to see, walk, navigate, and judge … they arise from the interaction of perception and action’.

Just as he had done in the past, Brooks negated the central belief of a discipline: by removing the cognition box and directly linking perception and action, he found himself a fruitful research programme in which artificial creatures would ‘perform well’ with little computation, referring to the world via their sensors and with a minimal architecture for the perception and action boxes. [p36-40]

Instead of trying to come up with a general and definitive design, he relied on the principle that complex capabilities are built out of simpler ones, again from the bottom up. He would start with simple behaviour and add layers of new behaviour on top, with new layers subsuming or controlling older ones; thus, he thought, emulating the process of evolution. His initial subsumption architecture (avoidance of objects, aimless wandering, and purposeful exploration), embodied in the robot Allen, was further developed by him and his doctoral students, and later materialised in the success story of the ant-robot Genghis (see the sketch below). [p40-44]
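As a rough illustration of the layering idea, here is a minimal sketch of my own, with subsumption flattened into fixed-priority arbitration and made-up behaviours and thresholds; Brooks' actual architecture wires layers together with suppression and inhibition links rather than a priority scan.

    import random

    def avoid(sonar):
        # Layer 0: steer away when an obstacle is too close.
        return "turn-away" if min(sonar) < 0.3 else None  # None = abstain

    def wander(sonar):
        # Layer 1: drift in a random direction now and then.
        return "random-turn" if random.random() < 0.1 else None

    def explore(sonar):
        # Layer 2: head for a distant unexplored area.
        return "go-to-frontier"

    LAYERS = [avoid, wander, explore]  # lower index = more urgent

    def step(sonar):
        # The first layer that emits a command wins; there is no world
        # model and no planner, only a sense-act loop.
        for behaviour in LAYERS:
            command = behaviour(sonar)
            if command is not None:
                return command
        return "stop"

    print(step([0.9, 0.2, 0.8]))  # an obstacle is close: "turn-away"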

Brooks provides an enlightening and detailed description of Genghis' internals in the appendix. Its architecture is described in its entirety by 51 rather small Augmented Finite-State Machines (AFSMs), which give meaning to Brooks' dictum that although software is not life-like, rightly organised it can produce life-like behaviour:

Genghis had no internal notion of toward, of forward, or away. That was all embedded in the interactions of its sensors and actuators, mediated by the very simple, mindless AFSMs …

The architecture consisted of small software components that, put together, brought an inanimate object to life. It was an example of simple components producing complex, emergent behaviour through their interaction. The observable intentional behaviour was not reflected in the computations of the AFSMs, in which no notion of intention was represented or explicitly programmed. [p46-50]
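My own guess at the flavour of an AFSM, with a hypothetical leg-lifting behaviour: an ordinary state machine augmented with registers that latch incoming messages and an alarm-clock timer that can force a transition.

    import time

    class AFSM:
        def __init__(self):
            self.state = "leg-down"
            self.registers = {}    # latest message received on each input wire
            self.deadline = None   # the "alarm clock" augmentation

        def send(self, wire, value):
            self.registers[wire] = value

        def step(self):
            now = time.monotonic()
            if self.state == "leg-down" and self.registers.get("lift"):
                self.state, self.deadline = "leg-up", now + 0.5
            elif self.state == "leg-up" and now >= self.deadline:
                self.state = "leg-down"  # timer expired: put the leg back down
            return self.state

    leg = AFSM()
    leg.send("lift", True)
    print(leg.step())  # "leg-up"; half a second later, step() yields "leg-down"

The machine itself is mindless: 'toward' or 'away' appear nowhere in it, only states, messages, and deadlines, in line with the quote above.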

Brooks' robots were based on the principles of situatedness and embodiment. A situated creature is embedded in the world and deals with it through sensors and concomitant action. An embodied creature has a physical body and experiences the world through that body. Many philosophical questions remain unanswered by this definition, for example: when does sensing become perception, and when do sensors and actuators become bodies? In short, how does the system scale up? But no one can deny Brooks' originality and courage in tackling the problem from a novel and pragmatically realistic perspective. [p51-52]

On page 53, Brooks mentions Toto, a robot that represents its world indirectly by adjusting its AFSMs. This resembles the learning process taking place in neural networks, with no separation between using and being the representation.

In the following pages, Brooks tells the story of how his ideas led him to the ‘Cheaper, Faster, Better’ or ‘Fast, Cheap, and Out of Control’ hype (i.e., the fast development of small, cheap, and autonomous interplanetary robots that needed little control from earth) which materialised in a project to send a robot to Mars.

Brooks recalls that embodied intelligence can be noticed in everyday language, which metaphorically reflects our bodily interactions. Examples are ‘warmth’ in relation to emotions, ‘big’ in relation to importance, ‘ahead’ and ‘behind’ in relation to time. This tenet has been argued at length by philosophers like George Lakoff and Mark Johnson.

However, he warns, there is a danger that his situated and embodied robots bear a superficial and non-functional relationship to real people, like a wooden gun does to a real gun: ‘Perhaps we have left out too many details of the form of the robot for it to have similar experiences to people’. [p67-68] Indeed.

Perception and Behaviour

On page 72 he recalls Isaac Asimov's three laws of robotics and how the well-known sci-fi writer drew some of the themes of his novels from their possible inconsistencies and contradictions. Brooks then moves to the topic of computer vision, which he deems a hard nut to crack. Animals and humans seem very good at it. For some time, some AI experts underestimated the difficulties of vision and considered the field a matter of input-output (Brooks makes an explicit reference to Marvin Minsky).

In the remainder of Chapter 4 he delves at length into the bases of seeing, perceiving, and behaving, which are interrelated. For example, seeing cannot be understood without understanding social interactions. It is an active process in which our eyes get scraps of information about the world, although we seem to consciously perceive a stable panoramic view of it, with the brain filling in lots of details. Brooks describes some of the anatomical particularities of our eyes: most of the receptors are located in the region called the fovea; we can only perceive motion in the periphery (so the assumption is that we must move our heads); we have a blind spot, due to bad design, of which we are mostly unaware; we move our eyes in discrete saccades, not in the swift continuous movement we seem to perceive (though we are capable of involuntary smooth pursuit when tracking passing objects); etc. People perceive only the information relevant to some task or purpose, and are terrible at detecting small unexpected changes in the world; for instance, we usually have to be told about blunders in movies, like an actor's hair changing within the same sequence. [p75-84]

Ascribing Beingness

When do we grant a machine the status of being? We are comfortable granting our pets beingness even if we debate whether they are conscious. ‘They certainly all seem to have feelings. Our dogs display fear, excitement, and contentment. It is more debatable whether they show gratitude’ [p150]. Is an animal running away because of fear or as a result of a stimulus that triggered a response (the Skinnerian stance)? Is fear just a biological manifestation of this stimulus and response? Whatever the answer, animal feelings seem visceral, like the kind of feelings we have in similar situations. We consider creatures with feelings as having some level of beingness and grant them rights accordingly. Every individual seems to have some tacit threshold for granting beingness to a creature, and treats it with different ethical care accordingly. For example, we use insecticides for bugs and traps for mice, but would use a different method for cats or dogs.

We grant beingness to a creature when it carries itself in a way that allows us to ascribe intentions and emotions to it. However, our ascriptions could be based on flawed judgement: scientific studies usually deny the reasoning abilities that pet owners unanimously find in their animals. Brooks recounts that many owners of Sony's robot AIBO ascribed to it abilities the company categorically denied were possible. [p152-3]

Emotions

According to neurologist Antonio Damasio in The Feeling of What Happens, emotions in humans arise from primitive parts of the nervous and limbic systems. ‘These structures receive inputs from many parts of the brain's perceptual subsystems, and at the same time innervate both primitive motor sections of the brain and the more modern decision-making and reasoning centers’. Emotions thus play ‘more intimate roles in all of the high-level decisions that we tend to think of as rational and emotionless’. [p156-157]

Brooks and his colleagues have embarked on the construction of robots with systems that model emotions, examples of which are Kismet and the toy My Real Baby. He asks himself when a model or a simulation is not the real thing. Should we decide based on operational criteria?

[A creature can appear to have emotions even though it is actually simulating them at a functional level. At bottom, we don't know how real emotions come about, so judgements based on operational criteria are wrong not only because such criteria might deceive us (the AIBO example above) but because they drive us away from the real question of what real emotions are and how they work.] [cf. Turing Test Rant]

Specialness

However, Brooks shifts the point in an interesting direction: we don't say planes simulate flight because they don't flap their wings. They fly. ‘Is there something deeper in the semantics of the word "emotion" as contrasted to "flying" …?’ [p154-158] [Perhaps one way to answer would be that the semantics of emotion are not understood whereas the semantics of flying are. It is simply better and technologically easier for planes to fly the way they do]

Brooks develops his answer in the subsequent pages. First, he points out that many people refuse to ever grant or ascribe any human quality to a machine because they believe humans are special. Brooks questions this belief. He attacks anthropocentric prejudices and blind allegiance to a closed system of thought, be it religious or scientific, ‘where faith and a limit on questioning … releases us from having to search further … provides a satisfaction and an ease …’ and keeps people in ignorance by means of clever subterfuges that dismiss adverse and discordant ideas or evidence.

[This argument is all very nice, but the opposite scenario is ridiculous: intelligence (or whatever we call it) should not be ascribed to programs/machines by induction on circumscribed operational success. What's in question is the conditions for the attribution of success. Newell and Simon didn't hesitate to dub their General Problem Solver ‘intelligent’. The program was indeed a success, but not of the sought kind. Jumping from GPS to a general theory of Human Problem Solving resembles jumping from the operant conditioning of laboratory rats to a theory of Human Behaviour. A bit of realism is needed, but that is hard to bear in mind when competing for research grants. This is a fine point raised by Weizenbaum [see below].]

For Brooks, our deep-seated prejudices arise from the emotional wishful thinking that blocks our reasoning. [But we must judge after we understand. The problem is that often in science, and particularly in AI, the notion of overwhelming or irrefutable evidence hangs by a thread.]

Brooks recounts a bit of the history of ideas that challenged established beliefs, such as Galileo's affirmation of the Earth's place in the solar system, Einstein's relativity, Heisenberg's uncertainty principle, and Darwin's evolutionary theory, which for Brooks is true even if ‘details here and there may be fuzzy, but the central outline is undeniable’ [!]. He does not spare scorn for American creationists:

Why do they cling to their flat-Earth-like beliefs? They are afraid of losing their specialness. For those whose faith is based on literal interpretation of documents written by mystics thousands of years ago when just a few million people had ever lived, the facts discovered and accumulated over thousands of years by societies of billions of people are hard to accommodate.

I cannot help recalling Gabriel Zaid's words: ‘humans look for solace, not truth’.

According to evolution, humans are not special in a purposeful or divine way. They are the way they are because of random chance and selection. The discovery of DNA has also demoted our status; its study has shown that we are made of the same material as any other living being and ‘share 98 percent of our genome with chimpanzees’.

[p159-164]

Now, back to the question of thinking machines:

Are we more than machine, or can all our mental abilities, perceptions, intuition, emotions, and even spirituality be emulated, equated, or even surpassed, by machines? [p165]

But careful: to emulate and to equate are two different things, and here lies the trouble.

On Weizenbaum and Dreyfus

Brooks enters the debate by reviewing and criticising the positions of these thinkers and stating his own.

Of Joseph Weizenbaum's Computer Power and Human Reason, he says its author ‘fled from the notion that humans were no more than machines. He could not bear the thought. So he denied its possibility outright’. To Brooks, Weizenbaum's aim is to discredit the idea of an intelligent machine without a rational argument. This is an unfair and terrible oversimplification and misconception of Weizenbaum's book.

He is slightly more condescending toward Hubert L. Dreyfus, who for Brooks is right in that ‘people operate in the world … intimately coupled to the existence of their body in that world’, but wrong in his unsupported assertion that the as yet unformalised aspects of intelligence cannot be mechanised in principle [cf. Dreyfus, What Computers Can't Do]. Many of those aspects have been mechanised in one way or another, demonstrating that algorithmically described processes can show intelligence. Dreyfus can only make use of philosophical ammunition, and his ‘unsupported assertions’ are tentative. Of course, arguing that something is not possible just because nobody has an inkling of how to attain it is not a valid argument, as Brooks rightly points out. One wonders if overwhelming arguments can be found against the otherwise partially undecidable empirical stance. For Brooks, these arguments are not valid, and emotions are our only bastion of specialness, for now. [cf. Turing Test Rant]

[p166-171]

In Principle We Are Machines

Brooks starts Chapter 8 by taking a reductionist stance:

A central tenet of molecular biology is that that is all there is … [and] biomolecules interact with each other according to well-defined laws … [the] body consists of components that interact according to well-defined (though not all known to us humans) rules that ultimately derive from physics and chemistry … We are much like the robot Genghis, although somewhat more complex in quantity but not in quality … The idea that we are machines seems to make us have no free will, no spark, no life. But people seeing robots like Genghis and Kismet do not think of them as clockwork automatons. They interact in the world in ways that are remarkably similar to the way in which animals and people interact. To an observer, they certainly seem to have wills of their own. [p172-174]

How can the body's components interact according to well-defined yet unknown rules? This is a statement of faith. I think Brooks' argument leaps in similar ways to Skinner's.

Brooks points out that seeing people as machines does not mean we have to treat them like we treat a toaster. We can hold inconsistent sets of beliefs, like a religious scientist, that make us accept emotional and intelligent machines as beings. We have to get less rational when judging machines, just like we are when judging people. We are machines and we are intelligent and have emotions and consciousness, so in principle, we can build other machines that are intelligent, have emotions, and have consciousness, even if they are of a different sort [non-sequitur]. What makes us equal is the status of being machines. [p176]

[The whole point is sniffy. ‘In principle’ we are machines. (Let's pass over the ‘in principle’ proviso, although it carries weight and deserves an argument.) Whether we like it or not, the fact is this is a statement of faith, an unproven conjecture based on partial information. Then, we're told that denying this ‘fact’ puts us in the class of retrogrades against reason and progress. And finally, we must act as if this fact weren't true and hold inconsistent sets of beliefs, get ‘less rational’.]

The ‘New Stuff’ in Searle, Penrose, and Chalmers

In Brooks' opinion, Roger Penrose, David Chalmers, and John Searle are materialists who oppose the view that humans are machines by inventing new stuff that we have and machines don't. Penrose and Chalmers could be right, but they only put forth unsupported hypotheses.

Penrose's argument is basically the Gödelian argument against a precise notion of machine, namely, a universal Turing machine. He conjectures that the missing piece that explains consciousness lies in the quantum mechanics of the brain. Brooks does not contest Penrose's use of the Gödelian argument and limits himself to criticising what he calls Penrose's creation of a new deity. Similarly, Chalmers appeals to the existence of something mysterious and not understood.

Searle believes that the causal powers of the brain cannot be simulated by a machine. Brooks summarises Searle's Chinese Room argument as follows:

His arguments are largely of the form of imagining robots and computer programs that have the same input-output behaviours as animals or people, and then claiming that the (imagined) existence of such things proves that mental phenomena and consciousness exists as a property of the human brain, because the robot has neither of them, although it is able to operate in the same way as a human … deep down [Searle] does not want to give up the specialness of being human. [p178-179]

Brooks' counterargument is close to the ‘Systems Reply’: the operator does not understand the rules, the whole system does. And it does not matter how the system is physically constructed; it could be made of silicon or beer cans.

For Brooks, all these arguments about specialness are driven by emotions and beliefs. Surely he is aware that, by the same token, his arguments also are. I think that appealing to people's motives or agendas when discussing their arguments is somewhat risky; we should stick to the arguments themselves. [p180]

In the following pages Brooks is humble about the merits of his machines and deliberately proposes his own new stuff without supporting evidence. Direct attempts to tackle the problem of general intelligence have failed, and so have indirect attempts based on simulations of evolution in A-Life: ‘somehow, a glass ceiling was soon hit’ [p182]. ‘The problem is that we do not really know why they do not get better and better, so it is hard to have a good idea to fix the problem’ [p184]. [Perhaps it is because tinkering with settings and knobs until it works is more engineering than science.]

On the same page Brooks lists a few possible explanations: (1) a few parameters (knobs) might be wrong in the systems; (2) the artificial environment is too simple; (3) there is not enough computer power (which made a difference in the case of chess playing); or (4) we are missing something in our model of biology and evolution, that is, there might be some new stuff we need. However, unlike other thinkers, Brooks believes this stuff is ‘staring at us in the nose, and we just have not seen it yet’ [p187]. He draws an analogy with the mathematics of computation: ‘Computation was a gentle, non-disruptive idea, but one that was immensely powerful … I am convinced that there is a similarly, but different, powerful idea that we still need to uncover …’

Coda

I want to conclude this summary with one of Brooks' naive assertions:

the emotion-based intelligent systems for robots situated in the real world are plugging ahead. These robots will not hate us for what we are, and in fact will have empathetic reactions to us. [p202]

The premise is that emotions are essential to intelligence, but I don't see how being empathetic toward us follows from robots having emotions.

Brooks states on page 202 that we are still far away from building machines that outsmart us and that are out of our control. I guess he means machines that ‘generally’ outsmart us, as there are machines/programs that outsmart us at many tasks. As for ‘out of control’, he is one of the pioneers of autonomous robots. What worries me more is that we already have programs that are not intelligent but are out of our control. Weizenbaum already called attention to this fact with his ‘Incomprehensible Programs’.