AI = Alchemy of Intelligence

Pablo Nogueira

Updated: 6 July 2012

The effort of using machines to mimic the human mind has always struck me as rather silly. I would rather use them to mimic something better [...] In the long run I expect computer science to transcend its parent disciplines, mathematics and logic, by effectively realizing a significant part of Leibniz's Dream of providing symbolic calculation as an alternative to human reasoning. (Please, note the difference between ‘mimicking’ and ‘providing an alternative to’. Alternatives are allowed to be better.)

Edsger W. Dijkstra, On the Cruelty of Really Teaching Computing Science
Communications of the ACM, Volume 32, Number 12, 1989

Research in Artificial Intelligence (AI) has produced marvellous technical results and insights about the concept of intelligence and its relation to computation. The short-term goal of developing programs that can do things agreed to require intelligence has largely been achieved.


Developing programs capable of general intelligence remains beyond current research. The reasons for failure may be diverse. It may be that integrating aspects of intelligence is hard. It may be that intelligence is computational but the theory of computation is lacking, for example, the computational principles of unknowns such as awareness. It may be that special hardware is required. It may be that humans play chess using the same general intelligence required to recognise faces, whereas programs do not. It may be that intelligence is not computational after all. It may be that intelligence itself is the wrong concept and the problem lies in our intuitions and our language. Not even the starting points are clear.

There are programs whose behaviour is deemed intelligent and there are programs whose behaviour is deemed unintelligent. Where does the difference lie? In both cases the measure is behaviour, but is behaviour alone enough to pass judgement?

Hiding behind an empirical stance (cf. Newell and Simon's ‘physical symbol system hypothesis’) smells of alchemy. The assumption is that the principles have been laid out and are unchanging. Intelligence will be shown by the right program. Researchers ‘only’ need to find or construct that program and the theories that will allow such a program to be constructed. Medieval alchemists sought the right kind of chemical process to transmute base metal into gold. They never found it: their chemistry was not developed enough, and transmutation turned out to be a poor concept.

[Cf. Turing Test Rant]

I regard the following as fundamental:

  1. The underestimated importance of concrete architecture. It is not enough to propose unsubstantiated theories or thought experiments that border on scholasticism. Cognitive scientists, philosophers, and AI researchers must put forth constructive theories, where constructive is used in its mathematical sense. We must answer the what and the how.
  2. The importance of awareness and its role in understanding. Awareness is to me a key aspect of intelligence, and I see learning as somewhat embedded in it. Awareness seems to escape the computational model that views intelligence as the consequence of a many-levelled symbol-shoving process. (Gödel-like self-referential systems are a clever yet unsubstantiated explanatory conjecture.)

Perhaps it is because of our partial understanding of these two points that most philosophical debates on foundations end up in questions of principle. For example, the Chinese Room argument and its Systems Reply.

[Cf. The Chinese Room]