Tuesday, February 07, 2006

Douglas Hofstadter, in the foreword to "Gödel's Proof" by Ernest Nagel and James R. Newman


When Nagel and Newman were composing Gödel's Proof, the goal of getting computers to think like people - in other words, artificial intelligence - was very new and its potential was unclear. The main thrust in those early days used computers as mechanical instantiations of axiomatic systems, and as such, they did nothing but churn out proofs of theorems. Now admittedly, if this approach represented the full scope of how computers might ever in principle be used to model cognition, then, indeed, Nagel and Newman would be wholly justified in arguing, based on Gödel's discoveries, that computers, no matter how rapid their calculations or how capacious their memories, are necessarily less flexible and insightful than the human mind.
But theorem-proving is among the least subtle of ways of trying to get computers to think. Consider the program "AM," written in the mid-1970s by Douglas Lenat. Instead of mathematical statements, AM dealt with concepts; its goal was to seek "interesting" ones, using a rudimentary model of esthetics and simplicity. Starting from scratch, AM discovered many concepts of number theory. Rather than logically proving theorems, AM wandered around the world of numbers, following its primitive esthetic nose, sniffing out patterns, and making guesses about them. As with a bright human, most of AM's guesses were right, some were wrong, and, for a few, the jury is still out.
For another way of modeling mental processes computationally, take neural nets - as far from the theorem-proving paradigm as one could imagine. Since the cells of the brain are wired together in certain patterns, and since one can imitate any such pattern in software - that is, in a "fixed set of directives" - a calculating engine's power can be harnessed to imitate microscopic brain circuitry and its behavior. Such models have been studied now for many years by cognitive scientists, who have found that many patterns of human learning, including error making as an automatic by-product, are faithfully replicated.
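The idea that a "fixed set of directives" can nonetheless learn from experience, errors included, can be made concrete with a toy illustration. The sketch below is not from the text and involves no claim about Hofstadter's examples: it is a minimal single-neuron perceptron (a hypothetical `train_perceptron` helper) that learns the logical OR function by adjusting its weights whenever it makes a mistake.

```python
# Illustrative sketch (not from the text): a single artificial neuron,
# itself just a fixed set of arithmetic directives, that learns the
# logical OR function from examples by correcting its own errors.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights, adjusted by experience
    b = 0.0          # bias term
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out          # mistakes drive the learning
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Truth table of logical OR as training data.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

The point of the sketch is only that error-making is built into the procedure: early in training the neuron answers wrongly, and each wrong answer is the very signal that reshapes its weights, much as the cognitive-science models described above replicate human error patterns as an automatic by-product of learning.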
The point of these two examples (and I could give many more) is that human thinking in all its flexible and fallible glory can in principle be modeled by a "fixed set of directives," provided one is liberated from the preconception that computers, built on arithmetical operations, can do nothing but slavishly produce truth, the whole truth and nothing but the truth. That idea, admittedly, lies at the core of formal axiomatic reasoning systems, but today no one takes such systems seriously as a model of what the human mind does, even when it is at its most logical. We now understand that the human mind is fundamentally not a logic engine but an analogy engine, a learning engine, a guessing engine, an esthetics-driven engine, a self-correcting engine. And having profoundly understood this lesson, we are perfectly able to make "fixed sets of directives" that have some of these qualities.
To be sure, we have not yet come close to producing a computer program that has anything remotely resembling the flexibility of the human mind, and in this sense Ernest Nagel and James Newman were exactly on the mark in declaring, in their poetic fashion, that Gödel's theorem "is an occasion, not for dejection, but for a renewed appreciation of the powers of creative reason." It could not be said better.
