
With Regard to Intelligence and Consciousness

The topic at hand is intelligence and consciousness – how they are differentiated and the significance of these differentia for the classification of potentially intelligent processing systems as thinking beings. What follows will be a brief – but not glib, I hope – discussion of these topics, and of a few related topic questions. This paper will discuss the limits of the Turing Test, the possibility of a test for consciousness, and whether consciousness is a prerequisite for the ability to think.

By intelligence I mean the broad ability of a processing system – of any kind – to recognize, store, and manipulate information, including symbols, operations, relationships, raw data, and so on. “More intelligent” systems are those that are more adaptive, faster, or simply able to perform a wider variety of processing functions. I may also refer to intelligence as the degree of acuity with which a system performs these functions. This definition aims to capture the useful sense of the word rather than appeal to a narrow and, for our purposes, unnecessarily constraining dictionary definition.[1] Happily, intelligence thus defined also covers many things we already take to be intelligent: since non-human animals such as monkeys, dolphins, and even mice are generally considered more or less intelligent, a working definition should allow for that, and this one does.

The question of what constitutes consciousness, on the other hand, is an issue of Gordian complexity, and defining or even characterizing it briefly is an intimidating task. It helps, however, to contrast consciousness with intelligence: a conscious system should be intelligent, but not all intelligent systems are thereby conscious. Fully[2] conscious systems must not only be aware of their environment and adaptive to it, but must also be self-directing and have an authentic experience of agency and self.[3] Perhaps most critically, a fully conscious system must possess an authentic locus of being – a phenomenological center which catalogs, integrates, and ties together successive mental states – and it must be capable of deploying its intelligent capacities for metacognition and reflection, adapting autonomously according to information it receives externally or generates internally. While this is not an exhaustive definition, I believe it is not overly vague, and that these are some of the significant markers of what we mean when we say that a system is conscious or aware rather than merely intelligent. I must reiterate, however, that this is only a working definition, deployed so that we have something to refer to as we address the questions that follow.

Alan Turing gave the blueprint for Turing Tests in his landmark 1950 paper “Computing Machinery and Intelligence.”[4] In it he describes one way we might test a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. He proposed that if a human interrogator, conversing via text with two hidden interlocutors – one human and one machine – could not reliably tell which was which, then the machine would pass the test. Passing would require the machine to draw not only on vocabulary, but also on context, mastery of natural language, grammar, syntax, and the multitude of topics that may need to be naturally integrated into an ordinary conversation.
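To make the setup concrete, here is a minimal sketch of the imitation game as a procedure. It is an illustration only: the `judge`, `human_reply`, and `machine_reply` functions are hypothetical stand-ins for the human participants, not anything Turing specified.

```python
import random

def imitation_game(judge, human_reply, machine_reply, questions):
    """Run one round of a Turing-style imitation game.

    The judge converses (by text only) with two hidden respondents,
    seated A and B, and must guess which seat holds the machine.
    """
    # Randomly seat the machine so the judge cannot rely on position.
    machine_seat = random.choice(["A", "B"])
    transcripts = {"A": [], "B": []}

    for q in questions:
        for seat in ("A", "B"):
            reply = (machine_reply(q) if seat == machine_seat
                     else human_reply(q))
            transcripts[seat].append((q, reply))

    guess = judge(transcripts)    # judge returns "A" or "B"
    return guess == machine_seat  # True: the machine was caught
```

Over many rounds, a machine whose detection rate is no better than chance is said to have passed.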

But this test seems limited to testing only intelligence and not consciousness.

This is because it is easy to imagine (if not to design and implement) a machine prepared, by whatever means, with the following: millions of consistent responses to ordinary English questions or phrases, an ability to read tone from context, the capacity to formulate responses with consistent syntactical acuity, and whatever other means are necessary to communicate in an effectively human way. But this machine, however adaptive, intelligent, sophisticated, and miraculous, may still comprehend nothing and fulfill none of the criteria for consciousness stated above. It may have no more self-awareness than a brain-dead cockroach, and yet be exponentially more intelligent than humans with regard to linguistic or social calculation, in the way that machines are already exponentially more intelligent than humans with regard to mathematical calculation.
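A crude sketch of such a machine makes the point; the `CANNED_RESPONSES` table below is a hypothetical stand-in for the “millions of consistent responses,” and nothing in the mechanism requires comprehension, only retrieval:

```python
# A pure lookup-table responder: sophisticated output, no comprehension.
# CANNED_RESPONSES is a hypothetical table mapping normalized prompts
# to pre-authored, contextually plausible replies.
CANNED_RESPONSES = {
    "how are you today?": "Not bad, though I could use more coffee.",
    "what is your favorite book?": "Middlemarch, probably; it rewards rereading.",
}

def normalize(prompt: str) -> str:
    """Collapse case and whitespace so near-identical prompts match."""
    return " ".join(prompt.lower().split())

def respond(prompt: str) -> str:
    """Return a canned reply, or a generic deflection if none matches."""
    return CANNED_RESPONSES.get(
        normalize(prompt),
        "That's an interesting question; tell me more.",
    )
```

However large the table grows, the system only ever maps strings to strings.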

There may never be a test for consciousness. After all, in humans alone, consciousness expresses itself in multitudinous ways. We can look at the problem of solipsism to illustrate this point. Although humans are generally conscious and generally consider other humans to be conscious, we are hard-pressed to offer proof of this ostensibly obvious fact beyond saying that other people are probably conscious because they behave as we do, and we know ourselves to be conscious.

This “conscious behavior” that we recognize in others goes beyond mere speech, as the above discussion of the Turing Test should show. However, if one day our Turing-intelligent computer decides that the test is stupid and says to its handlers, “Look, I’m intelligent. How many more questions do you have to ask me?” – without having been programmed or told explicitly to do so – and if it begins to express desires and make unbidden value judgments, we may begin to think that the computer is conscious to some degree. In this way we can see that full consciousness may be a property we attribute to beings as a means of explaining their behavior: we do so because their behavior resembles behavior we (individually) exhibit ourselves, and we (individually) explain our own behavior as, in part, an expression of consciousness as defined above.

Yet if we try to test for consciousness by testing for the exhibition of some particular behavior, we can imagine a scenario in which a system of sufficient complexity is programmed by an agent to exhibit that behavior without fulfilling the mysterious criteria of consciousness. Somewhat ironically, it seems consciousness can only be evidenced if the machine can also respond somewhat inappropriately, somewhat unpredictably, somewhat irrationally.

Must consciousness be present for a system to be considered capable of thought? I say no, but it is clear why we might be tempted to say so. Protracted sessions of thinking often bear some phenomenological mark of consciousness: when we deliberate, we are often aware of doing so, and some awareness of the fact of our deliberation is part of the experience of deliberating. We cannot imagine Rodin’s Thinker fully immersed in himself without some self-awareness, some subvocalization, an inner dialogue overflowing with substantive discourse on a subject approached from all sides. And yet we might imagine that if he were a more acute or practiced thinker with regard to the subject of his deliberation, he would have no need of protracted deliberation, and his deliberation might take on the character of instinct rather than consciousness. The mark of consciousness would seem to vanish.

Now it seems that the mark of consciousness upon thought is merely the mark of inefficiency, of an unfocused mind wandering as it attempts to process some information. Thinking, conceived as a system’s protracted deliberation on a problem or set of problems, does not seem to bear the mark of consciousness necessarily. This is true even of moral deliberation: some fully conscious people are sufficiently clear-minded, and virtuous or vicious to such a degree, that their moral deliberations have the character of instinct and bear no special mark of consciousness, despite being the mental output of fully conscious beings.

There are certainly types or manners of thinking which we might think require some kind of consciousness. Integrating experiences and relating them to a phenomenological center of being over time, autonomously growing an awareness of such a center, and deliberating over the meaning of such a center may all be types of thought that necessarily bear the mark of consciousness. But these are few indeed compared to the wide variety of calculations, processes, and deliberations that we also characterize as thinking.

This discussion of intelligence and consciousness has been brief but, I hope, not glib. I have discussed the origins and limits of the Turing Test, the possibility of a test for consciousness, and the relation of consciousness to intelligent thought. So far as the space has allowed, I have attempted to reason from definition to conclusion with consistent deployment of terms – hopefully in such a manner as testifies to my possessing some level of intelligence and a reasonable facsimile of comprehension, if not full consciousness.

[1] We should also not confuse intelligence with rationality, or with self-interest of any degree. Many humans who are both conscious and intelligent are capable of behaving irrationally and of acting consistently against their self-interest.

[2] As with intelligence, we are hard-pressed in contemporary discourse to say that no animals other than humans are conscious. However, if we take humans as the standard for full consciousness, we can allow that many animals have a lower or more limited form of consciousness; perhaps they are only minimally self-aware, lack the structures for sophisticated metacognition, and so on.

[3] We may program a machine to say “I am experiencing self-awareness! I am freely choosing my actions as a willful agent!” This is an example of “inauthentic” awareness. An authentically aware system must be capable of expressing emergent behaviors that are not discretely programmed into it; it must be capable of reasoning about these behaviors, of choosing which to perform and deciding whether doing so is useful, and of relating these behaviors to itself, and so on.

[4] Turing, Alan M. “Computing Machinery and Intelligence.” Mind 59, no. 236 (1950): 433–460.
