Syntax and Semantics: The Future as a Way Out of the Chinese Room
In this essay I will provide an
analysis of John Searle’s ‘Chinese Room Argument’ (CRA), unpack two notable replies
the argument has faced, and provide my own reply. My analysis will show that
Searle intends the argument to be a knockdown argument against what he calls ‘strong
AI’ insofar as mere symbol manipulation is not sufficient for understanding. I
will feature as objections to the CRA the ‘robot reply’ and the ‘combination
reply’; both of which are raised by Searle in his 1980 article Minds, Brains, and Programs. I will
conclude the paper by considering the question of whether syntax is sufficient
for semantics, and argue that a program could fulfill the sufficient conditions
of understanding purely by virtue of syntax insofar as the form of
understanding could be seen as a complex but implementable logical form.
John Searle begins Minds, Brains, and Programs by asking
what philosophical significance ‘recent efforts at computer simulations of
human cognitive capacities’ possess; how do these efforts stand in relation to the
concerns of the philosophy of mind? (Haugeland 183) In order to properly assess
this question, Searle makes a distinction between what he calls ‘strong’ AI and
‘weak’ AI. While ‘weak’ AI, Searle claims, is the position that computers serve
as ‘very powerful tool(s)’ for approaching the study of the human mind,
‘strong’ AI is the position that computers are not simply tools, but, given the
right program, ‘can be literally said to understand and have other cognitive
states’. (183) While Searle claims to not have qualms with the weak position,
he does take issue with the strong position insofar as it claims computers have
cognitive capacity and that the programs that instantiate these are
explanations of human cognition; Searle’s arguments will only address the
‘strong’ AI thesis.
In an effort to show that his
argument is not a sophisticated straw-man argument, Searle provides real
examples of what he claims are instantiations of the ‘strong’ AI mentality in
the work of Roger Schank of Yale University. Searle describes the goal of
Schank’s program as an attempt to ‘simulate the human ability to understand
stories’. (183-184) Schank attempts to bring this goal to fruition by creating
a program that, given a story, can provide, as a human would, a yes or no
answer to a question posed about that story. In the same way one would tell a story to a human being and then ask questions to which he or she would give appropriate responses, Schank hopes to provide the computer with a
‘representation’ of the information a human being would have. The computer
would then ‘print out answers of the sort that we would expect human beings to
give if told similar stories’. (184) Searle posits that the proponents of
‘strong’ AI hold these two claims to be true:
a) The machine can literally be said to understand the story and provide answers to questions; and
b) What the machine and its program do explains the human ability to understand the story and answer questions about it. (184)
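To make the target of Searle’s criticism concrete, the following is a minimal sketch in Python, assuming a toy ‘restaurant script’ of my own invention rather than Schank’s actual system, of how a script-based program might produce yes or no answers by consulting a stored representation of background expectations rather than anything it understands:

# A hypothetical, greatly simplified sketch of a script-based question answerer.
# The 'script' encodes default expectations a program can fall back on when the
# story itself leaves a fact unstated; it is an illustration, not Schank's code.
RESTAURANT_SCRIPT = {
    "customer orders food": True,
    "customer eats food": True,
    "customer pays bill": True,
}

def answer(story_facts, question_fact):
    # Answer 'yes' or 'no' from the story's stated facts first,
    # then from the script's default expectations.
    if question_fact in story_facts:
        return "yes" if story_facts[question_fact] else "no"
    return "yes" if RESTAURANT_SCRIPT.get(question_fact, False) else "no"

# A story in which a customer orders food but leaves without paying.
story = {"customer orders food": True, "customer pays bill": False}
print(answer(story, "customer pays bill"))  # 'no'  (stated in the story)
print(answer(story, "customer eats food"))  # 'yes' (supplied by the script, not the story)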
Searle hopes to
demonstrate by virtue of a ‘Gedankenexperiment’[1]
that Schank’s position, which is illustrated by the two claims stated above, is
not in fact supported by his experiments.
Searle will attempt to show that even he, under the very conditions to which Schank subjects his computers, would not understand the story; thus enters
the CRA.
Searle begins the thought experiment by asking the reader to imagine
Searle himself locked up in a room. Searle is given three ‘batches’: the first, a set of Chinese symbols; the second, more Chinese symbols together with rules written in English that serve to ‘correlate one set of formal symbols with another set of formal symbols’; and the third, a further set of Chinese symbols along with English rules for matching symbols from this final batch with those of the first two batches.
(185) Searle then wants the reader to
imagine that the people giving him these batches are calling the first batch
‘script’, the second batch ‘story’, and the third batch ‘questions’. Furthermore,
when Searle gives symbols back they call these ‘answers to the questions’. Eventually Searle becomes quite the adept symbol
manipulator and the people outside the Chinese Room begin to assume that he
understands Chinese. However, Searle knows that he doesn’t understand a word of
Chinese; Searle might understand the rules in English, but in terms of the
Chinese letters all he does is manipulate symbols he doesn’t understand: he is merely computing; ‘I
am an instantiation of a computer program’. (186)
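The force of the thought experiment can be made vivid with a minimal sketch, assuming a toy rule book of my own devising: the program below correlates input symbols with output symbols purely by their shapes, and nothing in it refers to what any symbol means, yet it still hands back ‘answers to the questions’.

# A minimal sketch of the Chinese Room as pure symbol manipulation.
# The rule book pairs uninterpreted input strings with uninterpreted output strings;
# the operator (or program) never needs to know what either string means.
RULE_BOOK = {
    "你吃了饭吗": "吃了",
    "你会说中文吗": "会",
}

def chinese_room(input_symbols):
    # Match the shapes of the input against the rule book and hand back
    # whatever symbols it correlates with them; meaning never enters the process.
    return RULE_BOOK.get(input_symbols, "对不起")

print(chinese_room("你会说中文吗"))  # an 'answer' is produced without any understanding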
Searle uses the thought experiment as the basis for his criticisms of the
two main assumptions underlying Schank’s work. Regarding the first
assumption that a computer literally understands a story, Searle claims that it’s
clear from the thought experiment that he does not have an understanding of the
Chinese story. Does it not logically follow, he then argues, that the computer
‘for the same reasons, understands nothing of any stories, whether in Chinese,
English, or whatever’? (186) Insofar as a human being would not understand the
story if he was in the same position as the computer, how could one argue that
the computer would have this power? To the second point, namely that the machine and its program explain human understanding insofar as the program constitutes the fundamental basis of such understanding, Searle argues that this is simply false; how could one argue that what the computer exhibits is a sufficient condition of understanding when the computer is fully operational and yet there is no understanding? Programs, if they are defined, as Searle claims, as ‘computational
operations on purely formally-defined elements’, seem to lack what Searle calls
understanding; one needs to possess semantics for understanding, while a
computer program is purely syntactical. If this is the case, Searle argues that
‘strong’ AI is in trouble insofar as its goal was to recreate intelligence by
simulating it within a program; even if a friend of ‘strong’ AI was to respond
that understanding could be added to the structure of the machine, ‘strong’ AI
would still be untenable because it assumed a program, or computation, could be
sufficient for understanding. Therefore, Searle hopes to have defused Schank’s
‘strong’ AI project of crafting a mimesis of ‘understanding’ purely
syntactically.
A key aspect of Searle’s account is his notion of understanding, as opposed to what he takes to be Schank’s misguided notion of it. Searle argues that people often misuse the
word ‘understanding’ by applying it to things where it does not apply; one may
say that ‘the door knows when to open
because of its photoelectric cell’ when it is clear that the door does not
‘know’ in the sense one would say a human ‘knows’. (188) Searle claims that this happens when human
beings ‘extend (their) own intentionality’ to artefacts. What does Searle mean
by intentionality? Intentionality, Searle claims, is the way in which mental
states are ‘directed at or about objects and states of affairs in the world’.
(204) Searle is claiming that ‘understanding’ presupposes intentionality, and
unfortunately for ‘strong’ AI, programs do not have intentionality; the way in
which one would say a computer ‘understands rules’ does not correspond to what
Searle means by ‘understanding English’. (188)
The two objections I’ve chosen to
present in this paper are also presented by Searle in Minds, Brains, and Programs. I’ve chosen to present them and not the
other criticisms found in said paper specifically because I think they will
help me best elucidate my critique in the third part of this essay. The
first objection, the ‘robot reply’, concedes that while a program is itself not
sufficient for having the complex propositional attitudes that organisms have,
a robot could be built that would somehow possess a specialised syntax, together with an attached camera that would complement this syntax with representational content. Proponents of the ‘robot reply’ claim that this robot
would in fact mimic human understanding. It should be clear that this is not
quite a direct response to the Chinese Room Argument; Searle does not argue with the
CRA that this is impossible. However, the view does give tangible support to
the notion that technology could have a place in the philosophy of mind
insofar as a cognitive process could conceivably be mimicked. Jerry Fodor, a
supporter of the ‘robot reply’, claims that there is no principled reason why
specialised syntax or representational content ‘can only be true of organic
systems’. (Lafave)
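A hedged sketch of the architecture the robot reply envisages might look like the following, where a simulated perception step stands in for the camera; the function names and the particular rule are my own illustrative assumptions, not Fodor’s proposal:

# A hypothetical sketch of the robot reply: formal symbol rules coupled to perception.
def perceive(camera_frame):
    # Stand-in for a vision system: map raw sensor data to a symbolic description.
    # In an actual robot this would be an object-recognition pipeline.
    return {"object": "cup", "location": "table"}

def symbol_rules(description):
    # Purely formal rules operating over the symbols that perception produces.
    if description["object"] == "cup":
        return "grasp " + description["object"]
    return "do nothing"

# The reply's claim: coupling the rules to perception supplies the representational
# content that a disembodied symbol manipulator lacks.
print(symbol_rules(perceive(camera_frame=None)))  # 'grasp cup'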
The second reply I’ve chosen is the
‘combination reply’. This objection is called the ‘combination reply’ because
it combines the key elements of two previous critiques, the ‘robot reply’ and
the ‘brain simulator reply’, into one; one is to imagine a ‘brain shaped
computer lodged in its cranial cavity’, where that computer mimics ‘all the
synapses of a human brain’. (Haugeland 195) Furthermore, the computer would exist within a
robot that would mimic human behavior such that it was indistinguishable from a
human being. The conclusion of this reply is that one would have to ‘ascribe
intentionality to the system’. (195) While this objection is not levelled
directly at the CRA, and in fact assumes that Searle has demonstrated
computation is not sufficient for understanding, the reply reiterates the
simple conjecture that technology can be deeply interconnected with the
philosophy of mind; building a computer with such characteristics would
either presuppose knowledge of the nature of the human organism’s cognitive capacities, or would in some way contribute to knowledge of such capacities by virtue of a significant reconceptualization of their structure. If
anything, eliminating ‘strong’ AI leaves room for ‘weak’ AI to flourish; if AI
is, as the ‘weak’ AI thesis claims, a tool, then it seems that eliminating unfounded assumptions about the limitations of said tool, namely those made by ‘strong’ AI, opens AI up to many new possibilities.
One of those very possibilities could be the computer/robot featured in the
‘combination reply’ itself.
My reply to the CRA is in fact not
much like the ones I’ve stated above; I am not going to argue that a robotic
apparatus and a brain-like computer could conceivably be sufficient for
‘understanding’. However, it does bear some resemblance to those views insofar
as it attempts to push AI towards a place where it could produce
‘intentionality’ or ‘understanding’. I also agree with the previous replies
insofar as I think Searle responds appropriately to Schank’s notion of
‘understanding’ by claiming it to be misguided. What I want to argue, or at
least hint towards, is the idea that a computer program could itself provide
the sufficient conditions for understanding in Searle’s sense; one could derive
‘intentionality’ and understanding from pure computational syntax. My notion of
‘understanding’ differs from Schank’s notion of it; I do not think that a
computer must be able to answer questions about a story in order to be said to
understand it. It seems trivially true to me that someone could understand
something without being capable of explaining it. I would instead characterise
understanding in a way similar to Searle’s; something involving intentional
states and true semantic content. What I would argue is that the subjective qualitative state of understanding could in principle be mimicked by a computer program since, while it may be semantic, it possesses a form that may be reducible to syntax.[2] As can be seen from my
analysis, Searle is arguing that directedness of meaning or something akin to
the form of a propositional attitude is what constitutes intentionality. If one
posits that this form is what constitutes intentionality, then I would argue
that it is conceivable that a highly complex programming language could
replicate the form of this propositional attitude; one could create a
‘subjective/intentional programming language’. This would of course require not
only a new programming language, but also a complete reconceptualization of
what a program can be.
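As a purely speculative sketch of what such a reconceptualization might start from, the logical form of a propositional attitude, ‘a believes x’, together with a marker of whose state it is, can already be written down as an explicit symbol structure in an ordinary programming language; the representation below is my own assumption about one possible form, not an existing ‘intentional programming language’:

# A speculative sketch: giving the form of a propositional attitude an explicit syntax.
# Nothing here claims to produce understanding; it only shows that the structure
# 'subject - attitude - content', including a marker of whose state it is, can itself
# be treated as a manipulable symbol structure.
from dataclasses import dataclass

@dataclass
class IntentionalState:
    subject: str   # whose state this is, the 'mineness' of the attitude
    attitude: str  # e.g. 'believes', 'understands', 'hopes'
    content: str   # the proposition the state is directed at or about

    def directed_at(self):
        # The 'aboutness' of the state: what it is directed at.
        return self.content

belief = IntentionalState(subject="a", attitude="believes", content="the story is about a restaurant")
print(belief.subject, belief.attitude, "that", belief.directed_at())

Nothing in this structure produces understanding by itself; the claim is only that the form of an intentional state is the sort of thing a syntax can express, which is what the argument above requires.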
One of the factors I am considering
is the progress programs have made since Searle wrote this article, the progress they will predictably make in the future, and just how that progress can factor into reconceiving one’s
notion of what a program is. It would be a tad presumptuous, I would argue, to assume
that the only way one could ever create understanding would be to add a robot
to a computer program. This assumption presupposes a fixed notion of ‘program’
such that it only consists in syntax and that a robot would complement it by
providing it with semantics. The seemingly mutually exclusive nature of syntax and semantics, I would argue, could be washed away in the future by a program that somehow consists of a logical language that could manipulate purely
semantic content by mimicking its form as a symbol. Part of the form of
semantic content could be intentionality, and I see it as within the scope of
logic to formalise intentionality. While one could say that intentionality is
inherently temporal and therefore could not be reduced to logic, which is a static system, I would argue that computers make logic temporal in some sense by giving it a medium in which it can change and grow. I am not arguing that such a language currently exists; I am simply claiming that, in the same way Schank
was misguided in his conception of ‘understanding’, the way in which programs
are defined could be misguided and limiting.
In conclusion, my analysis has shown
that Searle presents a purported knockdown argument against what he calls ‘strong AI’
insofar as he claims to have demonstrated that mere symbol manipulation is not
sufficient for understanding. I have also considered the ‘robot reply’ and the
‘combination reply’ and argued that a program could fulfill the sufficient
conditions of understanding purely by virtue of syntax insofar as the form of
understanding could be seen as a complex but implementable logical form.
Works Cited:
Haugeland, John. Mind Design II. 2nd ed. Cambridge, Mass.: MIT Press, 1997. Print.
Lafave, Sandy. ‘The Chinese Room Argument and Replies to It.’ West Valley College Philosophy Department. http://instruct.westvalley.edu/lafave/Notes_on_AI.html. 28 Feb. 2007. Web.
[1] Gedankenexperiment is a German term equivalent to the English ‘thought experiment’.
[2] By syntax I mean something like a genetic syntax; in the same way that language can be seen as acting out of a Universal Grammar and as reducible to a formal syntactic structure, propositional attitudes like ‘a believes x’, ‘a understands x’, and ‘x said z about x’ could conceivably be reduced to such a structure. Such a structure would also have to include the inherent ‘mineness’ that subjectivity possesses: the intuition that one has dominion over one’s subjective state.