
Chinese room argument

Published 22/02/2012

Excerpt from the document

John Searle's 'Chinese room' argument aims to refute 'strong AI' (artificial intelligence), the view that instantiating a computer program is sufficient for having contentful mental states. Imagine a program that produces conversationally appropriate Chinese responses to Chinese utterances. Suppose Searle, who understands no Chinese, sits in a room and is passed slips of paper bearing strings of shapes which, unbeknown to him, are Chinese sentences. Searle performs the formal manipulations of the program and passes back slips bearing conversationally appropriate Chinese responses. Searle seems to instantiate the program, but understands no Chinese. So, Searle concludes, strong AI is false. Searle (1980) argues that, since in the imaginary case he does everything the computer would do and he still does not understand a word of Chinese, it follows that a computer successfully programmed to pass a Chinese Turing test (Turing, A.M. §3) would not understand Chinese either. The problem, according to Searle, is that computer programs, whether executed by electronic devices or by Searle inside the room, concern only syntax, or strings of symbols characterized only by spelling, not meaning, while thought and understanding require meaning and semantics. And he claims you cannot get semantics from mere syntax, no matter how subtle or complicated it may be.
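
To make the syntax/semantics point concrete, the following is a minimal sketch (in Python, and not taken from Searle's text) of what a purely formal, rule-book responder amounts to: a lookup from input symbol strings to output symbol strings. The rule table and the symbol names are invented placeholders, not a program that could actually pass a Chinese Turing test; the sketch only illustrates what it means for a program to operate on spelling alone.

# Hedged illustration: a "syntax-only" responder. The rules below are invented
# placeholders; a real Turing-test-passing program would be vastly larger, but
# the point is the same: the mapping is defined over symbol shapes, not meanings.
RULES = {
    "SYMBOL-STRING-A": "SYMBOL-STRING-B",  # hypothetical rule: on seeing A, emit B
    "SYMBOL-STRING-C": "SYMBOL-STRING-D",
}

def respond(symbols: str) -> str:
    """Return the string the rule table pairs with the input, or a fixed default.

    Nothing here consults meanings: the program matches spellings and copies out
    the paired string, much as Searle-in-the-room follows his rule books without
    understanding the Chinese characters he manipulates.
    """
    return RULES.get(symbols, "DEFAULT-SYMBOL-STRING")

print(respond("SYMBOL-STRING-A"))  # prints SYMBOL-STRING-B, by spelling alone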

The systems reply holds that, although Searle in the room does not understand Chinese, the wider system of which he is a part (Searle together with the rule books and slips of paper) does. Searle makes two responses to this objection.

First, he denies that putting him together with the books and paper could produce understanding of Chinese when he understands none alone: such an ensemble, he asserts, is just the wrong sort of thing literally to understand or to have a mind.

Second, he tries to avoid the objection by altering his example: he removes himself from the room to a wide open field and commits all the rule books to memory.

In such a case he would perform all the purely formal operations in his head, re-establishing the analogy between himself and the whole computer.

He asserts he would still not understand a word of Chinese (though some of his critics claim otherwise).

The robot reply concedes that the original Chinese room might not suffice for understanding Chinese.

Since all the room's behaviour is strictly verbal, it can make connections between words and other words, but not between words and the world: it might give coherent answers to questions about boiled eggs, yet have no ability to recognize an egg.

The robot reply thus imagines a program installed within a robot that allows it to simulate the full range of human behaviour, verbal and non-verbal; and that, it is claimed, would count as understanding Chinese and having a mind. Searle, however, disagrees.

He notes that the robot's sensors would supply its internal processors only with more formal symbolic inputs.

Were he and his Chinese room installed within the robot with additional rule books specifying how to respond to those additional symbols, and were these responses to drive the robot's apt behaviour, he, Searle, would still not understand a single word of Chinese.

Nor, he argues, would the robot.

Again, the assumption that mentality requires consciousness seems in part to drive Searle's intuition.

Critics less sympathetic to that assumption, and more inclined to analyse intentionality in terms of causal interactive relations, have been predictably more persuaded by the robot reply, especially if it is combined with the systems reply (see Semantics, informational).

Other objections to the Chinese room have sought to discredit it, arguing that it trades on unreliable and misleading intuitions. Some have focused on the enormous disparity between the complexity and speed a computer would need in order to pass a Turing test and anything the man in the room could match.

Others have stressed that strong AI need not be committed to the Turing test, and that the specific program by which behaviour is produced is crucial to whether something has a mind.

In reply, Searle denies that speed or style of program is of any importance; what matters, he argues, is that the operations he performs are merely formal.
