
Theories of common-sense reasoning

Published on 22/02/2012

Excerpt from the document

The task of formalizing common-sense reasoning within a logical framework can be viewed as an extension of the programme of formalizing mathematical and scientific reasoning that has occupied philosophers throughout much of the twentieth century. The most significant progress in applying logical techniques to the study of common-sense reasoning has been made, however, not by philosophers, but by researchers in artificial intelligence, and the logical study of common-sense reasoning is now a recognized sub-field of that discipline. The work involved in this area is similar to what one finds in philosophical logic, but it tends to be more detailed, since the ultimate goal is to encode the information that would actually be needed to drive a reasoning agent. Still, the formal study of common-sense reasoning is not just a matter of applied logic, but has led to theoretical advances within logic itself. The most important of these is the development of a new field of ‘non-monotonic' logic, in which the conclusions supported by a set of premises might have to be withdrawn as the premise set is supplemented with new information.

The formal study of common-sense reasoning is a field in which the concerns of philosophy and artificial intelligence converge. From a philosophical point of view, it is motivated by a popular approach to the philosophy of mind - known as the ‘language of thought' hypothesis - that postulates a domain of mental representations as the primary bearers of meaning, and then analyses cognition as the rule-governed manipulation of these representations. In order for an approach along these lines to be useful, some account must be provided of the structure of the internal representations, and of the rules governing their manipulation (see Language of thought). These issues have been studied extensively within cognitive psychology, of course. Here, the idea is to focus on an existing system capable of intelligent behaviour in a wide range of circumstances - a human being - and to attempt to infer the symbolic processes underlying this intelligence. But the issues are studied also within the area of artificial intelligence, and from a perspective that offers a different kind of illumination: the goal here is to build an intelligent system, rather than attempting to discover the structure of a system that already exists. Although the performance exhibited by the artificial systems designed to date is significantly less impressive than that of humans when evaluated across a broad range of circumstances, the symbolic processes underlying the behaviour of these systems are at least well understood.

Among the various research methodologies that have emerged within artificial intelligence, one of the most powerful is the logic-based approach first advanced by John McCarthy; and it is primarily this approach that has motivated the formal study of common-sense reasoning. In fact, the logic-based approach in artificial intelligence has much in common with the project of formalizing mathematical and scientific reasoning that has been carried on throughout the twentieth century. Within artificial intelligence, the idea is that much of the knowledge necessary for achieving intelligent action even in everyday situations can be represented in a machine through the formulas of some logical language, and that the reasoning tasks underlying this intelligence can then be accomplished by means of logical deduction.
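To make the last idea concrete, here is a minimal sketch of ‘knowledge as formulas, reasoning as deduction' over a toy fragment of knowledge. It is an illustration invented for this text rather than a description of any actual system: the predicates and the forward_chain procedure are assumptions, with simple rule pairs standing in for formulas of a logical language.

```python
# A toy illustration of the logic-based approach: knowledge is written down
# as formulas (here, ground atoms and Horn-style rules) and conclusions are
# reached purely by deduction (here, naive forward chaining). All predicate
# and function names are invented for the example.

facts = {"bird(tweety)", "has_wings(tweety)"}

# Each rule is a pair (body, head): if every atom in the body has been
# established, the head may be deduced.
rules = [
    ({"bird(tweety)"}, "animal(tweety)"),
    ({"animal(tweety)", "has_wings(tweety)"}, "winged_animal(tweety)"),
]

def forward_chain(facts, rules):
    """Apply the rules repeatedly until no new atom can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# ['animal(tweety)', 'bird(tweety)', 'has_wings(tweety)', 'winged_animal(tweety)']
```

Deduction of this kind is monotonic in the sense discussed below: adding a new fact to facts can only enlarge the set that forward_chain returns, never shrink it.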
Both the idea of using logic as an underlying representation language for artificial intelligence and the emphasis on formalizing common-sense knowledge are present even in McCarthy's very early work (for example, McCarthy 1959), but the project receives its clearest articulation in a paper jointly authored by McCarthy and Patrick Hayes (1969), which isolates the task of defining ‘a naïve, common-sense view of the world precisely enough to programme a computer to act accordingly'. Much of the work now produced within artificial intelligence on the topic of formalizing common-sense reasoning is similar to what one finds in philosophical logic - attempts to adapt or generalize logical techniques to apply to some new area - and there is now a good deal of interaction between these two fields.

Still, the research that takes place within artificial intelligence has a somewhat different character from standard philosophical logic. For one thing, the matter of implementation is always in the background, and sometimes in the foreground; but even apart from that, the studies generated within artificial intelligence are often focused on very detailed representational problems that would seem peculiar to a philosopher. It would not be unusual, for example, to find a research project in artificial intelligence with the goal of representing what an agent would have to know about the objects in their kitchen in order to prepare a meal. This kind of detailed work is necessary, of course, since the ultimate goal is to encode information in such a way that it could actually be used to drive a reasoning agent.

Still, it is not as if the project of providing a logical representation of common-sense knowledge were simply a matter of applied logic. In fact, the problems presented by the task of representing this kind of information explicitly have led to a number of theoretical advances within logic itself: the most important of these is the development of the new field of ‘non-monotonic' logic. In most standard logics, the addition of new information to a given set of premises might lead us to draw new conclusions, but never to withdraw conclusions already reached. The set of consequences of a given set of premises is thus said to grow monotonically as the premise set grows: the consequence set can only increase, never decrease, as the premise set is supplemented with new information. A non-monotonic logic is simply a logic in which this property fails - a logic in which the addition of new information to a given premise set might force us to retract some conclusion drawn from the original set of premises.

The study of non-monotonic logics within artificial intelligence was motivated by the realization that much of our common-sense knowledge concerns defeasible information - generalizations subject to exceptions, such as ‘Birds fly' or ‘Things remain where you put them'. Of course, philosophers had always known that these defeasible generalizations could not be represented naturally in ordinary logic: the statement ‘Birds fly', for example, cannot be represented by a formula of the form ∀x(Bx ⊃ Fx). But it was not until the practical need arose for reasoning with defeasible statements such as these that serious attention was focused on the problem; and it then became apparent that the appropriate notion of defeasible consequence would have to be non-monotonic.
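In the standard notation, with Cn(Γ) for the set of consequences of a premise set Γ, monotonicity is the condition

Γ ⊆ Δ implies Cn(Γ) ⊆ Cn(Δ),

and a logic for defeasible reasoning must be one in which this inclusion can fail: Cn(Γ ∪ {φ}) may lack conclusions that belong to Cn(Γ).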
To take a standard example, given only the information that Tweety is a bird and that birds tend to fly, it is natural to conclude defeasibly that Tweety flies. But if this premise set were supplemented, consistently, with the additional information that Tweety does not fly (perhaps Tweety is a penguin), it would then seem reasonable to withdraw our initial conclusion.

The monotonicity property flows from assumptions that are deeply rooted in both the proof theory and the semantics of most ordinary logics. From a proof-theoretic point of view, this property follows from the fact that a proof based on a set of premises also counts as a proof based on an expansion of the same set of premises; from a semantic point of view, the property results from the assumption that the models of a set of premises are models also of any of its subsets. Because the features underlying the monotonicity property are so basic to the conception of most standard logical systems, researchers concerned with non-monotonic reasoning have been led to explore fundamentally new ideas in both proof theory and semantics. The first proof-theoretic treatment of non-monotonic reasoning is found in Raymond Reiter's default logic (1980), which supplements ordinary logic with new rules of inference, known as ‘default rules'. The semantic approach to non-monotonic reasoning was first explored in McCarthy's own theory of circumscription (1980).

Because of its intrinsic interest and practical importance, the study of non-monotonic logic has grown into a significant area of research. The fixed-point and minimal model approaches pioneered by Reiter and McCarthy are still the best-known and most widely applied techniques in non-monotonic reasoning, but a number of other approaches have been explored as well; a series of articles surveying these different approaches can be found in Gabbay et al. (1994). In recent years, the techniques developed within the field of non-monotonic logic have begun to find applications also to a number of philosophical issues, such as the understanding of ceteris paribus clauses in scientific generalizations and the formalization of prima facie obligations.
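The Tweety pattern can be sketched in a few lines of code. The fragment below is an illustration invented here, loosely in the spirit of a Reiter-style default rather than his actual formalism: from bird(x) it concludes flies(x) unless that conclusion is already blocked, so that adding the penguin premise forces the earlier conclusion to be withdrawn.

```python
# A toy illustration of default reasoning, loosely in the spirit of a
# Reiter-style default 'bird(x) : flies(x) / flies(x)': from bird(x),
# conclude flies(x) unless that conclusion is blocked by what is already
# known. The string encoding and function name are invented for the example.

def conclusions(facts):
    """Strict facts plus defeasible conclusions of the default 'birds fly'."""
    derived = set(facts)
    # Strict rule: penguins do not fly.
    for f in facts:
        if f.startswith("penguin("):
            x = f[len("penguin("):-1]
            derived.add(f"not_flies({x})")
    # Default rule: a bird flies, unless its flying has been denied.
    for f in facts:
        if f.startswith("bird("):
            x = f[len("bird("):-1]
            if f"not_flies({x})" not in derived:
                derived.add(f"flies({x})")
    return derived

print(sorted(conclusions({"bird(tweety)"})))
# ['bird(tweety)', 'flies(tweety)']

print(sorted(conclusions({"bird(tweety)", "penguin(tweety)"})))
# ['bird(tweety)', 'not_flies(tweety)', 'penguin(tweety)']
```

The failure of monotonicity is visible in the two calls: enlarging the premise set removed the conclusion flies(tweety), something a purely deductive procedure such as the forward chaining sketched earlier could never do.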
