Simulative Inference in a Computational Model of Belief
When a person learns something new, he generally draws some conclusions from it, but he will miss or ignore some logically valid conclusions (people are not logically omniscient), and furthermore he may draw conclusions that do not logically follow from the new information (people's beliefs are not derived purely by deduction). Therefore, in order to reason accurately about what someone believes, it is necessary to know something about how he thinks. We propose a family of logics in which belief is modeled as the result of applying an algorithm to a set of input sentences. We define inference rules that can be used to reason about what someone believes by simulating the application of his belief algorithm, and we explore conditions under which these inference rules are sound and complete.
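The idea of simulating a believer's algorithm can be sketched as follows. This is a minimal illustration, not the paper's formalism: the belief algorithm `one_step_modus_ponens` and its string encoding of sentences are hypothetical choices made here, standing in for an arbitrary (possibly incomplete, possibly unsound) inference procedure.

```python
def simulate_beliefs(belief_algorithm, accepted_sentences):
    # Simulative inference: to predict what the agent believes, apply
    # the agent's own belief algorithm to the sentences the agent is
    # known to have accepted.
    return belief_algorithm(accepted_sentences)

def one_step_modus_ponens(sentences):
    # Hypothetical belief algorithm: applies modus ponens once to the
    # input sentences. It is deliberately not closed under logical
    # consequence, modeling an agent who is not logically omniscient.
    beliefs = set(sentences)
    for s in sentences:
        if "->" in s:
            antecedent, consequent = [t.strip() for t in s.split("->")]
            if antecedent in sentences:
                beliefs.add(consequent)
    return beliefs

predicted = simulate_beliefs(one_step_modus_ponens, {"p", "p -> q", "q -> r"})
# "q" is predicted, but "r" is not, even though "r" follows logically:
# the simulated agent stops short of the full deductive closure.
```

Running the agent's algorithm on our side, rather than appealing to logical consequence, is what makes the predicted belief set match the agent's actual (bounded) reasoning.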
ESSLLI 18th European Summer School in Logic, Language and Information, University of Malaga, 31 July-11 August, 2006.