Saturday, 23 May 2009

Dilworth's Propositional Indexing

In his paper, “Semantics naturalised: Propositional indexing plus interactive perception”, John Dilworth advocates a propositional indexing view according to which cognitive states (understood as concrete causal occurrences) enjoy an isomorphic correspondence with propositions (understood as abstract, truth-value-bearing items). Maintaining such an isomorphism requires some method for indexing a given proposition (such as the proposition “X is red”, with respect to some worldly object X) to a corresponding perceptual or cognitive state (such as an agent Z’s perceiving that X is red).

This method, whatever it turns out to be, may be identified with the specific epistemic conditions under which we would accept that the isomorphism in question holds. Moreover, Dilworth maintains that these epistemic conditions must be ones that can be met by individuals without any technical or specialised scientific knowledge. After all, it is part of our folk-psychological practice to describe the contents of perceptual and cognitive states in propositional terms. This imposes the constraint that the method for indexing propositions to cognitive states must be one that is available to everyone, including those lacking any specialised philosophical or technical expertise. Dilworth puts the point as follows:
Our understanding of propositional indexing is not intended to be restricted to specialised cognitive science procedure requiring technical expertise and detailed knowledge of such matters as the cognitive structures involved in perceptual functioning. Instead, the idea is that the everyday understanding by people in general, of when a particular proposition, such as “X is red” is true of a particular object X is to be correlated with a related understanding by such people of what kinds of behavioural evidence would justify a claim that the person had indeed correctly perceived the relevant fact. So the predominant epistemic issue is not the theoretical nature of propositional indexing as such, but rather the everyday conditions under which people in general would agree that it had successfully occurred.
Dilworth recommends that propositional indexing be unpacked in terms of classification behaviour:
For example, a paradigm kind of colour-related classification behaviour would be that of a person assigned a task of sorting some miscellaneous objects by their colour, and then putting object X into a box containing only red objects. This classification behaviour would provide evidence that the person P had perceived that object X was red. Consequently, if the person Q observing person P is considering the proposition that X is red, then Q would take person P’s classification as evidence that P had perceived that X is red, and hence that P’s relevant perceptual state S, whatever it may be, is indexed by the proposition ‘X is red’.
Significantly, Dilworth describes propositional indexing in third- rather than first-personal terms. This suggests a view according to which an agent’s cognitive state may be described as propositional just in case it is, in principle, possible for that cognitive state to be correlated with a proposition. However, the act of identifying the correlation need not be performed by the agent undergoing the cognitive state in order for that state to count as propositional. One upshot of this view is that non-linguistic animals, which lack the ability to engage in propositional indexing themselves, may nevertheless count as having propositional attitudes.

Dilworth’s observations about colour-related classification behaviour generalise to less overt types of classification behaviour. For example, a dog may be said to perceive that a bone is edible just in case it is disposed to ingest the bone. Ingesting the bone, then, amounts to a type of classification behaviour (akin to sorting bones into the set of edible items). Since the heuristic of classification behaviour is understood in dispositional terms, the ingesting of the bone need not actually take place for the dog to count as perceiving that the bone is edible. Moreover, since a dog may engage in such classification behaviour without our having to attribute to it the concepts “bone” or “edible”, the present account does not require concept possession as a prerequisite for having an agent’s perceptual state indexed by a proposition.

There is much that I find attractive in Dilworth’s framework. Specifically, I believe it provides the resources for an account of propositional attitudes that allows such attitudes to be attributed to non-linguistic animals. (Admittedly, Dilworth may be reluctant to speak of propositional attitudes in this way, but that would only be because of his refusal to attribute semantic content to cognitive states in general, not because of any prejudice against such attributions in the case of non-linguistic animals. In short, Dilworth and I agree in treating the cognitive states of linguistic and non-linguistic animals even-handedly.) However, I want to conclude this post by highlighting a minor problem in Dilworth’s account of perceptual states.

Dilworth is committed to what he refers to as an “interactive theory of perception”, the considered version of which he puts as follows:
IP2: An organism Z perceives an item X to have the property of being F just in case X causes some sense-organ zᵢ of Z to cause Z to acquire an X-related disposition D, such that D is an F-classification disposition.
Dilworth defines an F-classification disposition as “a disposition, the manifestation of which is some F-classification behaviour. For example, on this account, to perceive that an object X is red is to acquire a disposition to classify object X in some red-related way.”
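Read schematically, IP2 is a quantified causal-dispositional biconditional. The following gloss is my own rough rendering rather than anything Dilworth himself offers, and it flattens his nested causal clause (“X causes zᵢ to cause Z to acquire D”) into a simple causal chain:

Z perceives X as F ⟺ ∃z ∃D [ z is a sense-organ of Z ∧ X stimulates z ∧ z’s stimulation causes Z to acquire D ∧ D is an X-related disposition ∧ D is an F-classification disposition ]

On this gloss, the worry I raise below turns on the final conjunct: the disposition I acquire toward the apparently red objects is a disposition to classify them in a white-related way, so the red-classification condition fails even though, intuitively, I perceive the objects as red.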

However, it is not clear that IP2 can accommodate cases in which an agent perceives things to be a certain way and yet fails to believe that they are that way. For example, suppose that I am led to believe (erroneously) that a room is equipped with special lighting that makes all the white objects in it appear red. (In fact, there is no special lighting, and every object in the room is the colour it appears to be.) Owing to my misinformation, I am disposed to sort all objects that appear red into the white box. According to IP2, it follows from my having this disposition that I do not perceive that the objects are red. But this gets things wrong. The right thing to say is that I perceive the objects as red but believe that they are white. Consequently, IP2 is unable to accommodate cases in which perceiving that X is F and believing that X is F come apart.


2 comments:

John Dilworth said...

Avery, thanks for your perceptive comments; I agree about animal perception cases. BTW, the relevant paper, plus related ones, is available on my website at

http://homepages.wmich.edu/~dilworth/Index.html

Re your case of when perception and belief come apart, I discuss such cases of conflicting epistemic tendencies in sec. 2 of:

http://homepages.wmich.edu/~dilworth/Perception_Introspection_and_Functional_Consonance.pdf

One salient passage is as follows:

In such cases, what is involved is not, strictly speaking, a clash of short-term versus long-term dispositions associated with the relevant beliefs, because I have argued that dispositions as such cannot clash at all. Instead, what is actually involved is basically a clash or conflict in epistemic reasons or justifications for adopting or retaining the relevant dispositions. Thus, as an overview of the relevant required cognitive structure, minimally we need a two tier view of the mind, in which lower level, purely executive causal dispositional structures implement higher level, broadly epistemic decisions, which decisions can change the actual causal/dispositional structure of the lower level.

Cheers,

John

AVERY ARCHER said...

John, thanks for the clarification and the links to your papers. I'll be sure to check them out.