
Searle, Syntax, and Observer Relativity

Published online by Cambridge University Press: 01 January 2020

Ronald P. Endicott*
Affiliation:
Arkansas State University, State University, AR 72467-1890, USA

Extract

In his book The Rediscovery of the Mind (hereafter RM), John Searle attacks computational psychology with a number of new and boldly provocative claims. Specifically, in the penultimate chapter entitled ‘The Critique of Cognitive Reason,’ Searle targets what he calls ‘cognitivism,’ according to which our brains are digital computers that process a mental syntax. And Searle denies this view on grounds that the attribution of syntax is observer relative. A syntactic property is arbitrarily assigned to a physical system, he thinks, with the result that syntactic states ‘do not even exist except in the eyes of the beholder’ (RM, 215). This unabashed anti-realism differs significantly from Searle's earlier work. The Chinese room argument, for example, was intended to show that syntactic properties will not suffice for semantics, where the syntax was realistically construed. But now Searle claims that physical properties will not suffice to determine a system's syntactic properties.

Type
Research Article
Copyright
Copyright © The Authors 1996


References

1 John Searle, The Rediscovery of the Mind (Cambridge, MA: The MIT Press 1992). The material in question (ch. 9) is reprinted almost verbatim from Searle's Presidential Address, ‘Is the Brain a Digital Computer?’ Proceedings and Addresses of the American Philosophical Association 64 (1990) 21-37. Note that the arguments are directed against ‘classical’ cognitivism. Searle's stand on connectionism appears to have changed. Cf. his ‘Is the Mind's Brain a Computer Program?’ Scientific American 262 (1990) 26-31, with The Rediscovery, 246-7.

2 John Searle, ‘Minds, Brains, and Programs,’ Behavioral and Brain Sciences 3 (1980) 417-24.

3 Let me warn that Searle has an exasperating proliferation of meanings for the term ‘intrinsic.’ Here it means (a) determination by a set of lower-level properties, i.e., syntax is not intrinsic to physics because physics will not suffice to determine syntactic properties. But elsewhere it means (b) definability in terms of a particular class of predicates, i.e., syntax is not intrinsic to physics because it is ‘not defined in terms of physical features’ (RM, 225). But more often than not, Searle uses ‘intrinsic’ to mean (c) real, ontologically speaking, and thus to mark the distinction between ‘the real thing’ as opposed to the merely ‘as if,’ ‘derived,’ or ‘observer-relative’ (78-80, 208).

4 W.V.O. Quine, Word and Object (Cambridge, MA: The MIT Press 1960); also Quine's ‘On the Reasons for Indeterminacy of Translation,’ Journal of Philosophy 67 (1970) 178-83; and Daniel Dennett, ‘Reflections: Real Patterns, Deeper Facts, and Empty Questions,’ in The Intentional Stance (Cambridge, MA: The MIT Press 1987) 37-42.

5 Jerry Fodor, The Language of Thought (Cambridge, MA: Harvard University Press 1975); Fodor's appendix to Psychosemantics (Cambridge, MA: The MIT Press 1987); and a more cautious discussion in Andy Clark, Microcognition (Cambridge, MA: The MIT Press 1989), ch. 8.

6 An anonymous referee has suggested that we might strengthen the point against Searle by distinguishing between ‘implementational capacity’ and ‘implementation.’ The former is characterized by the mathematical notion of isomorphism, whereby the transitions among the physical states of the system mirror the transitions between states that the program specifies. And here it may be conceded that a given system has the ‘implementational capacity’ with respect to a large number of programs. Yet this is not to say that the system actually ‘implements’ all those programs. For actual implementation is determined by the appropriate causal interactions between the physical store of the program and the physical device itself.
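To fix ideas, the referee's isomorphism condition might be put as follows (a minimal formal sketch in my own notation, which neither Searle nor the referee supplies): where $T_P$ is the set of state transitions specified by program $P$, and $\mathrm{next}_S(p)$ is the physical successor of state $p$ in system $S$,

\[
\mathrm{Cap}(S,P) \;\equiv\; \exists f \,\forall (s_i \to s_j) \in T_P \,\forall p \,\big(f(p) = s_i \;\Rightarrow\; f(\mathrm{next}_S(p)) = s_j\big).
\]

So characterized, implementational capacity demands only that some interpretation mapping $f$ exist, which a great many physical systems will trivially satisfy; actual implementation adds the further, non-trivial requirement that these transitions be driven causally by a physical realization of $P$ stored in the device.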

7 Ned Block, ‘Troubles with Functionalism,’ in Ned Block, ed., Readings in Philosophy of Psychology 1 (Cambridge, MA: Harvard University Press 1980) 268-305; but cf. P.S. Churchland and P.M. Churchland, ‘Functionalism, Qualia, and Intentionality,’ in J.I. Biro and Robert W. Shahan, eds., Mind, Brain, and Function (Norman: University of Oklahoma Press 1981) 121-45; and William Lycan, ‘Form, Function, and Feel,’ Journal of Philosophy 78 (1981) 24-50.

8 Or consider again the point about behavior. The remarkable fact about systems like ourselves is that we behave rationally, maximizing truth. A further constraint is therefore that any syntactic attribution lead to rationality and truth. So if we discover an interpretation of some ‘perceived’ syntactic objects that makes sense of the system and its behavior, and if those syntactic patterns continue in a way that preserves truth, rationality, and the overall fit of the system with its environment, then our hypothesis is empirically well grounded. Notice that this coalesces nicely with my point in the text that we should consider, not a time slice of the wall in isolation, but a physical system whose structure has parts with the disposition to interact causally. For rationality concerns the manifestation of such dispositions over time, the coordination of beliefs with action; in short, a continuing system's adjustment of intentionality to the world.

9 See Donald Davidson and Jaakko Hintikka, eds., Words and Objections (New York: Humanities Press 1969). Quine and others would reject the implication that indeterminacy in radical translation arises solely from behaviorist constraints. This is a large question I cannot address here. But however it is to be resolved, I assume Searle is not simply extending Quine's argument to the area of syntax. Cf. Searle, ‘Indeterminacy, Empiricism, and the First Person,’ Journal of Philosophy 84 (1987) 123-46.

10 See my ‘On Physical Multiple Realization,’ Pacific Philosophical Quarterly 70 (1989) 212-24.

11 See Daniel Dennett, ‘Why the Law of Effect Will Not Go Away,’ rpt. in Brainstorms (Cambridge, MA: The MIT Press 1978) 71-89; and Marvin Minsky, The Society of Mind (New York: Simon & Schuster 1985).

12 Searle is wrong to say that the cognitivist believes ‘only the bottom level really exists; the top levels are all just as if.’ For the standard picture is that the upper levels exist with the same robust sense of reality, only they supervene on the lower levels. See John Haugeland, ‘Semantic Engines: An Introduction to Mind Design,’ in Haugeland, ed., Mind Design (Cambridge, MA: The MIT Press 1981), 1-34. Indeed, Haugeland states a view in exact opposition to the one Searle attributes to the cognitivist (more curious still, since Searle refers to Haugeland's piece! [RM, 213]). Haugeland says: ‘Once we see this point, we see how foolish it is to say that computers are nothing but great big number crunchers, or that all they do is shuffle millions of “ones” and “zeros.” Some machines are basically numerical calculators or “bit” manipulators, but most of the interesting ones are nothing like that…. The machine one cares about - perhaps several levels of imitation up from the hardware - may have nothing at all to do with bits or numbers; and that is the only level that matters’ (14-15).

13 That is, cognitivists are typically anti-reductionists, and supervenience is the anti-reductionist's doctrine of choice. See Jaegwon Kim, ‘Concepts of Supervenience,’ Philosophy and Phenomenological Research 45 (1984) 153-76; and Terence Horgan, ‘From Supervenience to Superdupervenience: Meeting the Demands of a Materialist World,’ Mind 102 (1993) 555-86. This is not to say, however, that all cognitivists would accept a claim about supervenience. Dennett, e.g., maintains that computational patterns exist in some sense but are indeterminate inasmuch as they depend upon adopting the intentional stance (‘Reflections: Real Patterns, Deeper Facts, and Empty Questions,’ 39-40).

14 Parenthetically, (i) must be understood aright. It does not mean: ‘If there are mental symbols, there must be a mind for those symbols,’ which might be true simply on grounds that a mental state cannot exist apart from a mind in which the state inheres. No, there is an additional function which (i) is intended to capture, i.e., that the mind must not only ‘have’ but ‘interpret’ its (symbolic) states.

15 See Robert Cummins, Meaning and Mental Representation (Cambridge, MA: The MIT Press 1989).

16 Dennett, ‘Evolution, Error, and Intentionality,’ in The Intentional Stance, 102-3.

17 Admittedly, ch. 9 does contain two additional claims: that ‘syntax has no causal powers’ (RM, 214-22); and that ‘the brain does not do information processing’ (RM, 222-5). Nevertheless, those claims hinge upon what has gone before. The reason why syntax has no causal powers is that syntax is observer relative: ‘But the difficulty is that the 0's and 1's as such have no causal powers because they do not exist except in the eyes of the beholder’ (215, my italics). And the reason why the brain does not process information is that information processing is observer relative: ‘The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder’ (223, my italics).

18 My thanks to an anonymous referee for suggesting a connection between the two connection principles. It should be underscored that this is only a suggestion, one that Searle nowhere explicitly makes. But we do find passing reference to consciousness in his later critique from ch. 9 (see RM, 219-20). Also, we could, if we wish, piece together an attack on cognitivism with the aid of this connection principle that parallels argument (i) through (iii), viz.: (i*) If there are intrinsic mental states, they are in principle accessible to consciousness. (ii*) The syntactic states of computational theory are not in principle accessible to consciousness. (iii*) So the syntactic states of computational theory are not intrinsic mental states. And, although Freudian theory is the designated target of Searle's attack in ch. 7 on ‘deep’ unconscious states (151, 167-73), he does cite Jackendoff's distinction between the ‘computational mind’ and the ‘phenomenological mind’ as an ‘extreme version’ of an approach which emphasizes the importance of such unconscious facts (152). Cf. his earlier version of the same material on consciousness specifically tailored to cognitivism, in ‘Consciousness, Explanatory Inversion, and Cognitive Science,’ Behavioral and Brain Sciences 13 (1990), 589.

19 This is true if consciousness is understood in terms of ‘awareness,’ which Searle admits is a near synonym (RM, 84), since awareness is arguably an interpretive act. If, on the other hand, we understand consciousness in the phenomenal ‘what it is like’ sense, then, as several authors have pointed out, nothing follows about the status of the cognitivist's mental syntax. In particular, we cannot rule out the possibility that syntactic states have a certain phenomenal feel associated with them. See Ned Block, ‘Consciousness and Accessibility,’ Behavioral and Brain Sciences 13 (1990), 597; and cf. a similar point by Daniel Dennett in his review of Searle's book, Journal of Philosophy 90 (1993), at 197-8, where Dennett speculates that the phenomenal feel may be completely cut off from any process of the subject's first-person awareness.

20 Propositions (1) through (4) correspond to Searle's (4) through (7) respectively. Searle's first three, which I omit in the text, simply mark the distinction between ‘intrinsic’ versus ‘as if’ intentionality, locate the class of unconscious mental states within the intrinsic, and then define the essence of all intrinsic mental states in terms of their ‘aspectual shape,’ i.e., intentionality.

21 The Open Peer Commentary to Searle's ‘Consciousness, Explanatory Inversion, and Cognitive Science,’ Behavioral and Brain Sciences 13 (1990) 585-640. The points I make are largely independent of any arguments raised there.

22 The term ‘biological naturalism’ was coined by Searle in his Intentionality: An Essay in the Philosophy of Mind (Cambridge: Cambridge University Press 1983), 264. According to this view, all mental traits are ‘biologically specific characteristics’ (RM, 90). Echoing an old materialist cry, Searle enjoins us elsewhere to: ‘Think of the mind and mental processes as biological phenomena which are as biologically based as growth or digestion or the secretion of bile,’ from Minds, Brains, and Science (Cambridge, MA: Harvard University Press 1984), 54.

23 See Glymour's ‘Unconscious Mental Processes,’ Behavioral and Brain Sciences 13 (1990), 606. The criticism does seem warranted. In his ‘Consciousness, Explanatory Inversion, and Cognitive Science,’ Searle says that no neurophysical facts have aspectual shape ‘under neurophysical descriptions’ (587). And he repeats it here (RM, 158, 169). Note, too, that a similar problem would arise if Searle means to emphasize the epistemological difference reflected in the differing descriptions, i.e., the objectivity of neuroscience versus the subjectivity of consciousness. It may nonetheless be the same facts that are known. Cf. the ‘intensionalist fallacy’ committed by some Cartesian dualists, nicely discussed in Paul Churchland, ‘Reduction, Qualia, and the Direct Introspection of the Brain,’ Journal of Philosophy 82 (1985) 8-28.

24 Talk of theoretical levels is common enough, though different authors imply different things. In the context of cognitive theory, see J.R. Anderson, ‘Methodologies for Studying Human Knowledge,’ Behavioral and Brain Sciences 10 (1987) 467-505.

25 These cases are mentioned by Searle, who concedes that the behavior is complex, though ‘habitual, routine, and memorized’ as compared to the behavior that results from consciousness, which has a ‘degree of flexibility and creativity’ (RM, 108). Regardless, the question is the level of neurophysiology required for explanation, and it is undoubtedly high for such ‘memorized’ behavior. Also, while I assume (2''') is false, even the unrevised (2) is terribly contentious. Why believe that the ontology of unconscious mental states is exclusively neurophysical, regardless of level? Cognitivism, for one, postulates a wide range of high-level functionally individuated computational states realized by this neurophysiology that nevertheless operate at an unconscious level, some having intentional properties that cannot be found at lower levels of organization. Remember Marr's three-dimensional descriptions mentioned earlier, which purportedly represent visual information from the external world. So the dialectical situation is all too familiar: we still need an argument to show why cognitivism is false, one that does not beg the question at a key premise.

26 Others have argued for the same conclusion, but from a different direction, by analyzing the connection principle's important proviso that it may be possible ‘in principle’ for a mental state to fall under the scope of consciousness, and then showing that mental syntax can be accessible in just this sense. See Ned Block, ‘Consciousness and Accessibility,’ 596; to a lesser extent Noam Chomsky, ‘Accessibility “in Principle,”’ 600-1; and David Rosenthal, ‘On Being Accessible to Consciousness,’ 621-2, all in Behavioral and Brain Sciences 13 (1990). Here, on the other hand, our focus is the causal-dispositional analysis and what it implies for the connection principle.

27 For details on the structural view, see Jaegwon Kim, ‘Causation, Nomic Subsumption, and the Concept of Event,’ Journal of Philosophy 70 (1973) 217-36; also Lawrence Lombard, Events: A Metaphysical Study (London: Routledge & Kegan Paul 1986).

28 I have not attempted to explain what would make [y,M,t'] a conscious event, only that if it is a conscious event, then the cause [x,P,t] could be reflected in the subject's conscious awareness by being represented in its conscious mental episodes. Parenthetically, those who believe that there can be noncausal representation would resist the stipulation in the first clause of (b) that [x,P,t] must have the ‘causal capacity’ to generate the conscious event. But I am genuinely uncommitted, and more than happy to waive the restriction. It will not affect the overall point I am trying to make in the text, viz., that the subject is typically unaware of both computational and neurophysical events.

29 Well, they may not fail by (b) either. For it might be possible ‘in principle’ for the computational events to fall within the scope of consciousness. See the references in n. 26.

30 I should like to thank Charles Carr, Thomas Grimes, Terence Horgan, Jaegwon Kim, and also an anonymous referee for their helpful comments on an earlier draft of this paper.