Published online by Cambridge University Press: 01 January 2020
It is well known that informational theories of representation have trouble accounting for error. Informational semantics is a family of theories attempting a naturalistic, unashamedly reductive explanation of the semantic and intentional properties of thought and language. Most simply, the informational approach explains truth-conditional content in terms of causal, nomic, or simply regular correlation between a representation and a state of affairs. The central work is Dretske (1981), and the theory was largely developed at the University of Wisconsin by Fred Dretske, Dennis Stampe, and Berent Enç. Recently, informational semantics has roamed far beyond its Wisconsin home and built a sizeable collection of followers. Converts include Jerry Fodor (1987), Robert Stalnaker (1984), and, less faithfully, Paul and Patricia Churchland (1983) and Hartry Field (1986). But for some years informational semantics has been dogged by a problem with error – the classic presentation is Fodor (1984) – and no other problem has hounded the theory so persistently.
Editor’s Note: Readers will be interested to know that this paper is a greatly compressed version of another paper, ‘The Problem of Error in Informational Theories of Representation,’ that won the Rutgers International Prize in Philosophy in 1987. The Rutgers Prize is for an undergraduate essay in philosophy, and is organized by Rutgers University.