
3 - Philosophical challenges

Published online by Cambridge University Press: 05 July 2014

William S. Robinson, Iowa State University
Keith Frankish, The Open University, Milton Keynes
William M. Ramsey, University of Nevada, Las Vegas

Summary

Descartes (1637/1931, p. 116) held that our reason was a “universal instrument.” Since he believed that any mechanism has to have some special purpose, and that no collection of special-purpose mechanisms could be large enough to encompass all that reason can do, he concluded that no mechanism could instantiate human reason. Aquinas (1265–72, I, Q.75, a. 2) also argued that intellect was not provided by a material organ. He believed that a disease-induced bitter humor could interfere with our tasting sweetness, or any taste other than bitter. Analogously, he thought that if our intellects were material, they would be prevented from knowing material things of different natures.

Most contemporary philosophers would accept that our intelligence is provided by our material brains, and thus would be disinclined to challenge the possibility of artificially intelligent devices on the ground of their materiality. The questions and problems about artificial intelligence that remain can be divided into those that are largely independent of particular approaches to AI, and those that are prompted by more specific ideas about artificially realizable cognitive architectures. We shall begin with the more general issues.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2014


References

Carter, M. (2007). Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence. Edinburgh University Press. Introduces basic concepts in philosophy of mind and the computational theory of mind, which is defended.
Chalmers, D. and Bourget, D. (repeatedly updated). “Philosophy of Artificial Intelligence.” A comprehensive bibliography of papers, organized by many sections and subsections covering all aspects under its title.
Cole, D. (2004, rev. 2009). The Chinese Room argument, in Zalta, E. N. (ed.), The Stanford Encyclopedia of Philosophy. Explains the argument and the replies to it, and makes connections to larger philosophical issues.
Garson, J. (1997, rev. 2010). Connectionism, in Zalta, E. N. (ed.), The Stanford Encyclopedia of Philosophy. Explains connectionist devices, their strengths and weaknesses, and issues between connectionism and classical approaches to AI.
Horst, S. (2003, rev. 2009). The computational theory of mind, in Zalta, E. N. (ed.), The Stanford Encyclopedia of Philosophy. Explains why the mind has been regarded as a computer, and reviews criticisms of that view.
Aquinas, T. (1265–72/1945). Summa Theologica, in Pegis, A. C. (tr.), Basic Writings of St. Thomas Aquinas. New York: Random House.
Ayer, A. J. (1954). Freedom and necessity, in Ayer, A. J., Philosophical Essays (pp. 271–84). London and Basingstoke: Macmillan.
Bechtel, W. (1998). Representation and cognitive explanations: Assessing the dynamicist’s challenge in cognitive science, Cognitive Science 22: 295–318.
Block, N. (1981). Psychologism and behaviorism, The Philosophical Review 90: 5–43.
Brooks, R. (1991). Intelligence without representation, Artificial Intelligence 47: 139–59.
Chalmers, D. J. (1993). Connectionism and compositionality: Why Fodor and Pylyshyn were wrong, Philosophical Psychology 6: 305–19.
Chalmers, D. J. (1996). Does a rock implement every finite-state automaton?, Synthese 108: 309–33.
Clark, A. (1993). Associative Engines: Connectionism, Concepts, and Representational Change. Cambridge, MA: MIT Press.
Dennett, D. C. (1973). Mechanism and responsibility, in Honderich, T. (ed.), Essays on Freedom of Action (pp. 157–84). London: Routledge and Kegan Paul. Reprinted in Dennett, D. C. (1978) Brainstorms (pp. 233–55). Cambridge, MA: Bradford Books.
Descartes, R. (1637/1931). Discourse on the Method of Rightly Conducting the Reason and Seeking for Truth in the Sciences, in Haldane, E. and Ross, G. R. T. (trs.), The Philosophical Works of Descartes, vol. 1. Cambridge University Press.
Dretske, F. (1995). Naturalizing the Mind. Cambridge, MA: MIT Press.
Dreyfus, H. L. (1972; rev. edn. 1979). What Computers Can’t Do. New York: Harper & Row.
Feldman, J. A. (1985). Connectionist models and their applications: Introduction, Cognitive Science 9: 1–2.
Fodor, J. A. (2000). The Mind Doesn’t Work That Way. Cambridge, MA: MIT Press.
Fodor, J. A. and Pylyshyn, Z. (1988). Connectionism and cognitive architecture: A critical analysis, in Pinker, S. and Mehler, J. (eds.), Connections and Symbols (pp. 3–71). Cambridge, MA: MIT Press.
Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I, Monatshefte für Mathematik und Physik 38: 173–98.
Grush, R. (1997). The architecture of representation, Philosophical Psychology 10: 5–23.
Harnad, S. (1990). The symbol grounding problem, Physica D 42: 335–46.
Hasker, W. (1999). The Emergent Self. Ithaca and London: Cornell University Press.
Haugeland, J. (ed.) (1981). Mind Design. Montgomery, VT: Bradford Books.
Hume, D. (1748). An Enquiry Concerning Human Understanding. Full text available online.
Kaelbling, L. P., Littman, M. L., and Moore, A. W. (1996). Reinforcement learning: A survey, Journal of Artificial Intelligence Research 4: 237–85.
Lucas, J. R. (1961). Minds, machines and Gödel, Philosophy 36: 120–34.
McClelland, J. L. and Patterson, K. (2002a). ‘Words or Rules’ cannot exploit the regularity in exceptions: A reply to Pinker and Ullman, Trends in Cognitive Science 6: 464–65.
McClelland, J. L. and Patterson, K. (2002b). Rules or connections in past-tense inflections: What does the evidence rule out?, Trends in Cognitive Science 6: 465–72.
McDermott, D. (1976). Artificial intelligence meets natural stupidity, SIGART Newsletter, no. 57: 4–9.
Nolfi, S. and Floreano, D. (2000). Evolutionary Robotics: The Biology, Intelligence and Technology of Self-Organizing Machines. Cambridge, MA: MIT Press.
Penrose, R. (1989). The Emperor’s New Mind. Oxford University Press.
Penrose, R. (1994). Shadows of the Mind. Oxford University Press.
Piccinini, G. (2010). Computation in physical systems, in Zalta, E. N. (ed.), The Stanford Encyclopedia of Philosophy (Fall 2010 edn.).
Pinker, S. and Prince, A. (1988). On language and connectionism: Analysis of a parallel distributed model of language acquisition, in Pinker, S. and Mehler, J. (eds.), Connections and Symbols (pp. 73–193). Cambridge, MA: MIT Press.
Pinker, S. and Ullman, M. T. (2002a). The past and future of the past tense, Trends in Cognitive Science 6: 456–63.
Pinker, S. and Ullman, M. T. (2002b). Combination and structure, not gradedness, is the issue: Reply to McClelland and Patterson, Trends in Cognitive Science 6: 472–4.
Pollack, J. (1988). Recursive auto-associative memory: Devising compositional distributed representations, in Proceedings of the 10th Annual Conference of the Cognitive Science Society. Mahwah, NJ: L. Erlbaum.
Pollack, J. (1990). Recursive distributed representations, Artificial Intelligence 46: 77–105.
Porta, J. M. and Celaya, E. (2005). Reinforcement learning for agents with many sensors and actuators acting in categorizable environments, Journal of Artificial Intelligence Research 23: 79–122.
Putnam, H. (1960). Review of Nagel and Newman, Gödel’s Proof, Philosophy of Science 27: 205–7.
Ramsey, W., Stich, S., and Garon, J. (1990). Connectionism, eliminativism and the future of folk psychology, in Tomberlin, J. E. (ed.), Philosophical Perspectives 4: 499–533.
Robinson, W. S. (1992). Computers, Minds and Robots. Philadelphia: Temple University Press.
Robinson, W. S. (1995). Mild realism, causation, and folk psychology, Philosophical Psychology 8: 167–87.
Robinson, W. S. (1996). Review of Roger Penrose, Shadows of the Mind, Philosophical Psychology 9: 119–22.
Rumelhart, D. E. and McClelland, J. L. (1986). On learning the past tenses of English verbs, in Rumelhart, D. E., McClelland, J. L., and the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 2 (pp. 216–71). Cambridge, MA: MIT Press.
Schank, R. C. and Abelson, R. P. (1977). Scripts, Plans, Goals, and Understanding. Hillsdale, NJ: L. Erlbaum.
Searle, J. (1980). Minds, brains, and programs, The Behavioral and Brain Sciences 3: 417–24.
Searle, J. (1992). The Rediscovery of the Mind. Cambridge, MA: MIT Press.
Smolensky, P. (1995). Constituent structure and explanation in an integrated connectionist/symbolic cognitive architecture, in MacDonald, C. and MacDonald, G. (eds.), Connectionism: Debates on Psychological Explanation (pp. 223–90). Oxford: Blackwell.
Smolensky, P. and Legendre, G. (2006). The Harmonic Mind. Cambridge, MA: MIT Press.
Sutton, R. S. and Barto, A. G. (1998). Reinforcement Learning: An Introduction. Cambridge, MA: MIT Press.
Turing, A. (1950). Computing machinery and intelligence, Mind 59: 433–60.
van Gelder, T. (1997). Dynamics and cognition, in Haugeland, J. (ed.), Mind Design II (pp. 421–50). Cambridge, MA: MIT Press.
