
Philosophical Research: Problems and Prospects

Published online by Cambridge University Press:  01 January 2024

Copyright © ICPHS 2016

The exegetical turn

It is not easy to reach an overview of as complex and intricate a subject as the state of philosophical research in the world today. In this survey, I have to restrict my attention mostly to what in Scandinavia and Scotland would traditionally be called “theoretical philosophy,” that is, to epistemology, metaphysics, logic, philosophy of science, philosophy of mind, and philosophy of language. In these fields, the global philosophical scene has been dominated by European and American traditions. Inevitably, that perspective will be reflected in this essay as well.

The world of philosophy can perhaps be seen as a microcosm of the world at large. In the course of the last few decades, the world has seen the collapse of the communist system in Russia, a major crisis of the free-market economy in the USA, Europe, and Japan, and massive economic changes in China. One perspective on contemporary philosophical research is reached by asking what crises the major philosophical traditions, if not literally “systems,” are likewise undergoing and what can be done to find a road ahead. What might a “stimulus package” for philosophy look like (apart from inevitably being controversial)?

Indeed, in the course of the last several years, a definite picture of the state of philosophical research has crystallized in my mind. Philosophical research is in a crisis at least in the sense of lacking direction or directions. This impression is not based only or even primarily on an evaluation of the research that is being done in different subfields of philosophy in different parts of the world. Admittedly, there is by any measure a mass of careful and often high-quality scholarly work going on around the world. What is missing is an awareness of ideas that could point out goals for philosophical research and open doors for reaching them.

The clearest indication of the worrisome state of the subject is philosophers’ self-image, their conception of what the nature of philosophical thinking is. Philosophy used to be considered to be of a piece with other efforts to find out the secrets of reality, variously including nature, the human mind, the world of concepts and ideas, and perhaps also the divine, if there be such an aspect of reality. A philosopher was like a scientist in that he or she was searching for truth, albeit perhaps a different and higher kind of truth than is pursued in the departmental sciences. In our day, the predominant paradigm of a philosopher's activity is not scientific inquiry, but rather exegesis of sacred texts or perhaps creative interpretation of the great works of world literature. We might call this “the exegetical turn.” Even in the study of the history of philosophy, many contemporary scholars seem to be satisfied with giving a “reading” of the works of a classical or even of a contemporary philosopher rather than inquiring what that thinker actually meant or what that philosopher's insights might contribute to our understanding of the subject matter. More generally, in the study of the history of philosophy this change in the aims of philosophizing manifests itself in the effective abandonment of the search for historical truth. Instead of concentrating on the question of what this or that thinker actually meant, philosophers far too often project their own ideas and problems into other philosophers’ texts. This tendency is not new. In Oxford, this or that philosopher was sometimes accused of treating Aristotle and Plato as if they were “fellows of another college.” But the tendency has become stronger and more widespread. Far too often, philosophers doing history have concentrated on their own problems and ideas and have left general intellectual history to be taken over by history or humanities departments instead of philosophy departments.

One prominent victim of this neglect and disregard of a philosopher's real intentions is probably Ludwig Wittgenstein. He himself complained that in intellectual life he was an outlaw who could be treated in any way an interpreter chose. He was right. Several of the main lines of interpreting Wittgenstein are seriously wrong. He did not maintain that language is a social phenomenon in the sense of presupposing a language community. He did not ask whether, and if so how, one can know the rule one is following; instead, he asked how an embodied rule can guide my actions. He did not claim that his own philosophy was nonsense, except in the sense that the semantics of a language is according to him inexpressible in that same language.

A partial way of counteracting such temptations would be keener attention by philosophers to semantic history. In this respect, serious consciousness-raising is in order. Far too many philosophers seem to be unaware, for instance, of what precisely Hume meant by “sympathy” or Newton by “induction.”

In some important cases, the best payoff of being able to understand earlier philosophers’ ideas is being able to diagnose their mistakes and to correct them. There is, for instance, a substantial body of work on the founding fathers of analytic philosophy, especially Frege. However, the timeliest insights in his thought no longer concern his constructive ideas, important as they were in his time, but his mistakes. Frege did not recognize an important dimension of the semantics of quantifiers, viz. their role in expressing relations of dependence and independence between variables. This failure was instrumental in leading set theory into its foundational problems. Frege did not recognize the need for functions as nonlogical primitives and consequently ran into problems about identity statements. Frege did not understand the nature of the axiomatic method as it is used in mathematical theorizing. He did not appreciate the characteristics of higher-order logic as distinguished from first-order logic. He thought that he could handle higher-order logic simply by assuming (in his ill-fated Basic Law V) that sets could be taken to be values of first-order quantifiers, which led to the paradoxes that undid his Grundgesetze. Frege believed in the inexpressibility of truth and other semantical notions. In each of these instances, recognizing Frege's shortcomings is part and parcel of a substantial new conceptual insight.
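
For reference, a standard modern rendering of the principle just mentioned (a textbook formulation, not the article's own): Basic Law V says that the extensions (“value-ranges”) of two concepts coincide exactly when the concepts are coextensive,

\varepsilon F = \varepsilon G \;\leftrightarrow\; \forall x\,(Fx \leftrightarrow Gx),

which treats every extension εF as an ordinary first-order object. Russell's paradox then arises by considering the extension of the concept “x is the extension of some concept under which x does not fall”; asking whether that object falls under its own defining concept yields a contradiction.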

This exegetical turn in the practice of philosophy has led to a disproportionate emphasis on the history of philosophy and a weakening of the interaction between philosophers and representatives of other disciplines. I cannot help considering these developments disconcerting. I am not questioning the importance of historical awareness for topical pursuits in philosophy, an importance I can testify to on the basis of my own experience. But it is equally important to realize that, conversely, major new insights into the history of philosophy itself are only possible with the help of enhanced systematic insights. For instance, I do not think that one can do full justice to Aristotle's science of “being qua being” without understanding the logic of being better than most previous commentators have done.

The weakening of genuine interaction between philosophers on the one hand and scientists and mathematicians on the other has led to a tremendous loss of the relevance of philosophical research in a wider perspective. A hundred years ago, most leading mathematicians—Poincaré, Hilbert, Brouwer, Weyl, Borel, you name them—were intensively involved in discussions about the foundations of mathematics, because they found those issues to be of vital importance for their own subject. Today most working mathematicians could not care less about, say, debates about realism in mathematics.

In the sense illustrated by this example, the challenges that have kept philosophical research going have often come from the sciences or mathematics. Philosophy will be much poorer in the future if it does not rise to such challenges.

Truth, paradoxes, and intuition

One symptom of the changed attitude to the mission of philosophy is that the traditional “philosophical problems” are not treated as questions to be answered but as themes that inspire ever-new renderings, like performances of classical compositions. An especially clear-cut example is offered by the traditional (and newer) logico-philosophical paradoxes, such as those of the liar and of the heap, Moore's paradox of saying and disbelieving, the paradoxes of infinity, or the paradox of future contingents. There are good reasons to cultivate awareness and appreciation of such paradoxes: they are important historically, and they may also be the best way of introducing students to the problems to which they belong. But the fact is that there exist definitive solutions to most of the traditional paradoxes, which means that the practice of presenting them as serious research topics is little short of ridiculous.

A specific symptom of this syndrome is the fate of the notion of truth in philosophical discussion. Truth has lost its crucial role. Some so-called theories of truth seek to explain away truth in terms like “assertability” or “coherence.” Instead of construing philosophical activity as a search for truth, some philosophers are turning it into a form of cultural discourse for the sake of discourse. The fate of the concept of truth is symptomatic in other respects, too, as will be seen later in this survey.

Yet another symptom of this resigned view of the task of philosophical research is the prevalence of the idea of philosophy as explication of our ideas, concepts, and “intuitions.” Such explication is undoubtedly a part of the task of philosophical analysis, but it is not even a self-supporting enterprise. Merely expressing oneself more clearly is not explication. For that purpose, we need a more comprehensive and more illuminating framework into which to translate our ideas. It is wishful thinking to assume that a suitable framework already exists somewhere in the murky depths of our minds. And the construction of such a framework is no longer a matter of mere explication.

The preferred methodology of the proponents of philosophy as explication usually consists of appeals to intuition. Such appeals are, more generally speaking, a staple in the argumentative practice of most analytic philosophers. Unfortunately, this practice is seriously flawed. An acceptance, frequently tacit, of the idea of philosophy as explication has led philosophers to misconstrue the epistemological status of intuition. Intuition is best understood not as a source of truths or of evidence, but as a legitimate source of promising but usually tentative insights, not unlike C.S. Peirce's “faculty of guessing right.” Unfortunately, such guesses have evidential value only if the thinker in question has some independent reason to believe that intuition can give genuine information. Earlier philosophers relying on intuition (in so many words or in effect) typically had some reason to think so. Aristotle believed in the actual realization of forms in the mind, rationalists believed in innate ideas, Kant argued that the forms of sense-perception are imposed on all our intuitions, and G.E. Moore believed in the mind's direct (“intuitive”) access to the objects of awareness. Alas, contemporary thinkers cannot offer any such backing for their intuitions, with the exception of Noam Chomsky, the Cartesian linguist. Hence their appeals to intuition cannot have any evidential value. The entire practice of so using intuition ought to be stopped, for it seriously hurts the prospects of philosophical research producing interesting, let alone valid, results. Yet it is only recently that serious questions have been raised in philosophical discussion about the reliance on intuitions.

Recently, an organized group of philosophers has rejected the practice of appealing to one's own intuitions and instead advocated (under the title “experimental philosophy”) an empirical study of people's intuitions in general. This research strategy is not new. It was anticipated and practiced by Arne Næss as early as the 1930s. It is a step in the right direction, but it does not mitigate the general objections to appeals to intuition in the first place.

The practice of appealing to intuitions is dangerous also because it can lead to the arrogant illusion that philosophers have a special access to important truths. One form that this illusion has taken is a belief in the existence of a special metaphysical necessity that can be found by intuition. Another, related, one is the belief that we can reach metaphysical truths by examining postulated possible worlds. Empirically oriented thinkers used to ridicule the type of German metaphysician who was taken to try to find great truths “in der Tiefe seines Bewusstseins” (in the depths of his consciousness). Some recent developments within analytic philosophy are open to similar ridicule.

The theory of truth also offers an object lesson about another worrisome development in the world of philosophical research. This development is the splitting of philosophical efforts, not so much into different “schools” in the conventional sense, but into interest groups that are alienated from each other. Recent discussions of so-called “theories of truth” are predicated on the assumption that in the philosophically relevant sense truth is not explicitly definable. This assumption goes back to Alfred Tarski's groundbreaking 1935 monograph on truth. While Tarski's impossibility result is technically correct, it presupposes an unnecessarily narrow concept of logic and therefore does not have the slightest relevance to the definability of truth for philosophical purposes. Yet the participants in the discussion seem to be either blissfully ignorant of the newer results or (more likely) to be refusing to let the truth about truth disturb their parochial preoccupations.
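
For orientation, the standard textbook formulation of the result in question (the notation is mine, not the article's): an adequate truth predicate should satisfy every instance of Tarski's schema

\mathrm{True}(\ulcorner \varphi \urcorner) \;\leftrightarrow\; \varphi,

where ⌜φ⌝ is a name (for instance, a Gödel number) of the sentence φ. Tarski's theorem says that for a sufficiently rich first-order language, such as the language of arithmetic, no formula of that same language can satisfy all instances of the schema, on pain of a liar-type contradiction; on Tarski's own approach the truth predicate therefore has to be located in a richer metalanguage.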

The menace of fragmentation

Sometimes recent philosophers have congratulated themselves on the absence of sharp, often bitter disputes between different traditions or “schools.” I am not sure that this relatively peaceful coexistence is an entirely healthy sign. One possible explanation is that philosophers are tolerant of other views because they are not sure of the truth of their own ideas or because, at bottom, they do not care about their truth so very much in the first place.

The absence of open disputes does not seem to have helped with the perennial fragmentation of philosophical efforts into different traditions and “schools,” either. A new source of fragmentation has been an expectation, not to say a pressure, that philosophy should be “relevant.” It is a sign of intellectual health that philosophical thought has been found applicable in such endeavors as medical ethics, business ethics, and environmental ethics, and that it has addressed itself to important social issues in the form of feminist philosophy. Far too often it has nevertheless been forgotten that such forms of “applied philosophy” can only be cultivated on the basis of solid theoretical work. For instance, medical ethics should be thought of as a part of the philosophy of the medical sciences and not as a branch of general ethics. It is also sometimes forgotten that the aims of such applications are not necessarily a part of the ultimate aims of philosophy. For instance, a pioneering feminist philosopher once said that the truly final success of feminist philosophy would be to make itself unnecessary.

One of the most serious consequences of fragmentation is a lack of cooperation and even of mutual understanding. Sometimes the absurdity of such fragmentation is shown up by an enhanced historical awareness. For instance, the arch-positivist Ernst Mach and the founder of phenomenology, Edmund Husserl, might seem to be nearly opposite poles on the map of the philosophy of their time. Yet Husserl tells us in so many words that his phenomenology is but a further continuation and radicalization of the developments in the philosophy of science represented by the likes of Mach and Hering (plus analogous developments in the philosophy of mind). Nevertheless, in a relatively recent Encyclopedia of Phenomenology one does not even find Mach in the index.

It also seems to me that the basic reason for this implicit or explicit rejection of the traditional aims of philosophy is frustration. We are dealing with a sour-grapes reaction to the relative lack of major breakthrough ideas in different subfields of philosophy. This leads to the all-important question of whether the sense of frustration is objectively justified. What are the prospects of different kinds of philosophical theorizing? In the following, I will try to diagnose the difficulties that some main contemporary philosophical movements are having and suggest some ways of overcoming them.

Whither hermeneutics?

For one example, take the hermeneutical tradition. It is inspired by the deep and suggestive idea of approaching the reality that philosophical thought has to face in the same way we approach a text to be interpreted. (A chronicle of this idea is given, e.g., in Blumenberg, 1981.) This idea is not new, nor is it the monopoly of any one philosophical school. One can, for instance, view the philosophy of symbolic forms in this light. (No wonder Heidegger perceived Ernst Cassirer as an important rival.) Even a scientist like Galileo could speak of nature as a book written in mathematical symbols. What characterizes the twentieth-century hermeneutical tradition is the conviction that the meanings of these “texts” cannot be expressed and discussed in ordinary discursive language and thought. As a consequence, a hermeneutical thinker must approach his or her interpretational task by non-discursive means, perhaps by a special nonliteral use of language, as in Heidegger. The role of the idea of the ineffability of meanings in Heidegger and his philosophical neighbors is studied in Martin Kusch (1989) as an extension of the idea in language theory that the meanings (the semantics) of a language cannot be expressed in the same language. I have examined the historical role of this idea in the essays collected in the volume Lingua Universalis vs. Calculus Ratiocinator (Hintikka, 1997). There are even results in formal semantics that seem to support this hermeneutical vision, most notably Tarski's famous result according to which truth cannot be defined for what is known as a first-order language within that same language. This result seems highly significant, for truth is one of the most basic concepts of semantics.

Alas, as was already pointed out, Tarski's theorem is due merely to the poverty of the languages he was considering. There is every reason to think that truth can be defined in every sufficiently rich language for that language itself, including our own working language. Hence there is no need to resort to a metalanguage or to a separate hermeneutical approach.

These results constitute a neat example of how explicit analytical work can put an entire philosophical tradition in a new perspective. They suggest very strongly that the soi-disant hermeneutical philosophers’ rejection of discursive, especially logical, methods for the purposes of interpretation is ill conceived. Methodologically, everything should be permissible not only in the proverbial duo of love and war but also in hermeneutics. Only in this way can the hermeneutical tradition carry out its own best insights.

Perhaps this methodological liberalization has already been happening. Gadamer loosened up Heidegger's methodological strictness. Among other things, he admitted Collingwood's “logic of questions and answers” as one of the main tools of his hermeneutical trade (perhaps inspired by Heidegger's interest in the question and questioning). Collingwood's concept of logic might not have been the same as Tarski's, but his theory has meanwhile been systematized and extended into an epistemic logic that has already proved its mettle as the foundation of a new approach to epistemology. Perhaps this logic, strictly understood and suitably developed, is what hermeneutical philosophy needs in the future. In my considered view, it should in any case be part of the next serious “stimulus package” in philosophy. One interesting development in this direction already exists in the form of Michel Meyer's “problematology.”

In view of the rejection of normal rational methods by the orthodox hermeneutical tradition, it is not surprising that this movement has become a true hunting ground for all sorts of different interpretations and approaches. It is hopelessly difficult to grasp which of these sundry ideas are anywhere close to the actual insights that gave rise to the hermeneutical approach. The difficulty does not lie in the superficially strange idiom of thinkers like Heidegger. It lies in the ideas that prompted the use of the oblique language. The hermeneutical tradition should sign a methodological truce with the Platonic tradition that sees rational methods like mathematics as a prerequisite for entering the realm of serious philosophy.

Whither phenomenology?

Other traditions require different diagnoses and different prescriptions. Consider one of the main traditions, the phenomenological one. There is among philosophers considerable confusion concerning the precise meaning of phenomenology. Contrary to some writers’ assumptions, phenomenology must be sharply distinguished from phenomenalism. It does not maintain that only phenomena are real. On the contrary, the characteristic idea is that experience gives us direct access to part of reality. If you call what can be so accessed “phenomena,” then you have to say that phenomena can be part of mind-independent reality, in the same way as, e.g., Bertrand Russell's sense-data were denizens of the physical world.

Some things are in any case clear. As the name indicates, one of its central ideas is to go back to the phenomena, that is, to what is immediately given to my consciousness. The rest of my cognitive world is constructed or, as the phenomenologists’ term goes, constituted from “the given.” This conceptual basis of the total structure of our knowledge is reached by the phenomenological reductions, in the first place by the transcendental reduction.

What are the prospects of such an approach? Philosophers have repeatedly raised doubts about the viability of the idea of the given. Much more sweepingly, the massive fact is that contemporary neuroscience has revealed that the most primitive and apparently unedited data of consciousness are in reality products of an enormously complicated processing by our central nervous system. Even such seemingly simple experiences as color perception require complicated processing of optical input by specific centers in the brain. A patient can therefore lose the use of color concepts while retaining perfect color vision.

When this overwhelming fact is realized, it becomes obvious that a narrowly construed phenomenological approach is useless for foundational purposes. The simplest phenomena that can be reached in consciousness cannot for instance claim any special epistemological status, such as infallibility. Ironically, in certain earlier periods the term “phenomena” was actually used to refer to a much wider input into our cognitive process than the purely phenomenalistic one. For instance, Newton's “phenomena” included the results of controlled experiments.

This might seem to be enough to kill phenomenology for good methodologically, or at least to destroy any relevance that it might claim for our actual knowledge acquisition. Fortunately, this need not be the case. However, a new perspective on the methodological situation is needed. Here a generalization of David Marr's instructive methodological trichotomy of the aspects of scientific inquiry into human cognitive processes is helpful (Marr, 1982). To study any such cognitive process, a scientist must first spell out what, conceptually speaking, the process produces. In Marr's case, vision must produce a three-dimensional representation of what is seen. This is analogous to asking what function, considered as a mathematical function, your computer should compute. Only when this has been spelled out can you ask what algorithm should be programmed into the machine for the purpose. And only as a third stage can we ask how that algorithm is implemented in the hardware of the computer. Marr finds analogies to these three interrelated stages in his study of human vision.
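
A toy illustration of the computer analogy, in the spirit of Marr's first two levels (my own sketch, not Marr's example): the computational level fixes which function is to be computed, the algorithmic level fixes one particular way of computing it, and the implementation level (silicon, or neurons) is not represented in the sketch at all.

# Computational level: a specification of WHAT function is to be computed.
def spec_sorted(xs):
    """The task, specified extensionally: return the items in non-decreasing order."""
    return sorted(xs)

# Algorithmic level: HOW one particular procedure computes that same function.
def insertion_sort(xs):
    """One of many possible algorithms realizing the specification above."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# The same function, two levels of description; the hardware level is absent here.
assert insertion_sort([3, 1, 2]) == spec_sorted([3, 1, 2])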

If we accept Marr's trichotomy, even the most naturalistic epistemologist must spell out what it is conceptually, in typical cases logically and mathematically, that our human cognitive systems have to accomplish. And this task will prominently include recognizing what conscious knowledge is, conceptually speaking, that is, the outcome of the processes that human cognitive systems carry out. This recognition requires self-reflection on, and analysis of, what we think and know. Phenomenology can survive as a study of phenomena if those phenomena are thought of as the conceptual fabric of the output of the human constitutive process, not of its consciously inaccessible input. The project of phenomenology should be reversed. Instead of trying to register the input into human cognitive processes, phenomenologists are well advised to study their output. Perhaps one could even suggest that this is what rightly understood phenomenology has always been at its best.

This means an assimilation of phenomenological reflection and the kind of conceptual analysis usually associated with some types of “analytic” philosophy to each other. But perhaps there always was a connection between the two apparently different traditions. As was pointed out, Edmund Husserl acknowledged in so many words that his phenomenology is a continuation and radicalization of lines of thought represented among others by the positivist Ernst Mach. The further developments that Husserl talks about can perhaps be compared to the replacement, in the analytical tradition, of Mach's phenomenology by more sophisticated conceptual tools.

But once again, what looks like a radically new perspective can be seen as an integral part of the original theory. Phenomenology has been interpreted as a theory of intentionality, that is, as a generalized meaning theory, partly analogous to Frege's. In this analogy, Fregean senses (Sinne) are supposed to correspond to Husserl's noemata. But in both cases the precise nature of these meaning entities is far from obvious. Thus a clarification of this crucial question is needed before we can understand the nature of phenomenology and evaluate its prospects. The focus of this problem in phenomenology is marked by the notion of eidetic reduction. An interpretation of phenomenology as a theory of intentionality makes it awkward to see it as an attempt to base ultimately everything on the given through phenomenological reductions. For one does not intend or mean the objects of immediate experience; one has those objects present in one's consciousness. The phenomenological reductions thus seem to show that phenomenology is not calculated to emphasize intentionality but to minimize its role.

Furthermore, how can a noema or any other universal be literally present in one's consciousness? Phenomenologists claimed to be able to extract all the basic ingredients from experience. But experience seems to be always about particular objects. And even if it is admitted that one can have experiences involving universals, there is a difficulty. A universal bears some intrinsic, necessary relations to other universals. These relationships must also be given to me in experience. One presumably acquires the concept of the number five from seeing configurations of five objects. One can in some sense “see” the number five there. But can one in any sense see, as a part of that experience, that 5+7=12? Husserl postulated a faculty for doing such things in his notion of Wesensschau (“seeing of essences”). But what is this mysterious faculty? Are noemata objects of intentionality or its mediators?

The most clear-cut attempt to deal with this predicament is Aristotle's theory of forms as being actually realized in the soul. It seems to me that one instructive perspective on the phenomenological notions of essence and Wesensschau is to think of them as attempts to revive the entire Aristotelian idea of form. Such a form is an objectively existing entity, in some cases a perceivable one, that can for instance constitute the identity of an external object. At the same time, a form can be actually and completely realized in the soul. When it is, it does not represent the object of thought; it is formally identical with the object. In phenomenology this idea lives on in the form (no pun intended) of the so-called intentional object. The intentional relation is not mediated by meaning entities; it consists in sharing a form. An actual historical link between Aristotle and latter-day phenomenologists is found in Brentano. I have jokingly (but not entirely jokingly) referred to phenomenologists as “raiders of the lost forms.”

The Aristotelian forms have indeed been lost in transit. No contemporary thinker swallows the entire Aristotelian metaphysics of which those forms were an aspect. Hence phenomenology cannot be considered to have a satisfactory theoretical foundation until the problem of the mode of existence and mode of knowability of general concepts, which affects the very gist of the phenomenological approach, is cleared up. An acknowledgement and critical analysis of its Aristotelian sources could perhaps help phenomenology steer its course into clearer waters.

The heritage of logical positivism

The most important intellectual challenge to philosophy in the twentieth century was the revolutionary development of science. It remains a challenge to philosophical research. Members of different traditions have tried to respond to this challenge, including prominently neo-Kantians like Ernst Cassirer and phenomenological thinkers like Edmund Husserl or Hermann Weyl. The most sustained effort to master intellectually the new physical and mathematical theories was nevertheless made by the logical positivists, also known as logical empiricists. This movement was spearheaded by the loosely organized group of philosophers, mathematicians, and scientists known as the Vienna Circle. It flourished around 1930, and even after its members had to flee Europe, it remained one of the most important developments in the English-speaking world in the first couple of decades after World War II.

What is the legacy of this movement for contemporary research in philosophy? This question can be approached by asking another one: Why did logical positivism die, or at least fade away? There are plenty of external reasons, prominently the diaspora of its members, which destroyed effective cooperation. However, one can also find important internal reasons. Very broadly speaking, what did the logical positivists promise to do? In their own jargon, they promised in the first place to clear up all the conceptual problems in the philosophy of science and in the philosophy of mathematics through the study of the logical syntax of the language of science (and mathematics). Did they succeed? The logical positivists and their allies did a great deal of valuable and sometimes groundbreaking work in logic and epistemology. But the overall answer to the question of whether they fulfilled their ambition must be: no. This failure is the internal reason for the demise of the movement. It is nevertheless instructive to try to imagine what would have happened if they had, for instance, solved all the interpretational problems of quantum theory and carried out some version of Hilbert's program in the foundations of mathematics. If that had happened, I am tempted to say, we might perhaps all be logical positivists.

But what does the fate of logical positivism tell us about the present-day prospects of philosophical research? Why did this movement fail? Several different answers are on the market, but have they produced a better prescription for future philosophy? Where do we stand, anyway? Often philosophers talk as if we are now finally overcoming the restrictive influence of logical positivism. This is the wrong perspective. What is going on at the present time is not the end of positivism, but the end of the main reaction against logical positivism in analytic philosophy. This reaction took different forms, represented by such philosophers as Karl R. Popper, W.V.O. Quine, and Thomas Kuhn, and it is their shortcomings that are now becoming obvious. Popper had several excellent ideas, including the importance of attempted refutation in science, the central role of the concept of information, and the objectivity of abstract entities. Unfortunately, he never developed any of them in a way that would have opened major new avenues of research.

It is also becoming obvious that Quine's leading ideas are too simple and too few to guide philosophy to new insights. As far as Kuhn's ideas are concerned, he criticized the logical positivists for not being able to explain the actual development of science and emphasized the role of extra-scientific factors in that development. His ideas have not led to essentially new insights into the nature of the scientific enterprise, however. It is not clear that the role of extra-scientific factors that he emphasized is incompatible with the ideas of the logical positivists, who were focused not so much on the history of science as on its practice. I have argued that some of Kuhn's own work in the history of science could have been essentially enriched by deeper epistemological and logical insights. I have in mind his discussion of Planck's failure to put the concept of the quantum to use and of the relation of the mathematical and experimental traditions in early modern science.

It seems reasonable to conclude that the alternatives on the market to the approach of logical positivists have not been fully successful, either. Are there alternatives to these alternatives?

To logic or not to logic?

There is fairly general agreement that the project of the logical positivists did not fully succeed because of the inadequacy of their conceptual tools. This failure is often ascribed to the inadequacy of purely logical methods in philosophy, including the philosophy of science.

At first sight, this diagnosis might very well seem to be borne out by what has actually been found out. Hard results have revealed serious prima facie limitations in what such methods can do. The best-known results of this kind include Kurt Gödel's incompleteness results, which show that there cannot be a complete logical axiomatization of arithmetic or a proof of the consistency of arithmetic within arithmetic itself. These results have been taken to show that there are serious limitations to what can be done by logical and mathematical means, perhaps even limitations of the human mind itself, not to speak of the limitations of philosophical thinking. Closely related to Gödel's results are Alfred Tarski's results concerning the undefinability of truth that were discussed earlier in this survey. The apparently skeptical implications of Gödel's and Tarski's results have encouraged the pessimistic, not to say defeatist, tendencies noted at the beginning of this survey.
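
For definiteness, the standard textbook statements of the results just mentioned (the formulations are mine, not the article's): if T is a consistent, recursively axiomatizable theory containing elementary arithmetic, then (1) there is a sentence G_T of the language of T such that neither G_T nor \neg G_T is provable in T (the first incompleteness theorem), and (2) the consistency statement \mathrm{Con}(T) is not provable in T (the second incompleteness theorem). Tarski's closely related theorem says that the set of true sentences of first-order arithmetic is not definable by any formula of that same first-order language.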

A completely different perspective is nevertheless emerging. On this view, what went wrong with the logical positivists is not that they used too much logic but that they used too little. The logic that was available to them was not rich enough for the task.

One can in fact pinpoint some of the major flaws in the logical and mathematical tools that the positivists wielded, flaws that are only now being corrected. These flaws have also hampered the development of logic and the study of the foundations of mathematics.

One of them was Frege's mistake mentioned earlier: overlooking an important part of the semantical function of quantifiers. Quantifiers do not only range over a class of values. They also express the (actual) dependence and independence of their variables through their formal dependence and independence on each other.

As soon as this is realized, it is also seen that our received formulation of the logic of quantifiers is defective in that it does not allow the expression of all possible patterns of dependence and independence between quantifiers, and hence between variables. In particular, it does not allow a full implementation of the fundamental requirement on all definitions to the effect that the definiens (including its quantifiers) be independent of the definiendum. The central paradoxes of set theory are due to breaches of this requirement. Since Russell and others did not understand the notion of quantifier dependence, they were led to unnecessary and unilluminating theories like the ramified theory of types.
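
The standard textbook illustration of this limitation (my example, not the article's): in the ordinary first-order sentence

\forall x\, \exists y\, \forall z\, \exists w\; \varphi(x, y, z, w)

the witness for w may depend on both x and z. The branching (“Henkin”) pattern in which y depends only on x and w only on z, read via Skolem functions as

\exists f\, \exists g\, \forall x\, \forall z\; \varphi(x, f(x), z, g(z)),

has no equivalent in received first-order logic; in independence-friendly notation it can be written \forall x\, \forall z\, (\exists y / \forall z)(\exists w / \forall x)\; \varphi(x, y, z, w).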

Elsewhere in the foundations of mathematics, the current first-order axiomatizations of set theory are little better than misuses of the axiomatic method. As this method is used in mathematics, a class of structures is studied by capturing them as the models of an axiom system. But the models of a first-order axiom system (like the Zermelo-Fraenkel axiomatization) are structures of particular objects, not structures of sets. And it turns out that one cannot draw general conclusions about structures of sets by studying such models.

Frege's mistake has been corrected in what is known as independence-friendly logic and its further extensions. When developed far enough, they can also make both higher-order logic and set theory redundant for the foundations of mathematics. For instance, the so-called axiom of choice can (and apparently must) be considered as a valid first-order logical principle. There also seems to be reasonable hope that in this way we can not only correct the mistakes of our predecessors but also find our way to significant new results.

However, an entirely new vision has emerged. Instead of taking the negative results to show the limitations of logic or of the human mind tout court, one can take them to show the limitations of a particular conception of logic, of set theory, or of other traditional conceptual tools. This emerging perspective is being vindicated in different ways. For one thing, the received logic that Tarski was using, and which limits the scope of his results, has been shown to be too poor in the first place for the purposes of science, mathematics, or computer science. And if a richer logic is used, truth is no longer undefinable. This already makes a huge difference to the prospects of philosophical research, in the first place forcing philosophers to re-examine the entire discussion of “theories of truth.”

In other cases, pessimistic views are due to straightforward misunderstandings. A spectacular case in point is offered by the implications of Kurt Gödel's famous theorem proving the incompleteness of elementary arithmetic. This theorem is supposed to show the limitations of logic in mathematics and, by implication, of human thinking in general. In reality, it shows no such thing. Gödel's result allows us to capture all truths of elementary arithmetic as logical consequences of suitable axioms. What it shows is only that a digital automaton cannot mechanically enumerate all these truths. It brings out the limitations of computers, not of human beings or of their logic and mathematics, and certainly not any limitations of the human mind. It should worry hackers, not philosophers.

Earlier, it was pointed out how radically Ludwig Wittgenstein has been misunderstood in general philosophy. In the philosophy of mathematics, the same fate has befallen David Hilbert. He was not a formalist but an axiomatist. His crucial aim was not in the first place to create a deductive mechanism for mathematics, but to interpret mathematics as a study of configurations of concrete particular objects. One reason for his interest in formalism was simply that symbols and formulas offer an example of such concrete particulars. He was not interested in proof-theoretical consistency but in model-theoretical consistency (the existence of models). Contrary to a widespread belief, this overall aim of Hilbert's is not defeated by Gödel's results.

Thus the future of the tradition instantiated by the logical positivists depends on the progress made in rebuilding its conceptual tools and building better ones. This reconstruction has turned out to be much more surprising than I ever expected. By way of a quick overview, it can be pointed out that not only does the original basic logic, the Frege-Russell logic of quantifiers, have to be replaced by a richer logic; the overall structure of logic is also undergoing a change. From its original form, the logic of quantifiers slowly branched into traditional first-order logic as separated from traditional higher-order logic. But when first-order logic is enriched in a suitable way, it turns out to be capable of doing the job of higher-order logic. This same job is usually thought of as being done better by set theory. But the usual first-order axiomatic set theories have proved inadequate. Hence a radical unification and simplification is taking place in logical studies. All we need, in principle, is a suitable first-order logic.

In the briefest and simplest possible terms, our basic logic (the logic of quantifiers) has to be replaced by a richer one. Suitably extended, this new first-order logic makes both higher-order logic and set theory dispensable in principle.

(Un)bounded rationality?

These changes are only beginning, and much remains to be done. In any case, these developments belie much of the currently fashionable talk about inevitable cognitive limitations of the human mind. Apparent limitations in this direction have received plenty of other kinds of attention recently. If there existed such restrictions, they would cast a critical light on the very idea of rationality. In fact, the term “bounded rationality” is one of the most frequently used terms in cognitive psychology and decision theory.

This development poses an important challenge to philosophical research, both intellectually and because of its implications in the fields of economics and politics. What is there to be said in critical philosophical terms? Perhaps the clearest case study is offered by the theory of cognitive fallacies developed by Amos Tversky and Daniel Kahneman. (Kahneman later received the Nobel Memorial Prize in economics for this work.) Although much more work is needed, it is becoming clear in the light of closer epistemological and logical analysis that the “fallacies” Tversky and Kahneman highlight need not be fallacious at all, depending on circumstances. Here philosophical research is faced with a task that is both intellectually and ideologically important.
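
To fix ideas, the best-known of these alleged fallacies is the conjunction fallacy (the “Linda problem”), in which subjects judge a conjunction to be more probable than one of its conjuncts, in apparent violation of the elementary law

P(A \wedge B) \le P(A).

Whether a given answer really violates this law, however, depends on the circumstances, for instance on how the subject interprets the options offered; this is the kind of circumstance-dependence alluded to above.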

A belief in inevitable built-in limitations of rationality easily leads to another harmful bias in philosophers’ work. Many of the worst ills in the world today, such as economic crises, have been attributed to cognitive mistakes. If so, a radical long-term cure would have to be an education in better thinking. Such an education in “reasoning and critical thinking” is one of the main pedagogical functions of philosophy. Indeed, undergraduate courses with this title have been a staple in American universities in the last several decades. Unfortunately, philosophical research has not been sufficiently guided by the needs of this important educational mission of philosophy. In spite of there being ample intellectual challenges in the general theory of reasoning and argumentation, most of the leading logicians, epistemologists, and methodologists have not addressed them. (There are exemplary exceptions, of course, for instance Patrick Suppes.) For instance, you do not find any satisfactory examination of “how possible” reasoning (as distinguished from “why necessary” reasoning) in the earlier literature, let alone in textbooks of logic or reasoning, in spite of the great practical significance of such reasoning.

Generally speaking, we do not have any intellectually satisfactory, generally accepted theory of ampliative reasoning. Most of philosophers’ theorizing about knowledge-seeking is naturally applicable only in scientific contexts. There are theories designed to back up the teaching of reasoning and critical thinking, such as theories of “informal logic.” They can perhaps be useful in pedagogical practice, but from a more demanding philosophical point of view they remain on a do-it-yourself level.

It is not far-fetched either to suspect that theories of inevitable irrationality tend to discourage constructive research in this direction by belittling its applicability. But if one is not a fatalist in this respect, one can see here a wide-open and challenging field for high-powered philosophical research.

Does a computer think? Does a thinker but compute?

One of the most active research areas in recent philosophy is the philosophy of mind, including philosophically relevant work done under other titles in cognitive science, cognitive psychology, and cognitive neuroscience. In some ways an observer must extend his or her perspective even wider, for cognitive science cannot be sharply distinguished from computer science, especially from the study of artificial intelligence. The research in all these fields is so rich and so varied that it is difficult even to separate different research traditions from each other, let alone to evaluate their vast output and future promise.

A semi-historical bird's-eye view might nevertheless illuminate the methodological assumptions underlying these developments. In the distant past, the paradigm case of rational thinking was taken to be logical inference. What the rise of the new “symbolic” logic in the late nineteenth century meant was the idea that the rules of such inference can be captured by purely symbolic (formal, syntactical) means. This led to the idea that we could think of all human cognitive operations in terms of manipulating the symbols of a suitable representational system. One crystallization of this idea is the use of notions like “the language of thought.” Historically speaking, it was this ideology of symbolic logic that originally helped to inspire the development of electronic computers. Their ubiquity and importance have conversely aided and abetted the idea of all cognitive operations as manipulations of suitable symbolic representations. And computers do indeed perform, in electronic or mechanical terms, operations that were earlier carried out by conscious intentional actions of the human mind. In the technology of artificial intelligence, an attempt is made to extend wider and wider the range of cognitive operations that can be so delegated to computers.

If cognitive science is thus an offspring of symbolic logic, what does its parentage tell us about its prospects? What has been found out about the nature of logical reasoning? One answer is implicit in the most basic codifications of bread-and-butter inferences, namely those of first-order logic. As such a codification, we can consider the so-called tableau or tree methods. The obvious way of understanding what goes on in these methods is to conceive of an attempt to infer G from F not so much as a series of transitions from one proposition to another, but as a thought-experiment: an attempt to see whether a scenario can be constructed in which F is true but G is not. This construction can take place on paper, in a computer, or in free imagination. There is plenty of evidence to suggest that our spontaneous logical reasoning takes place by means of some such imaginary thought-experiments, largely independently of any particular symbolic representation of the scenarios being constructed. In other words, actual logical reasoning cannot be represented fully by purely symbolic (syntactical) means. Such limitations of purely syntactical methods are indeed in evidence in the Gödel-type results mentioned earlier in this survey. This fact can be seen to have a deeper significance. One can, for instance, represent all the truths of elementary arithmetic as logical consequences of suitable axioms, but one cannot program a computer to draw all those consequences one by one.
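
A minimal sketch of the tableau idea in the restricted propositional case (my own illustration; the encoding of formulas as nested tuples is an arbitrary choice): to test whether F entails G, one tries to build a counter-scenario in which F is true and G false, and entailment holds exactly when every such attempt closes with a contradiction.

# Formulas: ("var", "p"), ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g).

def satisfiable(branch):
    """Try to extend a branch (a list of formulas taken to be true) into a consistent scenario."""
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        op = f[0]
        if op == "and":
            return satisfiable(rest + [f[1], f[2]])
        if op == "or":
            return satisfiable(rest + [f[1]]) or satisfiable(rest + [f[2]])
        if op == "imp":
            return satisfiable(rest + [("not", f[1])]) or satisfiable(rest + [f[2]])
        if op == "not":
            g = f[1]
            if g[0] == "not":
                return satisfiable(rest + [g[1]])
            if g[0] == "and":
                return satisfiable(rest + [("not", g[1])]) or satisfiable(rest + [("not", g[2])])
            if g[0] == "or":
                return satisfiable(rest + [("not", g[1]), ("not", g[2])])
            if g[0] == "imp":
                return satisfiable(rest + [g[1], ("not", g[2])])
            # a negated atom is a literal; leave it on the branch
    # Only literals remain: the scenario is consistent unless some atom occurs both ways.
    positive = {f[1] for f in branch if f[0] == "var"}
    negative = {f[1][1] for f in branch if f[0] == "not"}
    return positive.isdisjoint(negative)

def entails(f, g):
    """F entails G exactly when no scenario makes F true and G false."""
    return not satisfiable([f, ("not", g)])

# Modus ponens: p together with (p -> q) entails q, while p alone does not entail q.
p, q = ("var", "p"), ("var", "q")
assert entails(("and", p, ("imp", p, q)), q)
assert not entails(p, q)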

Pointing this out does not mean criticizing computer science or cognitive science, or belittling their general theoretical interest. However, it does mean that there are philosophically highly significant limitations to certain types of research. Much of what is called cognitive science means in practice computer modeling of different cognitive processes. Such research cannot be expected to do full justice to the power of human thinking.

Philosophers working in this area thus face the challenge of exploring the limitations of the paradigm of thinking as symbolic processing, and perhaps of learning to consider the entire enterprise of cognitive science in a new light. This task is a philosophical one, for the limitations in question (in so far as we should call them limitations) are conceptual.

One can illustrate this problem situation as follows. Someone might object to what was just said by pointing out that in principle the model (scenario) building could always be done symbolically. This would be true, but the important question would then concern the rules guiding such a construction. Logical reasoning, like language use in general, is a goal-directed enterprise and as such can be conceptualized in game-theoretical terms. Now in any game (in the theoretical sense of the word) one can distinguish the definitory rules, which specify what “moves” may be made in the game, from the strategic rules or principles, which facilitate the realization of the ends of the “players.” The rules that govern a logician's thought-experiments are inevitably strategic, and cannot be reduced to the definitory rules of any other “game,” either. This is the philosophically crucial feature of the conceptual situation. In order for philosophical research to bring out the true significance of contemporary cognitive science, philosophers must internalize the distinction between definitory and strategic rules, as illustrated by the toy example below.
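
A toy illustration of the distinction (my own example, not the article's), using the game of Nim: the definitory rules only say which moves are legal, while a strategic rule tells a player which legal move to choose.

from functools import reduce
from operator import xor

def legal_moves(piles):
    """Definitory rules: a legal move removes at least one object from exactly one pile."""
    return [(i, k) for i, n in enumerate(piles) for k in range(1, n + 1)]

def winning_move(piles):
    """One strategic rule: choose a legal move that leaves the XOR ('nim-sum') of the piles at 0."""
    for i, k in legal_moves(piles):
        remaining = list(piles)
        remaining[i] -= k
        if reduce(xor, remaining, 0) == 0:
            return (i, k)
    return None  # no winning move exists from this position

# From piles (3, 4, 5) the strategy recommends removing 2 objects from the first pile.
assert winning_move([3, 4, 5]) == (0, 2)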

The same distinction throws some light on other questions. Philosophers and non-philosophers have asked whether the human mind is (or perhaps rather whether it can be modeled as) a digital computer. The popular form of this question is “whether computers can think.” If thinking means following definitory rules, the answer is trivially affirmative. But it is an entirely different question to what extent computers can be said to master strategic rules, for instance, to form strategies themselves and to modify them in the light of experience. For instance, the partial success of chess-playing computers against human grandmasters is a telling argument against any greater intelligence of computers. Their relative success is due to their enormous speed. In chess terms, computers have millions of times more thinking time than human players. The fact that in spite of this speed they are not appreciably superior to the best humans shows that their strategic skills are minimal.

Perhaps the most extensively discussed question in the contemporary philosophy of mind concerns the nature of consciousness. I must confess my failure to gain any satisfactory general perspective on this discussion. I strongly suspect that any definitive answer will have to wait for the clarification of some of the main concepts involved in the problem. For instance, what is it that is happening in reflexive consciousness? Some kind of feedback? What is feedback? Mutual dependence of two variables? But such an interdependence does not even seem to be expressible in any straightforward way in received logic and mathematics. And what is emergence? A great deal of further analysis is needed here.

These critical comments do not reflect on cognitive neuroscience, either, which may at this time be philosophically the most significant branch of science. It was pointed out earlier how cognitive neuroscience puts the entire project of phenomenology in a new light. What was said there can perhaps be generalized. In the spirit of David Marr's computational level, one can ask of different cognitive systems what task, conceptually speaking, they serve to carry out. This leads in fact to questions that are highly interesting theoretically, even logically and philosophically. For instance, cognitive neuroscientists’ distinction between the “what” system and the “where” system in visual cognition turns out to exemplify a distinction between two modes of identification in logical semantics. It is also highly interesting to ask, for instance, what conceptually speaking is “wrong” with an autistic person's cognition. Such questions can be raised and hopefully answered without having to inquire what the hardware implementations of the cognitive operations in question are.

Somewhat similar things can be said of another branch of philosophical studies that has attracted a great deal of research, the philosophy of language. There has been a great deal of cooperation between linguistic and philosophical, especially logical, research in the last half-century. However, the philosophical relevance of the linguists’ work has been limited by their frequently used research strategy of approaching semantical phenomena through their syntactical manifestations. This is an excellent strategy as far as it goes, but in the longer run its reach is seriously limited.

It is not too much of an oversimplification to suggest that the most that this syntax-oriented work has produced is a syntactically defined form, such as Chomsky's LF, which is then claimed to be the basis of the semantical interpretation of the language in question. Even if this were true, it would leave most of the work of semantical theory undone. For the interpretation of a sentence is not accomplished in one fell swoop, but involves a step-by-step process. Even methodologically, linguistic regularities are likely to be easier to capture by formulating them relative to the stage of the interpretation process at which they come into play. In sum, generative linguistics has not managed to synthesize syntax and semantics.

It is not that efforts have not been made, or that there are no promising ideas on the market, perhaps in the form of applications of new insights in logic. Here progress in formal semantics and even in logic seems highly promising. For instance, linguists’ discussions of the syntax and semantics of negation would be put in a new light if it turned out, as is arguably the case, that in any sufficiently expressive language there are implicitly (or perhaps explicitly) two logically different notions of negation in operation, which are not distinguished from each other syntactically in most actual human languages.

There may also be a moral in the story of the philosophy of cognitive science (and of the philosophy of language) that is applicable to the use of formal methods in philosophy in general, as exemplified for instance by the uses of possible-worlds semantics or by formal epistemology. Such enterprises are not self-sufficient independently of interpretational and other wider issues. Such applications should be anchored in a firm interpretational basis, usually via a realistic model theory. For instance, some of the notions in purely formal versions of possible-worlds semantics, such as the ideas of “rigid designator” or of “backward-looking operator,” are uninterpretable in some contexts of their use. As an exercise in the history of philosophical ideas, I have suggested that even the metaphysical and other philosophical views of such prominent thinkers as Tarski, Gödel, and Kripke can sometimes be seen as consequences or rationalizations of their ideas in technical logic rather than as inspirations for their formal work.

Both in philosophical methodology and in language theory, confusion and harm have been caused by a usually biased way of separating semantics and pragmatics from each other. The mistake is to overlook the possibility that the meaning relations studied in semantics are constituted by the rule-governed human activities (together with their interpretations) that are supposedly studied in pragmatics. Wittgenstein highlighted this frequently missed idea in his notion of the “language game.” Unfortunately, his idea has not been incorporated into most of the usual approaches to semantics. Conversely, some of the uses of game-theoretical ideas in language theory overlook the semantical significance of the games they study.

Here, as in many other directions, there are many splendid opportunities for philosophical research. However, in order to be able to make use of them, philosophers may have to adopt a more critical approach to the foundations of various theories and philosophical research traditions.

Funding

This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.

References

Blumenberg, H (1981) Die Lesbarkeit der Welt. Frankfurt am Main: Suhrkamp.
Hintikka, J (1997) Lingua Universalis vs. Calculus Ratiocinator. Dordrecht: Kluwer.
Kusch, M (1989) Language as Calculus vs. Language as the Universal Medium: A Study in Husserl, Heidegger, and Gadamer. Dordrecht: Kluwer.
Marr, D (1982) Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W.H. Freeman.