1. Introduction
Neither of us ever studied formally with Douglass North, nor were we his colleagues, nor did we co-author with him. Yet, as practicing economic historians for many years, we cannot imagine what our work would have looked like without his influence, support, and encouragement, whether through his writings or through our conversations with him. North was a scholar like no other: He was never bothered by the standard methods and conventional wisdom of economics and never hesitated to take his colleagues to task if he felt they were missing something important – which was most of the time. He ignored the traditional boundaries between the various social sciences, and his thinking influenced and stimulated people across the social and historical disciplines. He also was rarely locked into a position on anything: As his thinking evolved, he readily abandoned positions he had taken in the past.
North believed in the importance of institutions in economic history, and rightly accused much of the new economic history – which he helped found – of ignoring them for many decades. This call resonated with many scholars who were inspired by him to ask historical questions about the kind of institutions that he was interested in: property rights, contract enforcement, the political and legal frameworks of markets, the deployment of power and violence in society, and so on. North stressed from the start that institutions should be differentiated from organizations. But what precisely are institutions? In his path-breaking book, North (1990) defined them as human-made constraints (and hence the 'rules of the game'), but that definition seemed to raise as many difficulties as it resolved: who actually defined these constraints, and if they were constraints, what happened when they were violated? It seemed more natural to define them as incentives (which North immediately added): the rewards and penalties that society imposes on people who display certain behaviors. But if so, who set these incentives and who enforced the rewards? What was the role of customs and norms? And above all, what determined whether agents would pay any attention to them?
From the outset, it became (almost) a consensus in the profession that we could not understand economic history without paying attention to institutions. It became imperative to confront the question of institutional change: why and how institutions looked the way they did, how they changed in the long run and in response to what, and why some countries had such dramatically different institutions from others. In time, it became clear that changing cognition and beliefs was important to institutional change. North (2005: 49) noted that 'there is an intimate relationship between belief systems and the institutional framework'. What people believed to be true, fair, and reasonable mattered a great deal not just to their behavior directly but also through the institutions they lived with. North famously referred to these beliefs as the 'scaffolds' on which institutional structures rested (ibid.: 8–9).Footnote 1
In what follows, we take up the challenge posed by North in his 2005 book. We suggest that institutions – rules, expectations, and norms – are based on shared cognitive rules. Indeed, it is hard to think of incentives as anything but a cognitive rule. Cognitive rules are social constructs that convey information which distills and summarizes society's beliefs and experience. These rules have to be self-enforcing and self-confirming, but they do not have to be 'correct'. Cognitive rules include not only beliefs based on observed empirical regularities such as the difference in temperature between winter and summer (which each individual can observe on his or her own), but also beliefs about nature, such as that the gravity of the moon causes the tides and that smoking causes cancer, which individuals believe because they are socially accepted (equilibrium) cognitive rules. The incentives that people respond to are also socially based cognitive rules: people believe that certain actions will lead to certain outcomes. For instance, in some societies people believe that working hard and paying one's taxes honestly are rewarded and are the correct things to do. In others, people have different moral beliefs about what constitutes 'cheating' on their taxes, what constitutes 'shirking' on their jobs, and what constitutes a 'bribe' as opposed to a 'fair payment'. The very definitions of these concepts are shared cognitive rules. Others may think of these rules as 'institutions'. We think that cognitive rules – what is moral, what people are expected to do in certain situations, and how causes lead to outcomes – underlie the regularities in behavior that are generated by institutions. Without such social mechanisms, people are incapable of making sense of much of the world around them: neither the society they live in and the markets they buy and sell in, nor the physical and biological world with which they cope on a daily basis (Greif, 2006, chapter 5; Greif, 2014; Scott, 1998). Economists have typically assumed that people make decisions on the basis of knowledge of the problems they have to solve. What North and others have pointed out is that the rules by which this knowledge emerges are the result of individual learning; we, on the other hand, see them as social constructs that provide the foundation of individual decision making and that are transmitted by social rules – such as the rules of the road or the rules of the market (Greif, 1998; Greif and Kingston, 2011).
How should we think about institutions and cognitive rules? Cognitive rules, which summarize and aggregate society's beliefs and attitudes, are followed because individuals with limited cognition – that is, everyone – have to rely on them in exercising their choices, since these rules link outcomes to decisions and thus set the incentive structure. These rules specify the knowledge and information about the consequences of choices and decisions. Individuals have the option to follow the rules or not, but they normally cannot set the rules. In other words, cognitive rules correspond to behavior when the cognitive frameworks they convey constitute an equilibrium in a 'game' between each individual and the rules. If I take this action, such and such is likely to happen – whether that involves eating spoiled food, driving through a stop sign, or writing a rude email. In a sense, one is 'playing' by responding to the rules rather than to the other players. The individual takes the rules as given – as if they correspond to reality – in making choices.
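This 'game against the rules' can be given a minimal formal sketch. The illustration below is ours, not North's or Greif's formalism, and all payoff numbers and the aggregate-behavior function are hypothetical: an agent best-responds to the consequences a cognitive rule predicts, and the rule is self-confirming when the behavior it induces reproduces those predictions.

```python
# A cognitive rule maps actions to believed consequences (payoffs).
# All numbers are hypothetical, chosen only to illustrate the logic.
rule = {
    "drive_through_stop_sign": -10.0,  # believed: fines, accidents
    "stop_at_stop_sign": -1.0,         # believed: a small time cost
}

def best_response(cognitive_rule):
    """The agent treats the rule's predicted payoffs as if they were reality."""
    return max(cognitive_rule, key=cognitive_rule.get)

def realized_payoff(action, share_following):
    # Assumed: actual consequences depend on aggregate behavior, which no
    # individual can observe directly -- hence the reliance on the rule.
    if action == "stop_at_stop_sign":
        return -1.0
    return -10.0 if share_following > 0.5 else -2.0

action = best_response(rule)
print(action)                          # -> 'stop_at_stop_sign'
print(realized_payoff(action, 0.95))   # -> -1.0, matching the rule's
                                       #    prediction: self-confirming
```

The point of the sketch is that the agent never plays against other drivers directly; as long as most others also take the rule as given, predicted and realized consequences coincide and the rule persists as an equilibrium.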
In what follows, we hope to show that this view of institutions can help us understand historical phenomena, and not just behavioral issues such as why (almost) all drivers stay on the right side of the road as the law stipulates, but only a few observe posted speed limits. That raises two questions of profound historical importance: why and how do the accepted cognitive rules that matter to the economy change, and how did such changes map into economic change, eventually leading to the modern economy – which is what North (1990, 2005) was, after all, interested in. Specifically, we consider the rise of the consent-based government of the modern state, the idea of the Law as a means of enabling market and other economic activities, and the cognitive basis of the rise of modern science and the technology based on it.
2. Cognitive rules and the ‘market for ideas’
Of all the cognitive rules in a society, some of the most important may be the meta-rules that specify which cognitive rules are accepted or not. One reason for the limited capacity of individuals to form correct beliefs is poor informational feedback in an environment in which multiple interpretations are possible. Such feedback is particularly poor under individualistic (atomistic) learning based only on the outcomes that each individual observes, given the inherent attributes of the interaction. Each individual learns quickly that if they drop an object, it falls on the floor, and that if they want to buy an item on the market, they have to pay the market price. But many other rules are socially conveyed and distributed and cannot be tested individually. For instance, the age-old custom of bleeding fever patients may have been viewed as effective if the patient subsequently recovered; reaching a more rigorous conclusion about whether it was effective requires a large sample and random assignment of bloodletting, something that was beyond the power of the individual patient to observe. The rule was self-confirming: if the patient recovered, bleeding had worked; if he did not, this was despite the procedure. The socially constructed cognitive rule that bloodletting worked was left intact for many generations.Footnote 2 The same is true for social interactions: each individual attempts to form beliefs about others' future behavior based on their past behavior. Yet, these others also adjust their future behavior in response to past outcomes.Footnote 3 Perhaps most difficult are cognitive rules about economic policy. Is free trade a policy that makes people better off? Do minimum wages create unemployment? Do democratic regimes foster economic growth?
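The bloodletting example can be made concrete with a small simulation. This is our own illustrative sketch, built on the assumption that bleeding had no causal effect and that most fever patients recovered regardless; the parameter values are made up.

```python
import random

random.seed(42)
RECOVERY_PROB = 0.7  # assumed: most fever patients recover regardless

def recovers(bled):
    # The assumption built into the sketch: bleeding has no causal effect.
    return random.random() < RECOVERY_PROB

# Atomistic learning: one patient, one observation. A recovery 'confirms'
# the rule that bleeding works; a death is attributed to other causes.
print("this patient recovered:", recovers(bled=True))

# Social, experimental learning: a large sample with random assignment,
# which no individual patient could ever carry out alone.
treated = [recovers(bled=True) for _ in range(10_000)]
control = [recovers(bled=False) for _ in range(10_000)]
print("recovery rate, bled:    ", sum(treated) / len(treated))
print("recovery rate, not bled:", sum(control) / len(control))
# Both rates hover around 0.7: only the aggregated comparison reveals
# that the treatment adds nothing, which is why individual-level
# feedback left the cognitive rule intact for generations.
```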
One way of making this point is to utilize a distinction made many decades ago by Hayek (1942). Hayek insisted that one of the errors made in the social sciences is to overlook 'the real contrast between ideas which, by being held by the people, become the causes of a social phenomenon and the ideas which people form about that phenomenon'. He referred to the former as 'constitutive' ideas, as they are the real causes of a phenomenon, and to the latter as 'speculative' or 'explanatory' ideas, which are the ex post notions people have about a phenomenon.Footnote 4 In all emergent properties, the collective or aggregate may be regarded as very different from the elements that account for it – because it is. And yet, in this paper, we want to argue that at times the two may coincide: the people who carry out research in the hope of making society richer may actually believe that the path to economic growth is paved by scientific innovation; the people who are driven to political action to bring about government by consent may actually regard government by consent as a superior form of politics.
Because individuals cannot normally make such decisions on their own, they often rely on experts: priests, officials, teachers, physicians, scientists, ethicists, and legal experts – all help agents decide what they can and should do, and what the payoffs are of each action. These experts constitute a way in which society distributes the distilled, cumulative wisdom of the whole to individuals. Yet such a rule of experts – inevitable in every society in which the stock of social knowledge is larger than what each individual agent can verify on his or her own and which practices a division of knowledge – raises many other issues. First, how do these experts themselves reach the beliefs and convictions they hold? Second, who appoints those experts, and who appoints the appointers? And third, what happens when experts disagree and compete with one another, holding conflicting views? How do people choose?
Before delving deeper into these issues, it is important to stress that there is nothing in human history and experience that indicates some kind of Gresham's Law in reverse, namely that ‘good’ cognitive rules drive out ‘bad’ ones. Much of that depends on the meta-rules that help people decide what knowledge is valid. In a world in which the wisdom of ancient sages is decisive – be they Aristotle, the Talmud, or Zhu Xi – the likelihood that bogus beliefs can remain powerful is high. But even a world that relies on evidence and logic has to make difficult decisions about what evidence counts, and what rules of logic are admissible. Is statistical evidence acceptable if experimental data are unavailable? And when is experimental evidence decisive? When new ideas strongly conflict with an existing view of the world, the rules of evidence may be cast aside.Footnote 5
Such strong persistence of beliefs is due to confirmation bias at the individual and societal levels, compounded by the material interests of those benefitting from these beliefs. Confirmation bias implies that when beliefs are challenged by new evidence, individuals and groups seek ways to reconcile them with existing beliefs rather than replacing them with new beliefs that are better supported by the data. The belief system advanced by the Catholic Church has survived, although it was modified so as not to be refuted by the Copernican view that it eventually had to accept. Ironically, cognitive systems that are false but cannot be disproved can last longer than systems that might be mainly right but cannot be proved (the germ theory was first proposed in 1546 by Girolamo Fracastoro, 50 years before the first microscope).
The idea of a competitive market for ideas has long been popular among some scholars (Coase, 1974; Gans and Stern, 2003; Mokyr, 2007; Polanyi, 1962; Stigler, 1965). It is at once misleading and helpful (Hodgson, 2015: 130–131). Well-functioning markets imply well-defined property rights that are transferable at a price – which does not apply here. Yet disruptive new ideas are generated by somebody, and they become accepted cognitive rules when a sufficient number of others accept them, usually abandoning or modifying previously held views. Such a change can be regarded in terms of a metaphorical market, in which intellectual innovators try to persuade the relevant public to accept new beliefs. If such persuasion is successful, a 'sale' has taken place. Markets for ideas can be highly competitive or dominated by monopolists, they can be open or erect high barriers to entry, and they exhibit transaction costs and taboos just as any other market does. While no prices are paid when transactions take place, successful sellers gain fame and prestige, and the utility and resources correlated with them. Old ideas are stubborn and fight for their survival, so such persuasion is often accompanied by serious conflict – one thinks of the persecution of heretics and dissenters over the ages, and the religious wars of the 16th and 17th centuries.
Like all markets, however, the market for ideas needs to have institutional and technological underpinnings that make it work – indeed, that was one of North's main messages. If repeated and sustainable transactions are to take place, there have to be meta-rules about how cognitive rules are assessed. Those include rhetorical conventions about what constitutes proof and evidence, but also the rules of conduct in this market.Footnote 6 There is also the matter of the technology of communication. The market for ideas in a world of the internet and Facebook is as different from the market of the 1950s as the market of the first half of the 16th century (with widespread printing presses and effective long-distance mail services) was from the medieval environment.
Cognitive rules tend to reproduce themselves and to be highly persistent – except when they are not. They tend to become unstable when they lose their ability to be self-confirming. This can happen, for example, when new evidence emerges that is viewed as incontrovertible yet is inconsistent with accepted cognitive rules. Such new evidence can be the unexpected by-product of new technology; the new scientific instruments of the 17th century showed clearly the errors of classical physics, astronomy, and geography, and the improved microscopes of the 19th century demonstrated the validity of the germ theory as opposed to miasma theories. In other cases, however, more subtle persuasion was at work that changed people's views of the organizations that defined their collective lives: was the king the citizen's master by divine right, or was his legitimacy based on the rule of law and his subjects' consent? Was rent-seeking a legitimate activity, or was the only legitimate economic activity the one that actually produced (rather than redistributed) resources? Were protection and subsidies a good way to run an economy, or was unfettered free trade? Here smoking guns or mathematical proofs were largely absent, and persuasion became a central feature. Indeed, social learning, imitation, and persuasion in one form or another were the essence of what was taking place in the market for ideas, shaping the kind of self-reinforcing cognitive rules that North viewed as institutions and that in his view set the rules of the game.
The market for ideas can throw up different kinds of equilibria. One is the degenerate equilibrium in which a single belief or cognitive rule becomes 'fixed' in the population. Worldwide, flat-earthers are practically extinct, as extinct as Swiss drivers who will try to offer a traffic policeman a bribe. In many other cases, one mental species becomes dominant, but the others are driven into smaller or larger niches. Creationist biology may still be taken seriously in Petersburg, Kentucky, and taught at Liberty University and a handful of other Christian colleges, but can hardly be regarded as a serious discipline in American higher education. Although there is serious support for homeopathic and other alternative medicine, there seems to be little sign of a serious 'alternative chemistry' or an 'alternative nuclear physics'. Yet the cognitive rule that guarantees such 'alternative' ideas the chance to compete in the market for ideas without retribution, no matter how widely their authors are regarded as crackpots, is itself a successful meta-rule (at least in the United States) that had to compete in the market for ideas (and that clearly was rejected in many other places).
North had a great deal of sympathy for evolutionary theory and models of institutions and cognition. He realized full well that evolutionary thinking is a natural way to connect the past to the present and make sense of how institutions change over time. However, in the end he concluded that the differences between biological and economic change are too deep to apply evolutionary thinking to economics (North, 2005: 65–66). Yet a generalized evolutionary structure is attractive to historically minded scholars precisely because it provides a link between any society's rules and its past. Through the socialization of beliefs, customs, and values, children become imperfect copies of their parents and teachers. But at times this process fails in important respects: all Protestants before, say, 1535 were born Catholics, and early members of the Communist parties were not brought up as Communists. New items appear on the menu of cultural options. Such changes differ in important respects from 'mutations' in biology, but they have similar effects. Selection on intellectual innovations works through the market for ideas.
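Selection in the market for ideas can be sketched with a replicator-style update, a standard device in evolutionary modeling. The sketch below is our illustration, not a model from North; 'fitness' here is persuasive success rather than truth, and the initial shares and persuasiveness weights are made up.

```python
# Replicator dynamics over competing ideas: an idea's population share
# grows when its persuasiveness exceeds the population average.
ideas = {"old orthodoxy": 0.90, "new heresy": 0.10}          # initial shares
persuasiveness = {"old orthodoxy": 1.0, "new heresy": 1.5}   # assumed weights

for generation in range(60):
    average = sum(ideas[i] * persuasiveness[i] for i in ideas)
    for i in ideas:
        ideas[i] *= persuasiveness[i] / average   # replicator update

print({i: round(share, 3) for i, share in ideas.items()})
# -> {'old orthodoxy': 0.0, 'new heresy': 1.0}: fixation, the degenerate
#    equilibrium discussed above. If persuasiveness declined with an idea's
#    own share (frequency dependence), the same dynamic would instead
#    settle into the niche equilibria the text describes.
```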
What matters for the dynamics is the concept of coevolution (Durham, 1991; Richerson and Christiansen, 2013): two entities or species can affect each other either positively or negatively. Cognitive rules affect one another. In some cases, they are mutually antagonistic, whereas in other cases they mutually reinforce one another. There are a few instances in human history when a process of positive feedback, in which multiple sets of beliefs and knowledge reinforced one another, had the power to change history. None of these, it appears to us, is more important than the evolution of beliefs and institutions in the West in the 18th century, which triggered the Industrial Revolution and everything that came after, and in which the foundations of our current prosperous world were laid.
Coevolution also helps resolve the issue of the direction of causality. Historical materialism subjugated ideas and beliefs to material economic forces; historical ideationism, in which ideas drive historical development, has had a recent revival (McCloskey, 2006, 2016). But nobody is arguing that ideas alone or material forces alone drive institutions and historical outcomes. Material interests determine to some extent what people believe and what institutions will emerge. The human ability to create intellectual rationalizations for one's hopes for material advancement, to say nothing of naked greed, should not be underestimated. Ideas change to fit changing times, but as they change they affect the way the environment changes, much as the environment affects how species evolve and yet the species change the environment in turn. All the same, beliefs and cognitive rules are formed in more complex ways than 'how can I profit?' Did material interests affect whether people believed in the theory of evolution or in Newtonian celestial mechanics? Coevolution, in which the two constantly interact and feed back on one another, is a more accurate way of looking at the kind of issues North was interested in. It also underlines that outcomes are on the whole indeterminate and characterized by a multiplicity of equilibria, much like history itself.
Did cognitive rules matter for outcomes economic historians care about? We argue that they did, and below we present a number of examples to that effect, each of which touches directly on the two developments that are at the core of the historical transformations that created the modern economy. The first is the rise of the modern Western-style nation state, aimed at improving the welfare of its citizens and relying on an effective legal system; the second is the rise of modern science and technology and the increase in productivity and economic welfare it implied, which McCloskey has termed the Great Enrichment.
3. Cognitive rules, legitimacy, and political development
With some notable exceptions (among them Greif and Rubin, 2016; Levi and Sacks, 2009), the role that cognition played in the historical process of political development has not been examined. Yet, as we argue below, cognition – particularly regarding legitimacy – has had a large impact on historical trajectories of political development.
The cognitive aspects of interest here are those bearing on legitimacy, that is, the rational or moral basis for the right to rule. Political regimes face the challenge of motivating compliance with demands on the citizenry that, absent either intrinsic motivation or coercive power, violate the individual-level participation constraint. North noted the role of intrinsic motivation in governing: the fundamental aim of ideology, he argued, is to make people behave in ways that are contrary to their simple hedonistic individual cost/benefit calculus and to overcome free riding (North, 1981: 53). In other words, people in power advanced cognitive rules justifying their control of others in order to motivate compliance. But how?
To begin, it is useful to consider the conditions under which states provide public goods (rather than delegating this role to purely social or economic organizations). Moreover, if legitimacy and coercion are substitutes, what limits the reliance on legitimacy? We depart from North by considering why the state faces the following tradeoff in providing public goods and why it provides them to begin with. Recall that the provision of public goods is characterized by a free-rider problem, as identified in the seminal contribution by Olson (1965). Collective actions differ in attributes, such as excludability and observability, that determine whether free riders can be deterred by non-coercive (i.e., economic or social) mechanisms. How can free riding be mitigated if economic and social punishments are not available? Either intrinsic motivation or coercion, or a combination of the two, can be used. In particular, a legitimate ruler who orders his subjects to contribute resources to a public good can rely on intrinsic motivation to mitigate the collective action problem. Similarly, if the ruler can collect taxes using coercive power, he can finance the provision of public goods.
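The free-rider logic, and the way legitimacy and coercion substitute for each other in overcoming it, can be shown in a minimal public goods game. The sketch and all parameter values are our own hypothetical illustration:

```python
MPCR = 0.4  # assumed marginal per-capita return of the public good (< 1)

def payoff(contribute, duty_cost=0.0, fine=0.0):
    """Individual payoff from a 1-unit endowment, holding others' choices fixed.

    duty_cost: intrinsic, legitimacy-based cost of disobeying the ruler's
    call to contribute; fine: expected coercive punishment for shirking.
    """
    if contribute:
        return MPCR * 1.0               # gives up 1, gets back own share
    return 1.0 - duty_cost - fine       # keeps the endowment, minus sanctions

# Without legitimacy or coercion, free riding strictly dominates:
print(payoff(True), "vs", payoff(False))              # -> 0.4 vs 1.0

# Either a legitimate ruler (intrinsic duty_cost) or a coercive one
# (an expected fine) can flip the individual calculus:
print(payoff(True) > payoff(False, duty_cost=0.7))    # True: 0.4 > 0.3
print(payoff(True) > payoff(False, fine=0.7))         # True: same arithmetic
```

Since the duty cost and the fine enter the calculus identically, the sketch also makes the substitutability of legitimacy and coercion explicit; the difference between them, taken up next, lies in what each costs the regime.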
Both coercion and legitimacy, however, come at a price. The price of coercion, which has been and still is common, is dividing society between the coerced and the coercing, where the former pay the costs required to motivate the latter to subdue them. Such internal divisions, in turn, required resources and fostered social unrest and violent conflicts. Equality and open access were inversely related to the degree of intra-state coercion used to mobilize resources for public goods. Moreover, intra-state imbalances in the allocation of coercive power created opportunities for a military elite (originally designed to protect the country from external threats) to exploit the non-elite and extract resources from the larger population. This is the essence of the 'natural state' described by North et al. (2009) and the 'extractive state' described by Acemoglu and Robinson (2012). Their attempt to understand the rise of the modern European state using only this perspective has been innovative, but without a more explicit emphasis on beliefs and ideology it has been incomplete. Extending the analysis to consider cognitive rules therefore seems promising.
Legitimacy is a perception shared by the citizens that a particular regime is rightfully in power. Because it is a moral view, it tends to be persistent. Some ancient rulers seem to have been remarkably adept at prolonging their regimes, although it is difficult to distinguish whether their legitimacy or their power was the reason. Be that as it may, later regimes faced the challenge of motivating compliance among subjects who often already held a cognitive structure created by a previous regime or another entity (e.g., religious authorities). Roland (2004) noted that such 'slow-moving' cultural features constrain the set of behaviors a ruler can institutionalize. Subjects may pretend to recognize legitimacy even when they do not.Footnote 7 If the economic and coercive pressure to conform to a new legitimacy rule is too strong, it may lead to deeply entrenched resistance to the new regime. Regimes require powerful and influential individuals or organizations to declare their recognition of the rulers, such as the prophet Samuel in the book of Samuel. The fact that such an agent has been asked to legitimize a ruler is a source of additional power to that agent: the power to bestow legitimacy elevates such agents over other agents who have not been asked to legitimize the ruler (Greif and Rubin, 2016).
One indication of the importance of cognitive rationales for political systems is the large extent to which political regimes invoked religious justifications for their control, despite the risks involved. Egyptian pharaohs, Persian kings, and Japanese and Roman emperors were among the many who claimed to be divine. The benefit to them was the ability to delegate punishment to a third party (that is, the divine entity), or to postpone punishment for non-compliance to the afterlife. The risks, however, were substantial. The first risk is the need for the rule to be self-confirming (Greif, 2006, chapters 2, 6, 7; Greif and Laitin, 2011). A cognitive rule refuted by observable outcomes would not last long. A king who claimed to be a god risked being unmasked as an impostor by outcomes inconsistent with the claim, such as a military defeat or a natural disaster. The second risk is that a supporting religious authority could become ambitious or greedy and challenge the monarchy.Footnote 8 Religious authorities therefore had to be compensated to maintain their loyalty.
Divine justification was nevertheless sufficiently valuable that rulers often sought it. It is therefore possible to evaluate whether cognitive rules mattered by regressing observable outcomes on proxies of differences in cognitive rules. Specifically, did different religiously based cognitive rules have distinct implications for the longevity and effectiveness of economic and political institutions? Iyigun (2015: 23–45) established that political units in which monotheistic religions prevailed lasted longer and were bigger than others. It is significant, however, that post-Roman European rulers at first did not rely on religion to justify their control. Instead, the political units created by the Germanic tribes and other groups held that the right to govern was based on the blood line of the tribal chiefs, and later kings, and on the consent of the group's free people.
They ruled because they were the descendants of the traditional chieftains of these tribal groups. As such, they were considered first among equals, and to become a ruler, a son of the previous chieftain had to obtain the consent of those who were to follow him to battle. Although consent could not formally be taken back, a chieftain who lost the confidence of his followers could expect to find them following someone else to whom they had declared loyalty.
By the 8th century, challengers to the traditional rulers of various European polities were deploying Christianity to gain legitimacy. Specifically, rulers whose legitimacy was based on blood line and consent were challenged by those whose legitimacy was acquired by Papal blessing. The legitimizing power of the Papacy was based on the new cognitive idea of the Christian king and on the Papal position as an intermediary between the Lord and the believers. The cognitive foundation of this position is reflected in the Papal emblem of two crossed keys, symbolizing that any door the Papacy opens on earth, God will open in heaven. The Papacy was therefore in a position to influence compliance and loyalty to rulers and to those who challenged them.
The Papal role as king-maker is illustrated, for example, by the history of one of the most important European dynasties, that of Charles the Great (Charlemagne), whose father, Pepin the Short, became king of the Franks in 751 with Papal support. Previously, the traditional rulers of the Franks, known as the Merovingians, legitimized their rule based on hereditary rights and the consent of their aristocracy. Pepin's family held the position of the maior domus, the main administrator under the king, and over time created a professional army under its control. In 751, when the Pope needed Pepin's military support against the Lombards, he approved Pepin as king. Only afterward did Pepin seek the consent of the aristocracy, while keeping his army close by. The hint was clear, and consent was given. Pepin was not unique in invoking the Papacy to justify his rule. William the Conqueror sought Papal approval in 1066 before sailing to capture England and, as with Pepin, the English nobility gave its consent while William's army was at hand.
There is an interesting historical dialectic at work here. The king-making power of the Church endangered existing royal houses and other powerful actors, thereby undermining itself. Kings feared that their opponents would ally with the Pope, whereas intra-state actors who sought to limit royal power feared Papal support of the monarchy. Perhaps the most striking example is the case of the Magna Carta in 13th-century England. The barons forced the king to take an oath to keep the charter, and as a Christian king he could not renege without committing a mortal sin. In order to break his oath anyway, the king offered England to the Pope, to rule it thereafter as a papal vassal, in return for the Pope annulling the oath. Later, an act of Parliament declared it illegal for a king to offer England to the Pope.
The religious obligation of subjects to comply with their kings' rule was thus beneficial to the kings, as long as it could be controlled and shielded from papal meddling in the affairs of the realm. By the late 11th century, the tension had erupted in an open confrontation between the Papacy and the Holy Roman Emperor. The results were devastating to both, as the Empire disintegrated and the papacy saw its king-making capacity decline over the following centuries.
More generally, to buttress their independence from the papacy, European rulers relied on the legitimizing power of consent, as had been the case in the period before the rise of the political power of the papacy.Footnote 9 In other words, in their attempts to weaken the cognitive rules that regarded the pope as the supreme political authority, the rulers promoted the role of consent by their subjects, harking back to pre-Christian institutions of legitimacy by consent. In 1302, the French Estates General was assembled by the king, Philip the Fair, when he sought its support in his struggle with Pope Boniface VIII. In England, the House of Commons was drastically expanded after Henry VIII broke with Rome from 1529 onward (Greif and Rubin, 2016).Footnote 10 Similarly, the earlier conflict between the king and his barons led the monarch to strengthen cities and foster commerce to weaken the barons by shifting power, wealth, and administrative capacity to the commoners.
European rulers still sought religious legitimacy, but of a kind that did not depend on the consent of the papacy. For this purpose, they promoted national church hierarchies under their control, which shielded them from Rome and enabled them to gain the support of religious authorities. In other words, the European monarchs created a new cognitive concept, the national church. These churches were part of a universalistic religion but were more amenable to sanctioning the current ruler. In creating a national church to justify their rule, the European monarchs exploited divisions within the Church and the cognitive rules inherited from tribal institutions that required consent. It was relatively easy to align the interests of the monarch with those of the local high clergy: archbishops preferred to crown a king rather than let the Pope do so.
In some cases, the Pope was formally deposed as head of the church and replaced by the king (as in England) or effectively deprived of any serious political influence, as in France, where Louis XIV forged a form of Catholic absolutism known as 'Gallicanism' (Pincus, 2011). In early modern Europe, the cognitive rules for legitimacy had clearly changed. The kings no longer had to rely primarily on a religious imprimatur to attain the consent of their citizens.
When the European monarchs turned to limit the power of the representative assemblies, they again did so using a cognitive innovation that combined two previous legitimacy principles: hereditary rights and Church approval of a Christian king. By combining these, the rulers demanded compliance based on their ruling dei gratia, by divine right. This was a brilliant cognitive innovation that, subject to the constraint implied by monotheism, provided a way of invoking divine sanction without assuming the risk of declaring oneself God. Even kings who ruled by consent valued their divine right. That European kings cared about the perception of their divine power is illustrated, for example, by the restoration of King Charles II to the throne of England in 1660: one of his demands was to resume holding public healing sessions, demonstrating his divine power to perform miracles. His grandfather, James I, articulated in Parliament that he had a divine right to rule shortly after his coronation in 1603.
The appeal of monarchs to a divine right is natural, given the discussion above regarding the nature of cognitive rules and their function in justifying rulers. Under this concept the king was not God, but his right to demand compliance was God-given. This rule was a way of achieving compliance based on divine sanction while avoiding the risk of claiming to be divine. The message sent to subjects was: obey the king regardless of his performance, or else sin against the will of God. All the same, the divine right was not absolute: even kings who invoked it ruled only with the consent of their subjects – the divine right was a supplementary way to elicit that consent.
The effectiveness of national churches depended on context. According to Charles II, who was nominally the head of the Church of England, not even all monotheistic religions were created alike. Specifically, he famously declared that Catholicism was the best religion for an absolutist ruler.Footnote 11 Catholicism centers on the Papacy, whereas Protestantism had no equivalent central religious authority. Moreover, Protestants read the Old Testament, in which the idea that only God should govern the community of believers undermined the claim that kings had to be obeyed by their subjects because of divine will. Protestant intellectuals generally supported the notion that subjects had the right to overthrow rulers of whom they did not approve.Footnote 12
In the Catholic and Slavic parts of Europe, in which the church and/or the nobility were strong, rulers held power only through the legitimacy and support provided by the national church and/or the nobility (landowners more generally). During the 18th century, the total number of sessions held by European representative assemblies fell by 15% relative to the previous century (to 804). The decline was particularly large in Denmark, Poland, Portugal, Russia, Spain, Italy, and France (van Zanden et al., 2012). This was the period known as European absolutism. It did not last long, however: the cognitive rule claiming the right of representation reasserted itself during the 19th century.
The importance of the cognitive foundations of political order becomes clearer once we broaden the scope of the analysis beyond Europe. Political orders based on cognitive rules that could be refuted by failing to meet the standards of proof the rules themselves defined were particularly vulnerable. This was the case in China, where the cognitive rule justifying the Chinese emperor was that he held a mandate from Heaven (e.g., Zhao, 2009). The mandate manifested itself in the peace and prosperity for which the emperor was responsible. Chinese dynasties were in jeopardy, and even ended, whenever some combination of population growth, climatic change, natural disasters, political weakness due to internal divisions, and external attacks invalidated the mandate (Morris, 2010).
To sum up, the political foundations of legitimate rulers in Europe changed over time due to cognitive innovations, changes in the balance of power, and strategic interactions. In particular, following the collapse of the Roman Empire, traditional or hereditary rights and consent provided the basis for political legitimacy in the new political entities, many of which were initially pagan. As Christianity spread, the papacy introduced the concept of the Christian king, on the basis of which it could have become the kingmaker in Europe. In this quest, the papacy benefited from conflict between the monarchy and the nobility. The king-making powers of the Church, however, endangered existing royal houses and other powerful actors, thereby undermining itself. The monarchs initially weakened the power of the papacy by reviving legitimacy by consent and by allying themselves with, and strengthening, the commoners, particularly the cities. Subsequently, however, the cognitive innovation of divine right, strengthened by national churches and supported by the weakened nobility, enabled the rulers to restrict the power of the commoners as well.
4. The cognition of modern growth: progress, science, and technology
The European Enlightenment is a hugely complex and controversial topic; specialists still disagree on many aspects, including whether we should think of a common denominator to the rather divergent views that constituted it, or whether we should speak of many 'enlightenments' and leave it at that. For the economic historian, however, the significance of the Enlightenment is above all concentrated in the belief in progress, in the capability of economic agents to work successfully towards improving their lives. New discoveries and instruments emerging after 1500 showed the many errors of classical learning and raised skepticism and the contestability of age-old accepted beliefs to the level of a cognitive rule. Evidence and logic replaced unassailable authority. In this intellectual environment, the Enlightenment emerged triumphant (Mokyr, 2016). It bears emphasis that radical skepticism and contestability of received wisdom were found in other societies, but never to the extent and with the force they attained in early modern Europe.
Specifically, an aspect of the Enlightenment that was central to the subsequent economic history of Europe was its changing views regarding the physical and biological world around us. The behavioral rules of interest here, above all, were the rules by which people distributed ideas and knowledge and the rhetorical conventions by which they persuaded one another on both these subjects (Mokyr, 2016). These two co-evolved, reinforcing one another. In the end, the cognitive rules became inconsistent with the existing political forms, and the latter had to be changed, either through revolution or through reforms.
The exact attitudes regarding progress and how to bring it about differed, especially between the great thinkers of the Scottish Enlightenment and their French counterparts. But they shared a most important mental model, namely the belief that economic progress depended on the 'progress of the arts and sciences', as Hume titled his famous 1742 essay, and on suitable political institutions, as formulated by Adam Smith in his widely cited statement that 'little else is required to carry a nation to the highest state of opulence from the lowest barbarism but peace, easy taxes, and a tolerable administration of justice . . . All governments which thwart this natural course, which force things into another channel or which endeavor to arrest the progress of society at a particular point, are unnatural, and to support themselves are obliged to be oppressive and tyrannical'.Footnote 13 Hume and others conjectured about the likelihood of progress occurring in their lifetimes and in future generations, but neither he nor his more enthusiastic French colleagues, such as Turgot and Condorcet, had much of an inkling of what was to come.
The best-known part of the Enlightenment dealt with politics, including of course the matter of legitimacy. But the cognitive rules regarding the state and its relation to the economy went much further. As long as the essence of the state was to transfer resources from the weak multitudes to the powerful few, improvements in technology and the allocation of resources would be hard to translate into widespread growth in the standard of living. The Enlightenment rang in the beginning of the end of the extractive state in Europe. Government still taxed, but even in absolutist empires such as Russia and Prussia, the purpose of taxation became less and less to enrich the rulers and their cronies, and more and more the provision of supposedly welfare-enhancing public goods and services that the private sector for one reason or another could not provide. The rise of free trade in post-1780 Europe is a good indicator of these changing cognitive rules; tariffs were one of the oldest and most widespread practices of rent-seekers. The growing conviction that free trade was desirable and good for the economy derived not just from the highly influential writing of Smith and his liberal followers, who stressed that exchange was a positive-sum and not a zero-sum game, but also from the increasing resistance to any kind of measure that benefitted a few at the expense of the many. The same held for freedom of occupational choice and location of residence. Rent-seeking (which was what mercantilist policies were largely about) was increasingly understood to be associated with large deadweight losses. Monopolies, tariffs, subsidies, cozy offices – what the French called privileges – were all leaky buckets, in which the gains to the winners were smaller than the losses of those who paid the price. North (2005: 63) explicitly mentions the transition from a cognitive rule that sees all economic activity as a zero-sum game to one that sees it as a positive-sum game, but he did not pinpoint the intellectual innovations of the Enlightenment as the crucial events that brought this transformation about.
Continental Europe and the North American colonies implemented many of these reforms through revolution. In Britain, although the unfolding of these policies may have been slower than impatient reformers wished for, the mercantile state as it had existed in the 17th and 18th centuries was practically dismantled by 1850. Along with mercantilist policies, corruption and, to a great extent, rent-seeking melted away, just as Smith and Hume had hoped (Mokyr, 2009, chapter 4). Although, as always, some corrupt behavior could not be avoided, it became the exception rather than the rule. In Britain, as Harling (1995, 1996) has shown, corruption declined, and the ruling class was on its way to turning itself from an extractive class into a professional and largely conscientious service elite (Colley, 1992: 192). In Prussia, Scandinavia, the Low Countries, and to some extent France, rent-seeking and corruption were kept in check, and the state in the countries that had experienced the Enlightenment became increasingly the kind of organization that the 18th-century philosophes had dreamed of. The cognitive rules that governed how the citizens saw the state and their rulers had changed dramatically.
The other great cognitive change of the Enlightenment was the realization that the understanding and control of natural phenomena and regularities were essential to human progress. The importance of scientific insights (both substantive and methodological) to the Industrial Revolution and subsequent economic growth has been a matter of dispute. In many areas, technological progress still occurred the way it always had: small cumulative improvements in processes and products through trial and error and artisanal serendipity. Yet that system was changing, and many of the great industrialists of the Industrial Revolution sought the advice and counsel of scientists at the cutting edge of their profession. Whether the advice of consulting scientists such as William Cullen and Davies Gilbert did much good to the instrument-makers and the spinning-mill owners who sought it is not at all certain: in some cases more so than in others. But what is striking is how committed the age of the Industrial Revolution was to the basic cognitive rule that the insights of science could and would eventually lift productivity and living standards. It is not surprising that one of the heroes of 18th-century thought was Francis Bacon, the philosopher who did more than anyone else to change the way people thought about progress and acted on those beliefs.
Bacon's influence on European economic history is a topic that has not engaged the profession much until now, but it can be analyzed through the Northian concept of mental models at the individual level and its extension to the social and institutional levels (Aoki, 2001; Greif, 2006, chapter 5; see Greif and Kingston, 2011 for a survey).
Cognitive rules and their effects on institutions are quite helpful here. Bacon's main message was that the agenda of natural philosophy, which we would call applied science, should be driven by practical and material needs, to solve technological bottlenecks and to secure 'the relief of Man's Estate', as he called it. This message resonated enormously during the century and a half that followed his death in 1626, and intellectual historians such as Zagorin (1998) and Zittel et al. (2008) have given him the credit he deserves as the pivotal thinker behind the so-called Baconian program that created economic modernity (see also Farrington, 1979; Rossi, 1970). The ideas he promulgated were of course not altogether new, but his writings served as a focal point that clarified and organized the thinking of his followers in the age of Enlightenment and created an altogether novel cognitive rule. To cast this in terms of game theory, subjectively developed beliefs converged on equilibrium beliefs. An initial 'grain of truth' regarding others' behavior – which is what Bacon proposed – is thus sufficient for individuals to learn independently how others will play and for beliefs to converge on a cognitive equilibrium. In the market for ideas, he can be regarded as a highly successful entrepreneur (Mokyr, 2016).
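The 'grain of truth' convergence argument can be sketched numerically. The following is our own toy illustration, with made-up parameter values: agents best-respond to a shared belief about how many others will join the Baconian program and then update that belief from what they observe.

```python
ROUNDS = 25
THRESHOLD = 0.25   # assumed critical mass above which joining pays off

def simulate(initial_belief):
    """Adaptive belief dynamics: best-respond, observe, update."""
    belief = initial_belief
    for _ in range(ROUNDS):
        share = 1.0 if belief > THRESHOLD else 0.0  # common best response
        belief += 0.2 * (share - belief)            # update toward observation
    return round(belief, 3)

# A negligible initial belief decays back to the old equilibrium:
print(simulate(0.05))   # -> 0.0
# A focal point that pushes shared beliefs past the threshold -- the role
# the text assigns to Bacon's writings -- is self-confirming thereafter:
print(simulate(0.30))   # -> 0.997, converging on the new equilibrium
```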
The other idea that drove much of Enlightenment science was the realization that ancient learning was not the be-all and end-all of knowledge. The debate within the European intellectual community, now largely forgotten, is known as the struggle between the ancients and the moderns (Lecoq, 2001; Levine, 1981, 1991). The belief in progress logically implies a certain lack of respect for the learning of earlier generations. French thinkers such as Pascal and Fontenelle argued that knowledge was cumulative and that it was therefore inevitable that each generation knew more than the previous ones.Footnote 14 New cognitive rules that denigrated the once-powerful authority of ancient wisdom reduced the built-in persistence of knowledge systems and allowed faster change. If Aristotle and Ptolemy could be wrong about so many things, could the zero-sum mercantilist view of international trade and the unassailable divine right of kings be far behind? The intellectual community that formed in the 16th century (known as the Republic of Letters) adopted a meta-principle that turned out to be transformative: contestability. There were no more sacred cows – not Aristotle, not the Bible, not even Newton. It was no accident that the Royal Society adopted the motto nullius in verba (on no one's word). Authority was demoted and had to make room for evidence and logic.
The cognitive rules established in the age of Enlightenment thus radically changed the way in which Europeans thought about the natural world around them, and how to go about understanding it. Not only the agenda but also the methods of inquiry were transformed between 1500 and 1750: experimental methods had become legitimate, mathematics and precise computation had gained respectability, and new tools and instruments were deployed to measure and observe new objects with greater precision. How and why to improve science and technology were not the only cognitive rules that changed in this era. McCloskey (2016) argues that the hierarchy of values changed as well, although it did not change monotonically. For her, what mattered above all is that at some point in early modern Europe, society began to honor the 'bourgeoisie' – merchants, investors, high-skill artisans, and speculators – giving them a respect and a social standing that changed their position in society and made others want to excel in these activities. That bourgeois spirit, she maintains, was a key factor in the economic changes that North was trying to explain. North would certainly agree.Footnote 15 It is hard to know whether the ethical factors that McCloskey is talking about are more important than the more mundane advances in the understanding of the physical world stressed by scholars such as Jacob (2007, 2014) and Wootton (2015). That debate will continue. But everyone agrees that what people believed to be true, and how they processed information, must be at the center of any argument that explains the modern world.
5. Cognitive rules and legal development
Belief in progress, the scientific method, and science-based technology provided the cognitive foundations of modern growth. Modern growth, however, would not have come about, at least in Europe at that time, had it not been complemented by reinforcing cognitive foundations of states and the law. The cognitive foundations of the European states were discussed above. Political voice and political representation by economic agents, the rule of law, and the interest of rulers in promoting economic growth as a way to gain in interstate competition were conducive to implementing the agenda, now recognized as possible, of modern economic growth. This does not imply, however, that contemporaries recognized that this was the direction of the European economy. In fact, even the Wealth of Nations, written by Adam Smith in 1776, reveals little awareness of what was about to transpire. The cognitive foundations of the European legal systems were crucial for the economic growth that was to follow, and their functioning, in turn, critically depended on the nature of the political systems. When the states were growth-oriented, the cognitive foundations of the European legal systems rendered them effective in the emergence of modern growth.
One role of the legal system in the European transition to modern growth was to mitigate the social upheavals implied by the transition. Maintaining social order in a society experiencing a transition to modern economic growth is challenging. The challenges are many; among them are protecting new forms of property rights such as copyrights and patents, and providing social safety nets to a relatively large urban population that depends on the market for staple food. The transition to modern growth is socially challenging also because it requires large investments in new public goods such as research institutes, schooling, and infrastructure. Losers from economic development and change need to be compensated or otherwise held at bay. Population explosions in urban areas had to be dealt with and checked, and the internal and external predators that greater wealth attracted had to be deterred.
In Europe, the evolving cognitive foundations of the law facilitated achieving such objectives. The cognitive foundations of the European legal systems evolved alongside, and in a manner complementary to, the political developments described above. An important consequence of these developments in the cognitive basis of the European state was the decline in the legal power of the Church and the increasing authority of the state over the law. As late as 1300, the Church's Canon law was more advanced than individual state laws. Moreover, early states needed to hire churchmen to have literate civil servants. The cognitive rules of Christianity, however, did not serve these rulers well in attempting to control civil law. Having emerged within the political body of the Roman Empire and its strong civil legal tradition, the Church was to render unto Caesar that which was Caesar's.
In its power struggles with the European monarchs, however, the Church sought control over the law. Legal authority would have enhanced the Church's capacity to discredit a ruler as a sinner. A king who was a sinner contradicted the premise that a Christian king had first and foremost to be Christian. As the church had the authority to decide who was a sinner, such discretion implied a great deal of political power, and the Papacy repeatedly excommunicated kings and placed nations under interdict. Over time, however, the cognitive rules changed: the number of actions considered crimes increased and the number considered sins declined. Eventually the idea of sin disappeared as a concrete political concept.
To illustrate other possibilities, consider the legal development in the Muslim Mediterranean area, which evolved along a different path of cognitive foundations complementing those of its political system. The cognitive foundations of the political order in the Muslim lands in the medieval era differed from those in Europe and China. First and foremost, the Muslim rulers who created the first Muslim empire were titled Caliph, Amir Al-Mu'minin, meaning the substitute military leader of the believers. In other words, no Muslim ruler, including the Caliph himself, inherited the role of Muhammed as a spiritual leader. Authority over religious matters remained the responsibility of the Islamic scholars.
Among the responsibilities of the Islamic scholars were advising the provincial administrators, caring for the needy, providing education, and interpreting the Sharia, the Muslim religious law, and judging accordingly. Islamic law, however, did not cover all legal matters and therefore had to be complemented by other codes, among them civil codes, non-Sharia codes, and codes of customary law. Common to these codes is that, prior to the modern period, they were neither based on nor incorporated into Islamic law. In contrast, the law in Europe became increasingly unified and centralized, as the growing power of the European states and their representative assemblies relative to the religious authorities allowed them to control the legal system. Divergence in the cognitive foundations of political order thus influenced institutional – legal – developments.
The following three tables summarize the situation: they contrast the nature of the legal systems in Europe and the Muslim world. The former is represented by the early 19th-century Code Napoleon (which epitomizes and aggregates many of the legal customs of continental Europe).
The important point to take from this comparison is that in the areas of the law most important for economic development – contract, constitutional, and taxation – the capacity of the Muslim state was particularly limited. In general, the Ottoman state was effective and fast in adopting military technology from Europe, and this served Islam well on the battlefield. But the Ottomans were ineffective in enacting laws that could foster modern growth. Charity, contract law, property law, and inheritance law, among others, were in the domain of the Islamic scholars, not the state. Laws in these areas would have been considered un-Islamic and would only have highlighted the predicament of the state regarding its consistency with Islamic law.Footnote 16
The cost to the Sultans of altering the laws covered by the Sharia was high, as such changes reaffirmed the state's inherently un-Islamic nature. The high cost of changing the law limited the incentive to introduce transition-enhancing legal changes. Recall that the areas of the law most important for economic activity, such as contract law, charity, and property law, were covered by the Sharia. It may very well be that early in the history of Islam, the Sharia law on these matters was appropriate for the needs of the time. But modern growth required commercial law, contract law, and property law very distinct from those of the previous era. In the absence of the capability to adapt gradually to changing circumstances, change, when it arrived in the 20th century, was violent and imposed top-down, as the experience of the Turkish Republic demonstrates.
An important question in growth economics is whether differences in legal systems affect economic growth or other welfare-related outcomes. An important line of work in economics has established the impact of the law and its historical origins on subsequent economic development. Perhaps the most prominent is the legal origins literature initiated by Andrei Shleifer and his collaborators, who have shown that the common law adopted by ex-British colonies fostered the deepening of financial markets and thus contributed to development.Footnote 17 By focusing on the adoption of colonial law, this work circumvented the question of the endogenous determinants of law (Berkowitz and Clay, Reference Berkowitz and Clay2011). That question has similarly been neglected in the important line of research associated with Kuran (Reference Kuran2011), which noted the role of law in the economic decline of the Muslim world. His pioneering work examined the implications of the Sharia for economic development and concluded that the inheritance laws, the lack of legally formal incorporation laws, and the rigidity of laws governing pious foundations limited capital accumulation and formation. The importance of this insight notwithstanding, it sidesteps the relationship between the cognitive foundation of the state and legal changes.
These issues are the focus of the analysis here. The transition to the modern economy required more than formal legal reforms: it required changing the cognitive rules of society and embedding these changes in the legal code. Kuran touches upon these issues when discussing the reasons why the Islamic world did not adopt or invent Western-style business corporations.Footnote 18 Modern economic growth required more than adapting existing contractual and organizational forms to new tasks. It required forming different cognitive rules regarding the nature of the economy, the practice of business and commerce, the mechanisms of conflict resolution and contract enforcement, and the precise role of government in managing human affairs. At the same time, it also required an agile legal system, such as Great Britain possessed, that did not stand in the way of the political and social adaptations and legal innovations that underpinned growth (Mokyr, Reference Mokyr2009: 377, 413–418). To implement the new technologies and knowledge that provided the basis for the modern market economy, societies needed to create and adopt new cognitive rules about the world around them and the nature of the economy.
Another important driver of changes in cognitive rules was the slow decline of fatalism – the belief that the outcomes of human life were the result of God's will and hence inevitable destiny. The cognitive rule that slowly emerged instead viewed outcomes, good or bad, as due to a combination of human agency, ability, and diligence with luck, random events, and accident. The challenge was to tell one from the other.
This cognitive transformation had profound implications for the rise of a 'modern' political economy: it enabled the legal changes that the transition required if the losers were to be prevented from blocking further progress (see Greif and Iygun, Reference Greif and Iygun2013; Greif and Tabellini, Reference Greif and Tabellini2010, Reference Greif and Tabellini2016 for analyses of these changes in distinct societies). To illustrate, the concept of the deserving poor (as distinct from the idle poor) recognized that although in medieval societies most individuals had direct access to land from which they could make a living, this was no longer the case in industrialized economies. In agrarian economies, people were still subject to shocks caused by weather and other natural factors, but these reflected divine will. Charity mitigated the worst results, but it was seen as a redemption of the giver, not the receiver.
After 1500, Western societies developed cognitive rules that distinguished between people who were poor through no fault of their own (and thus merited relief) and those who were able-bodied but idle by their own choice (and thus did not). Orphans, widows, cripples, the blind, and the mentally handicapped were all unequivocally deserving, but so were working people who had been the victims of economic and technological forces stronger than themselves. The history of the English poor law demonstrates how difficult it was to make this distinction and prevent moral hazard in such situations.
Similarly, in industrial market economies, in which innovations and exogenous shocks on both the supply and demand sides were common, it became recognized that a businessman might go bankrupt even if he took all the right actions. In England during the 17th century, the modern notions of bankruptcy and insolvency were introduced. Was bankruptcy caused by force majeure or by bad faith? Britain's bankruptcy laws, originating in 1542 but reformulated in the 1706 Bankruptcy Act, recognized that some debtors could not pay because of events beyond their control, and that punishing such people would have neither deterrent nor signaling value. Under the Lord's Act of 1759, Parliament allowed creditors to demand that bankrupt debtors prepare a list of their assets under oath; debtors who complied were released from debtors' prison.
6. Conclusions
In his first serious work on institutions, North (Reference North1981) pointed to the importance of cognitive rules in explaining human behavior. He (somewhat confusingly) used the term 'ideology', but what he meant is unambiguous: everyday behavior and our understanding of the world around us are guided by what we think is knowledge, which he thought consisted at base of theoretical, intellectual efforts to rationalize the behavioral patterns of individuals and groups (p. 48). Yet he provided little elaboration on this insight, and the historical examples he provided were mostly concerned with property rights.
We have argued here that this framework, suitably expanded, can provide a critical component for understanding the evolution of institutions. To understand historical change, we should explore not so much the physiological roots of cognition and the nature of consciousness, as North (Reference North2005, chapter 4) suggested, but the evolution of cognitive rules over time through learning, imitation, and persuasion. Cognitive rules change over time as the result of competitive forces in a 'market for ideas', in which basic cognitive rules are proposed and either accepted or rejected. Among the most important rules that established the modern economies are those concerning the legitimacy of the ruler, the incentive structures that govern wealth creation and distribution, and the agenda, methods, and purpose of scientific research. Far beyond his own focus on the evolution of property rights, North's insights provide us with guidance on how to see 'the Rise of the Western World' in an entirely new light.
We do not mean this account to sound like some kind of Whiggish narrative in which good, progressive, and just ideas drove out selfishness and stupidity. Economic historians have written for decades about technological progress and institutional change. There is no presumption that, over the long haul, there is a secular trend toward improvement in institutions or even in the beliefs and cognitive structures that underlie them.Footnote 19
The Enlightenment was followed by a counter-enlightenment of xenophobia, cultural arrogance, and romantic militarism, and with them came protectionism and new opportunities for rent-seekers. Democratic and open institutions are constantly challenged by the likes of Mussolini and Viktor Orbán. The evidence for institutional progress – even if we could reach a consensus on what it means – is spotty and ambiguous. Well-functioning and integrated markets can disintegrate faster than they can emerge, as happened in August 1914 (and, within a hair's breadth, on 9/11). The rule of law, to say nothing of the peace and the respect for life and property on which efficient allocations depend, has been abruptly reversed more than once – most recently in Syria and Libya.
Although some societies may have become more inclusive and open, in many others autocratic rulers have driven rent-seeking and corruption to a peak that has given rise to the term ‘kleptocracy’. For every Brazil, in which there has been significant improvement, popular perceptions notwithstanding (Alston et al., 2016), there is a Venezuela and an El Salvador.Footnote 20