PUBLIC CHOICE: AN INTRODUCTION

1. Origins

Public Choice has been defined as the application of the methodology of economics to the study of politics. This definition suggests that public choice is an inherently interdisciplinary field, and so it is. Depending upon which person one selects as making the pioneering contribution to public choice, it came into existence either in the late 18th century as an offshoot of mathematics, or in the late 1940s as an offshoot of economics.

Condorcet was the first person, as far as we know, to discover the problem of cycling, the possibility, when using the simple majority rule, that an alternative x can lose to y in a vote between the two, y can lose to another alternative z, and yet z will also lose to x. The existence of such a possibility obviously raises the issue of how a community can decide among these three alternatives when a cycle exists, and what the normative justification for any choice made will be. No cycle exists, of course, if some alternative, say y, can defeat both x and z. The literature has commemorated Condorcet’s contribution by calling an alternative such as y, which defeats every other alternative in a pairwise vote, a Condorcet winner. A vast number of papers and books have analyzed both the normative and positive implications of the existence of cycles.
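
The cycle is easy to exhibit concretely. A minimal Python sketch of the classic three-voter profile (hypothetical, not Condorcet's own example) runs all pairwise majority votes and shows that no Condorcet winner exists:

```python
from itertools import combinations

# The classic cyclic profile: three voters ranking alternatives x, y, z.
profile = [
    ["x", "y", "z"],  # voter 1: x > y > z
    ["y", "z", "x"],  # voter 2: y > z > x
    ["z", "x", "y"],  # voter 3: z > x > y
]

def pairwise_winner(a, b):
    """Winner of a simple-majority vote between alternatives a and b."""
    votes_for_a = sum(1 for ranking in profile if ranking.index(a) < ranking.index(b))
    return a if votes_for_a > len(profile) / 2 else b

for a, b in combinations("xyz", 2):
    print(f"{a} vs {b}: {pairwise_winner(a, b)} wins")
# x beats y, y beats z, but z beats x: a cycle, hence no Condorcet winner.
```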

Condorcet gave his name to one other important part of the public choice literature, when he proved what he called a theorem about juries, and what we now call the Condorcet jury theorem. This remarkable theorem provides a justification both for making collective decisions with the simple majority rule and for the institution of democracy itself. It rests on three assumptions: (1) The community faces a binary choice between x and y, with only one of the two choices being the “right” choice for the community. (2) Everyone in the community wants to make the right choice. (3) The probability p that a citizen votes for the right choice is greater than 0.5. The theorem states that the probability that the community makes the right choice when it uses the simple majority rule increases as the number of voters increases, approaching one in the limit.
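
The force of the theorem is a simple binomial calculation, sketched below (the value p = 0.6 is purely illustrative):

```python
from math import comb

def p_majority_correct(n, p):
    """Probability that a majority of n independent voters (n odd), each
    correct with probability p, picks the right alternative."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101, 1001):
    print(n, round(p_majority_correct(n, 0.6), 4))
# With p = 0.6 a single voter is right 60% of the time, a jury of 11 about
# 75% of the time, 101 voters about 98%, and 1001 voters essentially always.
```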

That the theorem provides a normative case for the simple majority rule is obvious, if one accepts its premises. Condorcet described the collective decision as one regarding the determination of whether a person had committed a particular crime or not — hence the theorem’s name. For this type of collective decision the definition of “the right decision” is fairly uncontroversial — the person should be declared guilty if and only if she did in fact commit the crime. The assumption that everyone wants to make the right choice in this situation also seems uncontroversial.

The argument that the theorem also provides a justification for democracy is more subtle, and under it the assumptions underpinning the theorem become more controversial. Imagine, however, that everyone in the community agrees that they would like a “good government” that would be honest and provide goods and services and levy taxes so as to maximize the welfare of the community. Two parties compete for the honor of becoming the government, and each citizen votes for the party that he believes will form the best government. If each citizen has a greater than 0.5 probability of picking the party that will form the best government, then (two-party) democracy chooses the best government with near certainty in a large electorate.

The second and third assumptions take on extreme importance when the theorem is used as a defense of democracy. Citizens share a common goal — good government. Each citizen has a greater than 0.5 probability of picking the party that will provide the best government. Citizens do not merely flip coins to decide how to vote; they study the parties and make an informed choice.

The assumption that everyone agrees on what good government is becomes more controversial when we are thinking of the whole panoply of things governments do. If citizens disagree about what government should do, there will be no “right choice” for all citizens. This being the case, parties will compete not only on the basis of how good they will be at advancing the community’s welfare, but also on the basis of how that welfare should be defined. Finally, when one is thinking of a large electorate, even the assumption that voters are well-informed becomes controversial.

Many studies in public choice employ some of the assumptions needed to apply the Condorcet jury theorem to the study of politics; many others do not. All of the work on party competition that uses “spatial modeling” assumes, for example, that voters are well-informed, that they know the positions of the parties in the issue space. At the same time, however, this literature does not assume that voters agree on where the parties should be located in the issue space. Conflicts of interest or preferences are assumed, and thus voters do not agree on which party is best even when they are certain about what the parties will do in office — assuming, that is, that the parties will do different things. There is another branch of the public choice literature, however, that does assume common interests among citizens, and thus does accord with the second assumption underlying the jury theorem. This work often focuses on decisions made at the constitutional stage of the political process and today often goes by the name of constitutional political economy.

Thus, directly or indirectly Condorcet’s pioneering work raised many of the questions with which the modern public choice literature has been concerned. Do individuals share common interests? Is democracy stable, or does it produce cycles? Are voters sufficiently well-informed that one gains information by aggregating their preferences? What voting rule should be used to aggregate these preferences?1

Borda was critical of the use of the simple majority rule to aggregate preferences, and proposed instead a rule that today carries his name. If there are n possible outcomes to a collective decision, each voter assigns a one to his most preferred choice, a two to his second most preferred choice, and so on. The scores awarded are then added across all voters, and the Borda-count rule selects as the winner the alternative receiving the lowest score. With only two alternatives from which to choose, the Borda count is equivalent to the simple majority rule. When n > 2, it avoids cycling and has additional desirable properties that make it attractive.2
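
A minimal sketch of the rule as just described, with three hypothetical rankings:

```python
# Borda count: rank 1 for the most preferred alternative, rank n for the least;
# the alternative with the lowest total score wins.
rankings = [
    ["x", "y", "z"],
    ["y", "z", "x"],
    ["y", "x", "z"],
]

scores = {alt: 0 for alt in rankings[0]}
for ranking in rankings:
    for position, alt in enumerate(ranking, start=1):
        scores[alt] += position

winner = min(scores, key=scores.get)
print(scores, "->", winner)   # {'x': 6, 'y': 4, 'z': 8} -> y
```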

Three more names deserve brief mention before we end this discussion of the forerunners to public choice. Another mathematician, the Reverend C. L. Dodgson, better known today as Lewis Carroll, wrote a series of pamphlets analyzing the properties of voting procedures roughly a century after the work of Borda and Condorcet.3 John Stuart Mill’s Considerations on Representative Government (1861) must also be mentioned, since he was one of the great economists of the 19th century, although the work is arguably an early contribution to political science rather than to public choice, since it makes no noticeable use of economic reasoning. Nevertheless, the great thinker’s logical mind is quite evident, and it is one of the few works in political science from the 19th century that still warrants reading by students of public choice.

The same can be said of Knut Wicksell’s (1896) classic essay on Just Taxation, written as the 19th century came to a close. As the title suggests, it is as much a contribution to public finance as to the study of politics, or more, but it contains an early normative economic justification for the state, and a spirited defense of the unanimity rule for aggregating individual preferences.

2. Early Classics

The modern literature on public choice came into being with the publication of articles by Duncan Black (1948a,b), James Buchanan (1949) and Kenneth Arrow (1950) in the late 1940s and 1950. Retrospectively, one can identify three important contributions between Wicksell and Black, namely Hotelling (1929), Schumpeter (1942) and Bowen (1943), but it was Black, Buchanan and Arrow who got the public choice ball rolling.

Duncan Black’s two articles, first published in 1948 and then republished with extensions and an interesting account of the history of ideas lying behind his work, take up the problem of cycling under the simple majority rule and provide a proof of the famous median voter theorem. This theorem has been frequently invoked to describe equilibria in theoretical studies and has been the analytical foundation for much of the empirical work in public choice.
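
The content of the theorem is easy to convey with a small sketch. Assuming single-peaked preferences on a line (here, utility falls with the distance between a proposal and a voter's ideal point, and the ideal points are hypothetical), the median ideal point defeats every other proposal under the simple majority rule:

```python
ideals = [1.0, 3.0, 4.0, 7.0, 9.0]
median = sorted(ideals)[len(ideals) // 2]   # 4.0

def defeats(a, b):
    """True if proposal a defeats proposal b under the simple majority rule."""
    return sum(1 for i in ideals if abs(i - a) < abs(i - b)) > len(ideals) / 2

challengers = [x / 10 for x in range(0, 101) if x / 10 != median]
print(all(defeats(median, c) for c in challengers))   # True: the median is unbeaten
```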

Arrow proved that no procedure for aggregating individual preferences could be guaranteed to produce a complete social ordering over all possible choices and at the same time satisfy five seemingly reasonable axioms. Indirectly, Arrow’s theorem raised the problem of cycling again, since one of his axioms was intended to ensure that cycling did not occur. Arrow’s 1950 article and 1951 book spawned much controversy and a huge literature.

Although Buchanan published several important articles prior to 1962, it was the book The Calculus of Consent, published in that year and coauthored with Gordon Tullock, that established Buchanan and Tullock as leading scholars in the field. Although the book contains many interesting discussions of the properties of the simple majority rule, logrolling and the like, its most lasting contribution to the literature has been to introduce the distinction between the constitutional stage of collective decision making, in which the voting rules and other institutions of democracy are selected, and the application of these rules to the actual work of making collective choices.

In Capitalism, Socialism and Democracy, Schumpeter put forward “another theory of democracy” in which the social function of democracy is fulfilled incidentally by the competitive struggle for power between parties, just as the social function of markets is fulfilled incidentally by the competitive struggle for profits among firms (Schumpeter, 1950, Ch. 22). Anthony Downs did not cite this argument of Schumpeter directly, but he did state that “Schumpeter’s profound analysis of democracy forms the inspiration and foundation for our whole thesis” (1957, p. 27, n. 11). Downs was a student of Kenneth Arrow, and it appeared that with his dissertation he wished to develop Schumpeter’s insight and demonstrate how political competition between parties could produce a welfare maximum and thus avoid the dire implications of Arrow’s impossibility theorem. Downs ultimately failed in this endeavor, but succeeded in introducing a mode of analysis of competition using spatial modeling that was to have a profound impact on the development of the field, particularly among practitioners trained in political science. Building again on insights from Schumpeter (1950, pp. 256-64), Downs also developed a model of the rational voter who, among other things, rationally chooses to remain ignorant of most of the issues in an election (Chs. 11-14).

Another doctoral dissertation that was to have a profound impact on both the public choice field and political science in general was that of Mancur Olson, published in book form in 1965.4 Just as Downs had shown that the logic of rational decision making leads individuals to invest little time in collecting information to help them decide how to vote, Olson showed that “the logic of collective action” would prevent individuals from voluntarily devoting time and money to the provision of public goods. Mancur Olson did not invent the “free-rider problem,” but no one has put it to better use than he did in this and his subsequent contributions to the literature.

All of the “early classics” discussed so far were written by economists. One contribution by a political scientist that certainly falls into this category is William Riker’s The Theory of Political Coalitions (1962). In this book Riker developed the logic of coalition formation into a theory that could explain, among other things, why “grand coalitions” were short lived. Riker’s book foreshadowed a large literature that would apply game theoretic tools to political analysis.

Deciding when the early classics end and the “late” ones begin is a somewhat subjective judgment. Perhaps from the vantage point of 2002, however, the definition of early can be extended up through the early 1970s to include three more sets of works. First of these in chronological order would be an article published by Gordon Tullock in 1967. This article might be dubbed a “hidden classic,” since its seminal nature did not become apparent to the profession at large until its main idea was rediscovered and developed by Anne Krueger (1974) and Richard Posner (1975) sometime later. It was Krueger who gave the idea the name of rent seeking. Up until Tullock’s 1967 article appeared, standard discussions of “the social costs of monopoly” measured these costs solely in terms of the “deadweight triangle” of lost consumers’ surplus resulting from the monopolist’s restriction of output. The rectangle of monopoly rents was treated as a pure transfer from consumers to the monopolist and as such devoid of any welfare significance. Tullock pointed out, however, that the right to supply the monopolized product or service was a valuable right, and that individuals could be expected to invest time and money to obtain or retain this right. These investments constitute a pure social waste, as they serve only to determine the identity of the monopoly rent recipient. They have no positive impact on the allocation of resources.
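
A worked numerical illustration (my numbers, not Tullock's) shows why the point matters quantitatively: with linear demand and constant marginal cost, the rent rectangle is twice the deadweight triangle, so full dissipation of the rents in the contest for the monopoly right triples the social cost of monopoly:

```python
# Linear demand P = 100 - Q, constant marginal cost 20 (illustrative values).
a, c = 100.0, 20.0
q_comp = a - c                 # competitive output (P = MC):  80
q_mono = (a - c) / 2           # monopoly output (MR = MC):    40
p_mono = a - q_mono            # monopoly price:               60

deadweight_triangle = 0.5 * (p_mono - c) * (q_comp - q_mono)   # 800
rent_rectangle = (p_mono - c) * q_mono                         # 1600

print(deadweight_triangle, rent_rectangle)
# If rent seekers fully dissipate the rectangle, the social cost of the
# monopoly rises from 800 to 800 + 1600 = 2400.
```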

The social costs of rent seeking are potentially very large. Numerous articles have appeared since the pioneering contributions of Tullock and Krueger. One branch has analyzed theoretically the conditions under which the total resources invested in rent seeking fall short of, equal, or exceed the size of the rents pursued. A second branch has sought answers to the same questions empirically.5 One of the curiosities of this literature has been that it has by and large analyzed rent seeking as if it were exclusively a problem of the public sector, even though the logic of rent seeking applies with equal validity to the private sector.6

While Tullock’s rent-seeking article has proved to be a hidden classic, Sen’s (1970) article about the Paretian liberal might be dubbed an “unassuming classic.” Sen put forward another sort of paradox, in the spirit of the Arrow paradox, but neither the author nor any of the readers of this six-page note is likely to have appreciated at the time it appeared the impact it was to have on the literature.7 Where Arrow proved that it was impossible not to have a dictator and satisfy four other axioms, Sen proved that it was impossible to allow someone to be a dictator over even one simple choice — as for example whether he sleeps on his back or his stomach — and satisfy three other axioms.

The last early contribution that qualifies as a classic is William Niskanen’s (1971) book on bureaucracy. Niskanen posited that bureaucrats seek to maximize the size of their budgets and then proceeded to derive the implications of this assumption. A by now huge literature has been built on the analytical foundation that he laid.8

3. The Second Generation

3.1. More Impossibilities

During the 1970s several papers appeared that extended the dire implications of Arrow’s impossibility theorem and the literature it spawned. Satterthwaite (1975) and Gibbard (1977) demonstrated the incompatibility of having a preference aggregation procedure that was both nondictatorial and strategyproof, where by strategyproof is meant that everyone’s best strategy is to faithfully reveal their true preferences. These theorems illustrated the close relationship between Arrow’s independence-of-irrelevant-alternatives axiom and the goal of having a preference aggregation procedure in which individuals do not have an incentive to behave strategically.

McKelvey (1976) and Schofield (1978) drew out a further implication of a procedure’s failure to satisfy the transitivity axiom. When a procedure leads to voting cycles, it is possible to move anywhere in the issue space. An agenda setter can take advantage of this feature of cycling to lead a committee to the agenda setter’s most preferred outcome.
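
A minimal sketch of agenda control, reusing the cyclic three-voter profile from the sketch in section 1: under a sequential binary agenda, whichever alternative the setter places last wins.

```python
# Pairwise majority results for the cyclic profile: x beats y, y beats z, z beats x.
majority = {("x", "y"): "x", ("y", "z"): "y", ("z", "x"): "z"}

def pairwise(a, b):
    return majority.get((a, b)) or majority.get((b, a))

def run_agenda(agenda):
    """Pit the first two alternatives against each other, then each survivor
    against the next item on the agenda."""
    winner = agenda[0]
    for challenger in agenda[1:]:
        winner = pairwise(winner, challenger)
    return winner

for agenda in (["x", "y", "z"], ["y", "z", "x"], ["z", "x", "y"]):
    print(agenda, "->", run_agenda(agenda))
# The last-placed alternative always survives: the agenda setter picks the outcome.
```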

3.2. The Veil of Tears Rises

The theorems of McKelvey and Schofield might be regarded as the capstones — or should we say tombstones — for the literature initiated by Arrow. This literature paints a very negative picture of the capacity of democratic procedures to aggregate information on voter preferences in a normatively appealing manner. Collective decisions were likely to be arbitrary or dictatorial. Free riding and the strategic concealment of individual preferences undermined democracy’s legitimacy. Rent seekers and bureaucrats contributed to the “waste of democracy.” William Riker’s (1982) attack against “populist democracy” — the idea that democratic procedures could aggregate individual preferences reasonably — accurately conveys the flavor of this literature. Even before Riker’s book appeared, however, several developments in the public choice literature were taking place that painted a far cheerier picture of democracy’s potential. The first of these concerned the potential for direct revelation of preferences.

3.2.1. Voting Rules

In his classic article deriving the conditions for the Pareto optimal allocation of private and public goods, Paul Samuelson (1954) matter-of-factly proclaimed that it would be impossible to get people to honestly reveal their preferences, because no person could be excluded from consuming a pure public good. So things stood for nearly 20 years, until Clarke (1971) and Groves (1973) showed that individuals could be induced to reveal their preferences for public goods honestly by charging them a special “incentive tax” equal to the costs that their participation in the collective choice process imposed on the other voters. This class of procedures was first discovered in another context by William Vickrey (1961), and has come to be known in the public choice literature as “demand revelation” processes.
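
A minimal sketch for the simplest case, a binary decision on a public project (the numbers are hypothetical, and this is the pivotal variant of the mechanism rather than Clarke's own exposition): each voter reports her net value for the project, and any voter whose report flips the collective decision pays a tax equal to the net harm her participation imposes on the others, which makes truthful reporting a dominant strategy.

```python
def pivotal_mechanism(net_values):
    """Build the project if reported net values (value minus cost share) sum to
    more than zero; charge each decision-flipping voter the net cost she
    imposes on everyone else."""
    decision = sum(net_values) > 0
    taxes = []
    for v in net_values:
        others = sum(net_values) - v
        pivotal = (others > 0) != decision
        taxes.append(abs(others) if pivotal else 0.0)
    return decision, taxes

# Three voters: without voter 1 the project fails, so she pays the others' net loss.
print(pivotal_mechanism([30.0, -10.0, -15.0]))   # (True, [25.0, 0.0, 0.0])
```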

Mueller (1978, 1984) showed that the preference revelation problem could be solved using a three-step procedure: each individual first makes a proposal — say a quantity of public good and a tax formula to pay for it; an order of veto voting is then determined at random; and finally each individual, in that order, removes (vetoes) one element from the set of all proposals.
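
A minimal sketch of the procedure, assuming naive (sincere) vetoing and hypothetical utilities; the strategic analysis in the cited papers is subtler:

```python
import random

utilities = {                       # voter -> utility of each proposal (hypothetical)
    "A": {"a": 3, "b": 2, "c": 1, "SQ": 0},
    "B": {"a": 1, "b": 3, "c": 2, "SQ": 0},
    "C": {"a": 2, "b": 1, "c": 3, "SQ": 0},
}

proposals = {"a", "b", "c", "SQ"}   # step 1: one proposal per voter, plus the status quo
order = list(utilities)
random.shuffle(order)               # step 2: random order of veto voting

for voter in order:                 # step 3: each voter vetoes one remaining proposal
    worst = min(proposals, key=lambda p: utilities[voter][p])
    proposals.discard(worst)

print("surviving proposal:", proposals.pop())
```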

Hylland and Zeckhauser (1979) added to the list of preference-revelation procedures by showing that individuals will allocate a stock of “vote points” across a set of issues so as to reveal the intensities of their preferences on these issues, if the quantities of public goods provided are determined by adding the square roots of the points each individual assigns to an issue. During the decade of the 1970s, one new method appeared after another to solve the heretofore seemingly insoluble problem of inducing people to reveal their preferences for public goods honestly.
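
A minimal sketch of the aggregation step just described (the point allocations are hypothetical, and the incentive argument of the original paper is not reproduced):

```python
import math

points = {                                   # each voter spreads 100 vote points
    "voter1": {"parks": 64, "roads": 36},
    "voter2": {"parks": 25, "roads": 75},
    "voter3": {"parks": 0,  "roads": 100},
}

for issue in ("parks", "roads"):
    signal = sum(math.sqrt(alloc[issue]) for alloc in points.values())
    print(issue, round(signal, 2))
# parks: 8 + 5 + 0 = 13.0; roads: 6 + 8.66 + 10 = 24.66 -> more roads supplied.
```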

3.2.2. Two-party Competition

During the decade of the 1980s, several papers appeared that suggested that two-party representative governments were far better at aggregating individual preferences than had previously been demonstrated. One set of these articles simply replaced the assumption of the Downsian voter model, that each individual votes with probability one for the candidate promising her a higher utility, with the assumption that the probability of an individual’s voting for a candidate increases when the candidate promises her a higher utility. Substituting this “probabilistic voting” assumption for the standard Downsian deterministic voting assumption allowed Coughlin and Nitzan (1981a,b) and Ledyard (1984) to prove that the competition for votes between two candidates led them to select an equilibrium pair of platforms that maximized some form of social welfare function. Schumpeter’s assertion that the competition for votes between parties resulted in a form of “invisible hand theorem” for the public domain was, after forty years, finally proved.
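
A small numerical illustration (my construction, not Coughlin and Nitzan's own model) of how probabilistic voting restores equilibrium and gives it a welfare interpretation. Suppose voter i supports candidate A with probability u_i(a)/(u_i(a) + u_i(b)); with the utilities below, the platform maximizing the sum of log utilities is a best reply to itself:

```python
import numpy as np

ideals = np.array([0.0, 0.3, 1.0])           # hypothetical voter ideal points

def u(x):
    return np.exp(-(x - ideals) ** 2)        # voter utilities from platform x

def expected_votes_for_a(a, b):
    ua, ub = u(a), u(b)
    return float(np.sum(ua / (ua + ub)))

grid = np.linspace(-1.0, 2.0, 3001)

# Platform maximizing the sum of log utilities (here, the mean ideal point):
welfare_opt = grid[np.argmax([np.sum(np.log(u(x))) for x in grid])]

# Candidate A's vote-maximizing reply when B stands at the welfare optimum:
best_reply = grid[np.argmax([expected_votes_for_a(a, welfare_opt) for a in grid])]
print(welfare_opt, best_reply)               # both approximately 0.433
```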

Under the Downsian assumption of deterministic voting, every platform choice by one party in a multidimensional issue space can be defeated by an appropriate choice of platform by the other, and the two candidates might cycle endlessly. Such cycling could in theory take the candidates far away from the set of most preferred points of the electorate. A platform x lying far from the set of most preferred points of the electorate would, however, be dominated by some other point y lying between x and the set of most preferred points of the electorate, in the sense that y could defeat every platform that x could defeat, and y could also defeat x. By restricting one’s attention to points in the issue space that are not dominated in this way, the set of attractive platforms for the two candidates shrinks considerably. The cycling problem does not disappear entirely, but it is reduced to a small area near the center of the set of most-preferred points for the population.9

These results clearly sound a more optimistic note about the potential for preference aggregation than many of the early classics and the works discussed in section 3.1. The reader can see how dramatic the difference in perspectives is by comparing the books by Wittman (1995) and Breton (1996) to that of Riker (1982).

3.3. Political Business Cycles

Almost all Nobel prizes in economics have been awarded for contributions to economic theory. All of the early classics in public choice have been theoretical contributions, as have the subsequent contributions reviewed so far.10 As the public choice field has matured, however, an increasing number of studies have appeared testing any and all of its theoretical propositions. Space precludes a full review of the many empirical contributions to the field that have been made. We have therefore selected only three areas where a lot of empirical work has been done, beginning with the area of “political business cycles.”

One of the most frequently quoted propositions of Anthony Downs (1957, p. 28) is that “parties formulate policies in order to win elections, rather than win elections in order to formulate policies.” Among the issues of great concern to voters, few rank higher than the state of the economy. If the quoted proposition of Downs is correct, then parties should compete for votes on the basis of their promised macroeconomic policies, and both parties in a two-party system should offer the same set of policies. Kramer (1971) was the first to test for a relationship between the state of the economy and votes for members of the House and the President. Nordhaus (1975) and MacRae (1977) were among the first to develop a Downsian model of the political business cycle in which both parties are predicted to follow the same strategy of reducing unemployment going into an election to induce short-sighted voters to vote for the incumbent party/candidates.

Numerous observers of politics in both the United States and the United Kingdom have questioned the prediction of the one-dimensional Downsian model that both parties adopt identical positions at the most-preferred outcome for the median voter. This prediction appears to be blatantly at odds with the evidence concerning macroeconomic policies, where right-of-center parties clearly seem to be more concerned about inflation, while left-of-center parties are more concerned about unemployment. Early contributions by Hibbs (1977, 1987) and Frey incorporated these “partisan effects” into a political model of macroeconomic policy and provided empirical support for them.

In some areas of public choice, data for testing a particular proposition are difficult to obtain and empirical work is accordingly sparse. Such is not the case with respect to hypotheses linking policy choices to macroeconomic outcomes. Data on variables like unemployment and inflation rates are readily available for every developed country, as are data on electoral outcomes. Each passing year produces more observations for retesting and refining previously proposed hypotheses. The empirical literature on political business cycles is by now vast. The main findings, grossly condensed, are that partisan differences across parties are significant and persistent, but that parties of both the left and the right do tend to become more “Downsian” as an election approaches, adapting their policies to sway the uncommitted, middle-of-the-road voters.11

3.4. Public Choice Goes Multinational

All of the early classics discussed in section 2 were written by either American or British authors. It is thus not surprising that the literature on representative government, as for example in the political business cycle area, has almost always assumed the existence of a two-party system — even when testing the model using data from countries with multiparty systems. In the last couple of decades, however, considerably more attention has been devoted to analyzing properties peculiar to multiparty systems. This literature has been heavily populated by persons trained in public choice, and is one in which the lines between political science and public choice are particularly blurred.

A salient feature of multiparty systems is that no single party typically wins a majority of seats in the parliament, and thus no single party is able to form the government. Consequently, either a coalition of parties must come together if the cabinet is to reflect the wishes of a majority of the parliament, or a minority government forms. Two important questions arise: (1) which parties will build the coalition that forms the government, and (2) how long will it last?

Game theory provides the ideal analytical tool for answering the first question, and it has been used to make a variety of predictions of the coalition that will form after an election. Riker’s (1962) prediction, that a minimum winning coalition forms, receives as much support as any theory, although it accounts for less than half of the governments formed in European countries since World War II.12 In particular, it fails to predict the many minority governments that have existed.

A theory that can account for the existence of minority governments has been put forward by van Roozendaal (1990, 1992, 1993). His theory emphasizes the pivotal position of a party that includes the median member of the parliament (a central party), under the assumption that the parties can be arrayed along a single, ideological dimension. Under the assumption that each party favors proposals coming close to its own position along the ideological dimension over proposals lying far away, a central party will be a member of every coalition that forms. A large central party is likely to be able to successfully lead a minority government by relying on votes from the left to pass some legislation and votes from the right for other legislation.
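
Identifying the central party is mechanical once the parties are ordered along the ideological dimension. A minimal sketch with hypothetical seat counts:

```python
parties = [("Left", 35), ("Center", 20), ("Right", 45)]   # ordered left to right

total_seats = sum(seats for _, seats in parties)
median_seat = total_seats // 2 + 1        # seat 51 of 100

cumulative = 0
for name, seats in parties:
    cumulative += seats
    if cumulative >= median_seat:
        print("central party:", name)     # Center: it holds seats 36 through 55
        break
```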

When the issue space cannot reasonably be assumed to be one-dimensional, cycling is likely to arise, which in the context of cabinet formation implies unstable party coalitions. Here game theoretic concepts like the uncovered set and the heart have proven useful for identifying the likely members of the coalitions that eventually form.13

A long literature beginning with Taylor and Herman (1971) has measured the length of a government’s life and related this length to various characteristics of the government. One of the regularities observed is that minority governments tend to be relatively short lived, while governments formed by a single majority party tend to be long lived.14 Research on multiparty systems is likely to be one of the future growth areas in public choice.

3.5. Experimental Economics

Experimental economics can rightfully be thought of as a separate field of economics and not just a “topic” in public choice. Two of its pioneering scholars — Vernon Smith and Charles Plott — have also been major contributors to the public choice field, however, and an important stream of the experimental literature has dealt with public choice issues. It thus constitutes an important body of empirical evidence corroborating, or in some cases undermining, certain hypotheses in public choice.

The first experimental study of the new voting mechanisms described in section 3.2.1 was by Vernon Smith (1979). He ran experiments on the Groves and Ledyard (1977) iterative version of the demand revelation process, and a somewhat simpler auction mechanism that Smith had developed. In most experiments the subjects chose a public good quantity and a set of contributions that were Pareto optimal. The experiments also served to demonstrate the feasibility of using the unanimity rule, as the participants had to vote unanimously for the final set of contributions and public good quantity for it to be implemented.

Hoffman and Spitzer (1982) devised an experiment with an externality to test the Coase theorem and found that in virtually every run of the experiment the subjects were able to reach a bargain that was Pareto optimal.

A third set of experiments might in some ways be thought of as rejecting the prediction of an important theory, but rejecting it in favor of alternatives that support the behavioral premises underlying the public choice methodology. Frohlich et al. (1987) presented students with four possible redistribution rules — Rawls’s (1971) rule of maximizing the floor, maximizing the average, maximizing the average subject to a floor constraint, and maximizing the average subject to a range constraint. The students were made familiar with the distributional impacts of the four rules and were given time to discuss the merits and demerits of each rule. In 44 experiments in which students were uncertain of their future positions in the income distribution, the five students in each experiment reached unanimous agreement in every case on which redistributive rule would be used to determine their final incomes. Not once did they choose Rawls’s rule of maximizing the floor. The most popular rule, chosen 35 out of 44 times, was to maximize the average subject to a floor constraint. Similar experiments conducted in Canada, Poland and the United States all found (1) that individuals can unanimously agree on a redistributive rule, and (2) that this rule is almost never Rawls’s maximin rule, but rather some more utilitarian rule like maximizing the mean subject to a floor (Frohlich and Oppenheimer, 1992). While these results may constitute bad news for Rawlsians, they lend support to the assumptions that underlie economic and public choice modeling. They suggest further that individuals are not concerned merely with their own welfare, but are also motivated by considerations of fairness and justice, although apparently not in the extreme form posited by Rawls.

The last set of experiments is less comforting for students of public choice. At least since the publication of Mancur Olson’s Logic of Collective Action in 1965, a basic tenet in the public choice literature has been that individuals will free ride in situations where contributions to the provision of a public good are voluntary. Countless experiments have demonstrated that they do free ride, but to a far smaller degree than one might have expected. If 100 is the contribution to the public good that produces the optimum quantity of the good for the collective, and 1 is the contribution that is individually optimal, then the typical finding in an experiment testing for free-rider behavior is that the mean contribution of the participants is around 50. Some people do free ride, but many make contributions that are far larger than is individually optimal. In aggregate the total contributions fall far short of what would be optimal for the group, but remain far above what pure free-riding behavior would produce.15
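
The payoff structure behind such experiments is easy to reproduce. A minimal sketch of the linear public goods game commonly used in this literature (parameter values are illustrative): every token contributed returns the marginal per-capita return to each player, so contributing nothing is individually optimal while full contribution is optimal for the group.

```python
n, endowment, mpcr = 4, 100, 0.4       # players, tokens, marginal per-capita return

def payoff(own, others_total):
    """Keep what you do not contribute; earn mpcr on every token contributed."""
    return endowment - own + mpcr * (own + others_total)

# Suppose the other three players contribute 50 tokens each:
for own in (0, 50, 100):
    print(own, payoff(own, others_total=150))
# 0 -> 160, 50 -> 130, 100 -> 100: free riding dominates individually, yet if
# all four contribute 100 each earns 160 versus 100 under universal free riding.
```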

Many additional types of experiments have been run that have important implications for both public choice and other branches of economics, and many more will be run in the future. Experimental economics seems destined to remain an important source of empirical evidence for testing various theories and propositions from the field.16

4. The Next Generation

At the start of the new millennium the public choice field is some fifty years old and, befitting its age, has begun to resemble other mature fields in economics. Important theoretical breakthroughs are fewer and farther between than during the field’s first 25 years. Much current research consists of extending existing theories in different directions, and of filling in the remaining empty interstices in the body of theory. Much current research also consists of empirically testing the many theoretical propositions and claims that have been made up until now. The future development of the field will almost certainly parallel that of other mature fields in economics — continually increasing use of sophisticated mathematics in theoretical modeling, and continual use of more and more sophisticated econometrics applied to larger and larger data sets when estimating these models.

Two other trends that are apparent at the start of the new millennium are worth commenting upon. Although public choice is destined to remain just one of many fields in economics, it is possible — I would dare to say likely — that it will eventually take over the entire discipline of political science, take over in the sense that all political scientists will eventually employ rational actor models when analyzing various questions in political science, and all will test their hypotheses using the same sorts of statistical procedures that economists employ. Political institutions are sufficiently different from market institutions to require important modifications in the assumptions one makes about the objectives of rational actors in politics and about the constraints under which they pursue these objectives. Nevertheless, the assumption that individuals rationally pursue specific objectives has proven to be so powerful when developing testable hypotheses about their behavior, that this methodology — the methodology of public choice — must eventually triumph in some form throughout the political science field.

With the exception of Duncan Black, all of the major contributors to the early public choice literature came from North America, and this continent can be said to have been the “home” of public choice for much of its early life. The Public Choice Society was founded there and has grown to the point where its annual meetings attract over 300 participants from around the world. There is now also a Japanese Public Choice Society and a European Public Choice Society, however, with the annual meeting of the latter often attracting well over 200 participants. Thus, the second discernible trend as the third millennium begins is the full internationalization of the discipline. Scholars can be found applying the public choice methodology to the questions pertinent to their country on every continent of the globe, and an increasing fraction of the important contributions to this literature can be expected to come from outside the North American continent.
