The concept of a “public option” entered the American lexicon in 2009, during the congressional debate over what became the Affordable Care Act (ACA). It suggested what many considered a radical idea: that government itself would offer a health-care coverage plan, thereby forcing private insurance providers to compete with a lower-cost alternative. Progressive groups rallied behind the notion that a public option could bring much-needed choice into the health-care marketplace without the political challenges of adopting a single-payer system. But opponents still lambasted the entire ACA as a “government takeover” and regarded the “public option” in particular as the epitome of socialism.Footnote 1 Although that measure failed, the political controversy that the public option concept spurred is itself deeply paradoxical: public funding of social provision, as well as government intervention to support the broader market economy, has a long history in the United States – a tradition that is far more expansive and more widely relied upon today than many Americans realize.
Consider a thought experiment. What if, as part of the ACA, policymakers had instead offered citizens a “private option,” meaning that they could decline any form of health insurance or health care that was supported by public funds? Those who object to government involvement in health care could “take the pledge,” signing a promise to refrain from using public programs like Medicare, Medicaid, or veterans’ health benefits. The pledge would also require them to refuse to benefit from government subsidies that substantially lower the costs of their employer-provided health coverage; the amount that their employers pay for their health coverage would now be treated as a taxable benefit, and their taxes would increase accordingly. For anti-government purists to be satisfied with the private option for health care, those embracing it would need to swear off care in any hospital built with the support of public funds or by any doctor whose education benefitted from federal aid; they would be committed to declining any medical treatments developed through federal grants from the National Institutes of Health. In short, in order to remain true to their principles, market devotees would need to pay far more for their health care, and they would be unlikely to find providers, facilities, or treatments that measured up to those supported by public funds.
As this example highlights, the government is already heavily implicated in health insurance and health-care markets in the United States. While a public option might represent a new form of government intervention, it is by no means the novel incursion into an otherwise free health-care market that many portray it to be. Nor is this state of affairs unique to health care. Contemporary Americans benefit from government interventions in numerous ways across many policy domains, many of which, owing to their policy designs, are not visible. As a result, Americans erroneously attribute to the market many benefits that government has a hand in providing. Even for those benefits that obviously stem from government, people often take them – and government’s role in providing them – for granted. It is not as if most of us are “self-made,” having lived our lives without the aid of publicly funded goods and services; to the contrary, government plays an immense but largely unappreciated role in the everyday lives of ordinary Americans, and this is hardly new. It is “private options,” not “public options,” that have been the exception to the rule in the United States, but the extensive role of government has often been camouflaged by policymakers – both intentionally and unintentionally – in most policies except those targeted at the poor.
The paradox of government’s expansive but frequently invisible intervention in and outright provision of a wide variety of goods and services has serious political consequences that, as we will argue, create both obstacles for those who might wish to promote public options and compelling reasons to do so. As such, while we respect the aims underlying the “public option” concept, we think it is worth turning the concept on its head by envisioning the alternative “private option” in order to expose some aspects of American politics that may bedevil reformers’ success.
First, we think that the concept, as it is typically communicated, is rooted in the same market model of social life that it aims to critique. This model is out of step with the long history of government social provision in the United States, which gained momentum particularly from efforts to protect democracy by developing good citizens and rewarding those who sacrificed on behalf of the nation. By uncritically adopting the concept of a public option as an exception to the private provision of goods across a variety of domains, proponents may generate unintended consequences, perpetuating the myth that American life has developed historically and thrives today owing to autonomous markets, without much government aid or intervention. Public options, by this logic, represent new incursions into an otherwise independent economic system. As we will demonstrate, this myth, which stems in part from a legacy of active but invisible government intervention, fuels anti-government attitudes and complicates efforts at policy reform.
Furthermore, we suggest that the public option concept may have trouble gaining sufficient political support to prevail precisely because of these attitudes. American voters, who typically underestimate the extent to which government is already engaged in creating, subsidizing, and regulating private provision, may have difficulty embracing the idea that government has a role to play in what they see as the sole purview of private markets. Policymakers, too, have electoral incentives to hide government’s role in providing goods and services in an effort to maintain the myth of limited government.
These obstacles to the successful pursuit and adoption of new public options are not to be underestimated. But they also illuminate a crucial justification for embracing more visible public provision: Beyond the economic and social good that public options could create, increasing people’s positive experiences with visible sources of government support can enhance democratic engagement and trust in government. The successful implementation of public options might cause more Americans to rethink the myth of limited government, subsequently increasing their incentives to participate actively in the political life of the country.
Our analysis begins by turning to history and discussing the development of policies in terms of their relationship to the private sector. We then explain the politics these policies generate in the contemporary period, when use of government social benefits is widespread, and yet Americans often fail to see government’s role in their lives. Finally, we offer our recommendations for policy renewal.
1.1 Government and Markets Entwined
The concept of the “public option” may inadvertently imply that the public provision of goods and services is unusual in American life, creating a misimpression both of history and of current reality. Many assume that until at least the late nineteenth century, if not the New Deal, the United States featured an autonomously functioning market, free from government intervention.
Policy analysis, informed by economics, can perpetuate this mistaken interpretation of American political economy. It takes the market as the starting point and puts forward the ideal of the perfectly functioning economy in which producers maximize profits and consumers maximize utility, promoting efficiency. This approach also acknowledges, though, that predictable “market failures” occur, for example, in the case of goods or services – such as lighthouses or military defense – that private actors are unwilling to provide because there is no way to charge beneficiaries for them, or in the creation of “externalities,” the side effects of economic activity that may generate consequences for nonparticipants, such as through carbon emissions that lead to global warming. In these carefully defined situations, when the free-functioning market does not yield the most social utility, the logic goes, government “intervention” may be justified – whether through the provision of “public goods” or through regulation to limit externalities such as those just noted.Footnote 2
This theory of public intervention overlooks the critical role that US government institutions and policies have played, from the nation’s founding to the present, in both making markets possible and facilitating their growth. State governments and courts were crucial in defining rules about property and its exchange, establishing law and order to protect private property, enforcing contracts, and adjudicating disputes. The federal government promoted the development of the economy by coining money, setting market standards, regulating commerce, and stimulating the necessary system of communication, for example, by establishing the postal system early on. Government fostered the requisite transportation for market exchange, with early development of canals followed later by the regulation of railroads in the late nineteenth century and the development of the interstate highway system and air traffic control in the twentieth century. In each of these domains, government actively helped to establish the conditions under which US markets could flourish.
Beyond these investments in market infrastructure, public intervention has long been necessary to support the labor supply central to a growing market economy. The federal government began to promote the development of public schools as well as higher education by setting aside land for that purpose in the Northwest Ordinance of 1787. It declared, “Religion, morality, and knowledge, being necessary to good government and the happiness of mankind, schools and the means of education shall forever be encouraged.”Footnote 3 Today, informed by the market approach to public policy, we think of education as serving the purposes of economic development, by creating human capital. Certainly, some early statesmen saw things similarly; Benjamin Franklin is remembered for embracing this approach. Yet, economic justifications were not policymakers’ only considerations; the more dominant rationale for the public provision of education pertained to the promotion of citizenship, enabling self-government to thrive. Thomas Jefferson promoted education on these grounds, writing in 1820, “I know no safe depositary of the ultimate powers of the society but the people themselves; and if we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion by education. This is the true corrective of abuses of constitutional power.”Footnote 4
In fact, public provision of goods and services in the United States, particularly in the realm of education and social welfare, was long justified not primarily in economic terms, but rather by its role in fostering democracy. Early social provision took the form of veterans’ benefits, provided to those who had taken on the role of citizen soldiers and put their lives in harm’s way for the sake of the nation. This tradition began by recognizing veterans of the Revolutionary War. After the Civil War, veterans’ pensions became far more generous and included benefits for veterans’ survivors as well. By the early twentieth century, as Theda Skocpol has shown, a “maternalist” welfare state provided “mothers’ pensions” to families in which no male breadwinner was present.Footnote 5 The rationale for these policies revolved around mothers’ role in raising future citizens; it was thought that this was crucial for the nation and would be compromised if mothers had to work outside of the home.Footnote 6
It should be noted that veterans’ and mothers’ pensions both operated simply as public programs with government directly offering benefits; the market did not supply a feasible alternative that would be affordable for most people. Perhaps these policies could be defined as public goods, but the rationale for them emanated not from market justifications but rather from those that prioritized the health of democracy. In the case of public support for higher education, certainly some universities and colleges already existed that were nominally “private,” such as Harvard, Yale, Princeton, and several others, though even these had been established through a combination of public and private support and initiative. Once the Northwest Ordinance was in place, states seized the opportunity to create public colleges, and their numbers grew quickly. Later in the nineteenth century, the federal government promoted the development of public colleges and universities once again, after President Abraham Lincoln signed the Morrill Land Grant College Act in 1862. The second version of this law, enacted in 1890, included states of the former Confederacy and gave rise to most of the historically black colleges.Footnote 7
Government intervention has also been leveraged in order to construct new private markets for social provision when they did not emerge “naturally,” further challenging the notion that distinct private and public options exist in American social provision. One of the most notable examples of this phenomenon occurred when government stepped in to create new consumer credit markets in response to the Great Depression. The Roosevelt administration was convinced that the economy, and especially private industry, would not recover unless the problem of underconsumption could be remedied. Thus, enhancing the purchasing power of consumers became a central component of many New Deal policies. While efforts to put money back in the pockets of American consumers took many forms, the administration was especially focused on creating economic tools that would put the construction industry, which comprised nearly one-third of those receiving government emergency relief, back to work.Footnote 8
Public officials sought to encourage the building of new homes and the renovation of old homes, but by 1933 the government estimated that as much as one-quarter of all home mortgages were in default, and even homeowners lucky enough to escape the threat of foreclosure rarely had the resources to finance renovation or new construction in such a precarious economy. The administration responded by offering several proposals designed to rescue mortgages and incentivize home buying and home renovation, both by bolstering existing private markets and by creating new private markets from whole cloth.
The Federal Home Loan Bank Act, passed in 1932, created a reserve credit system to prop up both troubled borrowers and lenders. One year later, the Home Owners’ Loan Act of 1933 established the Home Owners’ Loan Corporation (HOLC). HOLC introduced a new long-term, fixed-rate mortgage that made borrowing for homeownership more attainable for the average American. It also allowed defaulting borrowers to trade in their mortgage obligations for government bonds, both rescuing individual homeowners and stabilizing the lending market. In a more direct form of public support, HOLC provided limited funds to homeowners for the completion of necessary repairs.
An even more ambitious and enduring initiative, the National Housing Act (NHA), was adopted in 1934. The Act created the Federal Housing Administration (FHA) to offer federally backed mortgage insurance to approved lenders, authorized a national mortgage market to expand the availability of home loans, and created a home modernization loan program in which government subsidized banks to extend small lines of credit for home repair. It was the final plank of this program, established by Title I of the NHA, that used government incentives to establish a new consumer credit market where banks had previously been reluctant to lend. At the turn of the twentieth century, the administrative cost to issue a small personal loan was similar to that of a much larger loan. With state usury caps in place, most banks determined that the money they could earn from interest on a small loan was insufficient to cover those administrative costs. As a result, reputable banks largely avoided small loan lending.
New Deal policymakers were wary of embracing a direct public loan program, so they chose instead to induce private companies to make loans to homeowners for renovation. As Marriner Eccles, then assistant secretary of the Treasury, explained to Congress during hearings for the implementation of the NHA, “There is no lack of money. It seems to me, however, that it lacks velocity.”Footnote 9 Title I provided lenders with the necessary encouragement by implementing a system of government insurance on private loans for home renovation and repair for up to 20 percent of the total value of loans made by a participating lender. By 1935, about 254 million dollars in modernization loans had been issued. But perhaps the more enduring consequence of this government intervention was the new market for private small consumer loans it sparked. Through this New Deal policy, banks discovered that consumer lending could be exceedingly profitable. The next two decades witnessed the evolution of several novel forms of consumer credit, especially the credit card, that would ultimately provide a stand-in for other public social programs designed to expand consumer purchasing power.
Viewed in the context of this large and varied history of state involvement in creating, sustaining, and supplementing ostensibly private markets, the “public option” concept seems somewhat incongruous. Public roles in economic development and in social provision both have a long history in the United States. While some of these programs take the form of traditional public benefit schemes, others appear to be private in origin, masking government’s critical role in their creation and continued development. Moreover, policymakers often promoted these interventions not only, and in some cases not at all, for economic reasons, but because they served the aim of fostering democracy.
1.2 Social Policy Design and Government Visibility
From the New Deal onward, the federal government became further involved in promoting social welfare, education, and financing for American citizens, yet new policies would take a variety of forms, many of which obscured government’s role as a provider of benefits. The most lauded social policy emerging from the New Deal is what we now call simply “Social Security.” Enacted in 1935, this program created a payroll tax–funded system of old-age insurance (OAI) that is centrally administered by the federal government. Social Security involves the direct public provision of benefits, administered by the Social Security Administration. The state’s role in the provision of these benefits is, thus, highly visible to most Americans. It bears the hallmarks of what many Americans think of as government social insurance, with redistribution that aims to achieve public purposes. The Social Security Act also laid the foundation for another pillar of the twentieth-century American welfare system: A means-tested system of public assistance programs for families with dependent children (what would become AFDC and later TANF) designed to temporarily prop up the “undeserving” poor. Unlike its more generous OAI counterpart, this means-tested public program was administered by state governments.
While government’s role in early forms of means-tested public assistance was highly visible, the most commonly used social policies today feature policy designs that make the role of government less apparent. These include programs like employer-provided, government-subsidized health and retirement benefits, used by 48 and 39 percent of households, respectively, and the home mortgage interest deduction, claimed by 24 percent.Footnote 10 Such policies constitute the largest “tax expenditures,” programs that serve social purposes but that generally function by permitting people to pay less in taxes rather than to receive payments directly from government. None of these three was designed intentionally to serve the justifications that have become commonplace today, aiding middle-income Americans in attaining health coverage, retirement, and homeownership; each emerged through haphazard developments and grew in ways unforeseen by proponents.Footnote 11 Owing to their obscure design, Christopher Howard has called this constellation of policies the “hidden welfare state,” and they form the largest components of what one of us has termed the “submerged state.”Footnote 12 Jacob Hacker has shown how government’s hidden role in social provision evolved to include a mass of government regulations and subsidies applied to benefits distributed by private employers.Footnote 13 Most of these policies bestow their largest benefits on the affluent; the employer-provided benefits have grown more upwardly distributive over time, as fewer jobs – particularly those that pay less – come with benefits than was the case a few decades ago.
These policies do little to make government’s role in subsidizing them evident. Beneficiaries rarely perceive government as having aided them; instead, they tend to attribute their benefits to their own efforts and to private-sector initiatives.Footnote 14 This is true even in the case of the Earned Income Tax Credit (EITC), which has evolved into the United States’ largest form of aid to low-income people, with 19 percent of households benefitting annually in recent years.Footnote 15 Yet, 47 percent of EITC beneficiaries reported that they had never used a government social program.Footnote 16 This is striking because in the case of the EITC, many beneficiaries have no tax liability, or at least receive more through the credit than they would have owed in taxes in its absence. Nonetheless, its placement in the tax code obscures its status as redistributive aid from government.
Americans’ use of social benefits delivered as direct transfers from the federal government – Social Security, unemployment insurance, Medicare, Medicaid, the Supplemental Nutrition Assistance Program (SNAP, or “food stamps”), and other such policies – has increased over time. In recent years, 17 percent of the average person’s income came from such benefits.Footnote 17 This does not include the “hidden” or “submerged” policies; if these are included in social spending, the United States boasts the second largest welfare state in the world after France.Footnote 18 If all of these policies are accounted for, it turns out that 96 percent of American adults report that they have used at least one federal social policy, and the average person has used five. Although specific policies target different groups, overall the pervasiveness of federal social policy usage spans differences of income, age, race and ethnicity, and partisanship, and the federal government bestows social transfers at least as liberally on “red states” as on “blue states.”Footnote 19
1.3 The Political Consequences of Public Invisibility
These details of policy visibility are not simply an interesting footnote to the development of US public goods provision or market intervention. Decisions about policy design – particularly those that affect the visibility of government – carry major implications for how citizens think about public policies, their own and others’ relationships to government, and whether to take political action or to take their demands elsewhere. Each of these consequences is of critical importance for proposals to expand public options.
Once enacted, public policies that become lasting features of the political landscape have the capacity to shape people’s politics in a variety of ways.Footnote 20 These so-called policy feedback effects can take many forms, but particularly meaningful for proponents of the public option are findings about the effect of government visibility on people’s political preferences and behaviors. When people experience a public policy, they are learning lessons about the relationship between citizens and the state for a particular set of issues.Footnote 21 People’s experiences with policy implementation have been shown to influence their attitudes about government efficacy for a given issue. Especially relevant are findings that a lack of obvious interaction with government during the implementation of a policy can encourage citizens to underestimate the role government plays in that policy area.Footnote 22
This has two key consequences. First, the degree of state visibility for a particular policy can affect people’s perceptions of whether a problem requires a public or a private solution. When a policy obscures government’s role in social provision, it will encourage people to assume that private market forces are responsible for the benefits they receive. By contrast, policies that highlight government’s role will be more likely to lead people to think that government does, and should, play an active role in providing that good. As a result, government visibility can shape people’s attitudes about government intervention on a given issue.
Second, and perhaps more consequentially, these perceptions can influence whether people take political action to support public programs. As Douglas Arnold argues, the electorate must be able to link policymaking to a political actor in order to engage politically on that issue.Footnote 23 So, if government’s role in the provision or regulation of a particular social good is masked, it may diminish political participation on behalf of that issue. We can observe these dynamics at work across a number of policy domains.
Social Security provides a particularly interesting case to explore the effects of policy visibility on public engagement. Because Social Security is a highly visible instance of government spending, it should come as no surprise that beneficiaries and the broader public can connect the program to political actors. It is predictable, therefore, that beneficiaries represent some of the most politically active citizens,Footnote 24 and efforts to reduce or privatize Social Security have largely been met with outright public hostility and threats of electoral consequences.
Yet even in this instance, only 44 percent of beneficiaries, when asked if they had ever used a government social program, answered in the affirmative.Footnote 25 Granted, some people might associate the phrase “government social program” only with means-tested social benefits, and answer in the negative for that reason. Yet other analyses buttress the conclusion by showing that using more of the visible, non-means-tested benefits administered directly by government – Social Security, Medicare, unemployment insurance, veterans’ benefits, or the GI Bill – bears no discernible impact on an individual’s likelihood of agreeing that government has helped in times of need or provided opportunities to improve one’s standard of living, or that public officials care much about them.Footnote 26 This perception may flow from the fact that these policy designs involve some ambiguity: In the case of Social Security, Medicare, and unemployment benefits, Americans typically perceive themselves to have earned their benefits through their participation in the workforce, analogous to payment for private insurance. In fact, President Franklin D. Roosevelt intended the financing feature of payroll contributions to convey that beneficiaries had earned their benefits; as he put it, “We put those payroll contributions there so as to give the contributors a legal, moral, and political right to collect their pensions and their unemployment benefits. With those taxes in there, no damn politician can ever scrap my social security program.”Footnote 27
Even for means-tested social benefits, policy design can obscure the link between government assistance and the citizens who receive it, with political consequences. Recall, for example, that the EITC is delivered through the tax code rather than as a traditional cash transfer. Scholars have demonstrated that having benefitted from the EITC does not make people more likely to agree that government has helped them in times of need. In fact, receiving EITC benefits negatively correlates with the likelihood that someone agrees that government has provided opportunities to improve their standard of living. Despite the fact that policymakers intend for the EITC to achieve precisely that goal, the policy’s design – which muddies government’s role in offering assistance – seems to keep EITC recipients from acknowledging that intervention and subsequently mobilizing in support of it. Of course, feelings of government inefficacy likely also reflect that the working poor who qualify for the EITC may already feel that government has failed them, leaving them in vulnerable circumstances.Footnote 28
Another example of the consequences of policy visibility for political action plays out in the realm of financing and consumer financial protection. As previous sections described, the government has played a highly active but largely invisible role in creating and regulating consumer lending markets in the United States. The average borrower who relies on government regulations for protection from predatory lending or who uses government-backed loans to buy a new home will rarely see the hands of the state on their financial contracts. The result of this hidden intervention is that Americans increasingly view their own financial protection as an apolitical issue and are thus reluctant to turn to politics to demand policy reform – even when they have major grievances.Footnote 29
For example, a recent study found that the majority of borrowers place more blame for problems with consumer credit on financial institutions than on policymakers. This shaped borrowers’ willingness to engage in political action to pursue both specific and systemic solutions to predatory lending problems. About one-third of the borrowers surveyed had experienced at least one problem with credit in the past year. Of those who had problems, 80 percent took action to try to remedy the issue; however, nearly all who did (97 percent) turned to the market to do so, attempting to fix the problem with help from their lender, by finding a new lender, or by complaining to a trade association or engaging in a boycott. Only 13 percent took some type of political action, like complaining to a state or federal regulatory agency, and only 3 percent exclusively took political action. Borrowers were also far more willing to contact their bank than their member of Congress or a federal regulator to support policies designed to improve consumer financial protections, despite the fact that banks have few incentives to adopt such reforms.
1.4 Public Options: Political Obstacles and Opportunities
The effects of government visibility on the politics of social goods provision have significant consequences for the pursuit of public options. For constituents to support and act on behalf of these programs, they must believe that government has a role to play in specific forms of social provision. This is complicated by the two trends in policy visibility described earlier: a shift toward hidden government and a bifurcation in visibility between policies that benefit affluent versus marginalized people. With respect to the first, scholars have illuminated lawmakers’ increasing fondness since the 1970s for policies that are characterized by market logic and that channel benefits and protections through market structures.Footnote 30 Jacob Hacker dubs this trend America’s “personal responsibility crusade” and Joe Soss, Richard Fording, and Sanford Schram describe it as a broad neoliberal project “that turns citizens into prudent market actors who bear personal responsibility for their problems.”Footnote 31 The result is that, as policymakers increasingly adopt policy designs that submerge government’s role within the private market, Americans are less likely to see, to support, and subsequently to take action on behalf of public programs that expand that role. The submerged policy designs provide the illusion that Americans are “going it alone” as self-sufficient individuals who are entirely responsible for their own well-being, when, in fact, social policies embed all of us within relationships of mutual interdependence.
One notable takeaway from this observation is that proponents of public options would be well advised not to frame their proposals as “new” forms of government social provision. Suggesting that a particular public option represents a break from an existing private market, in addition to being historically inaccurate, may also reify people’s belief that policymakers do not have a role to play in that specific domain, and that any program would be an onerous expansion of government into a previously free-market system. So, while it might seem intuitive to suggest to voters that a public option is simply intended to improve market competition and efficiency, a better approach might be to contextualize public options as part of a longer tradition of government assistance, demonstrating to the public that such a program would not be a new and unwelcome incursion into the market, but instead a more beneficial form of existing government social provision.
The second major stumbling block in generating support for public options stems from the growing perception that government involvement is only necessary to support those who are socioeconomically marginalized. Public provision for the poor typically uses policy designs that make government’s role more obvious, and beneficiaries of such policies are more supportive of increased public funding for social policies generally. Meanwhile, however, middle- and upper-income Americans – despite typically using several social policies themselves – do not gain an awareness of government’s role in those policies, and they do not become more supportive of expanded social provision. Exacerbating this “government-citizen disconnect,” it is the latter group who are far more likely to take political action than the former, voicing their anti-government sentiments to lawmakers even as they themselves benefit from government social provision.Footnote 32 This bifurcation in government visibility between public interventions designed to assist those at the socioeconomic margins versus more advantaged groups offers a cautionary tale for proponents of public options designed to provide a “basic” level of assistance when the market fails to do so – for example, providing a bare-bones health insurance plan for those who cannot afford more premium options. Framing public options as, effectively, another means-tested form of government intervention may reinforce these existing attitudinal and participatory divides.
Each of these consequences complicates the prospects for public options, but together they also suggest a crucial benefit of successfully expanding clear public “alternatives” for social provision: improving perceptions of government and increasing democratic participation. When people are able to associate government with a particular issue or benefit, it can increase their willingness to engage politically on behalf of that program.Footnote 33 Relatedly, when people have positive experiences with government service provision, their trust in government and feelings of civic efficacy can increase.Footnote 34 The introduction of a public option could, therefore, help improve the relationship between citizens and the state. While the proposed health-care “public option” failed to gain approval as part of the ACA in 2010, the expansion of government health-care plans under the Act still offers an example of how this scenario might play out: Expanded government provision of health insurance has reshaped debate about government’s role in securing health care in the years since, and support has grown for a single-payer health-care system in the United States.
Of course, this outcome is dependent upon people having positive experiences with policy disbursement. Scholars have detailed the negative consequences for political efficacy that emanate from feeling poorly treated by agents of the stateFootnote 35 – effects that are more frequently incurred by marginalized communities.Footnote 36 Existing public welfare program administration exemplifies this cautionary tale. As the administrative burden increases for public benefits, people’s sense of civic efficacy and their resulting political engagement decrease.Footnote 37 Thus, poorly conceived public options may do more democratic harm than good. Ensuring smooth implementation should be a priority for any proposed reforms.
Perhaps the most valuable aspect of the “public option” concept is that it could help to spur a public conversation about the role that government already plays in the lives of American citizens. Far from being an exception to the rule, government intervention is and has long been the norm, but it is far too often unperceived and unappreciated. Policymakers should consider the impact of policy designs, not only for goals such as efficiency, but also for ends that serve democracy, such as access, inclusion, fairness, and the promotion of civic education and political participation. These latter goals each have a legacy in the United States, and the nation’s future can be strengthened by finding ways to instill them once again.