This chapter offers our first empirical analyses of media coverage of policy, across the various policy domains and news organizations. We first compare the aggregated “media signals” to actual changes in policy. Does aggregated coverage follow policy over time? Does this relationship vary across domains? Given the multiple measures developed in the previous chapter, this chapter also considers whether and how the measures matter for what we observe. This chapter centers on figures depicting the ebb and flow of policy and media coverage over time. In so doing, it offers the first large-scale comparison of policy change, and media coverage of policy change, across six domains over a forty-year period. Do patterns vary across newspapers? How about across media, particularly television coverage? Does it match what we see in newspapers? This chapter offers some critical diagnostics, assessing the degree to which media coverage has followed public policy; and relatedly, whether media coverage reliably includes the information citizens need to respond to policy change.
This chapter spells out how we believe the mass media cover public policy, particularly the outputs government produces. Although there is a considerable body of work detailing a range of biases in coverage and a lack of policy content, we posit that mass media can and do track trends in policy, at least in very salient policy areas that attract a lot of attention. Put differently, even as media can be biased and provide inaccurate information, there also can be a signal of important policy actions amidst the noise. News organizations have a professional and economic interest in providing one, at least up to a point. We are especially interested in media coverage of policy change. This is in part because we suppose that media often report on change in policy, not levels, much as research on news coverage of other areas, for example, economic conditions, has revealed. (Change also seems easier to directly measure.) The conceptualization and theory in this chapter guide both the measurement and analyses that follow.
Chapter 3 laid out the building blocks for our measures of the media policy signal and presented a preliminary version of that signal across newspapers, television, and social media content. We now turn to a series of refinements and robustness tests, critical checks on the accuracy of our media policy signal measures. We begin with some comparisons between crowdsourced codes and those produced by trained student coders. Assessing the accuracy of crowdsourced data is important for the dictionary-based measures in the preceding chapter and for the comparisons with machine-learning-based measures introduced in this chapter. We then turn to crowdsourced content analyses of the degree to which extracted content reflects past, present, or future changes in spending. Our measures likely reflect some combination of these spending changes, and understanding the balance of each will be important for analyses in subsequent chapters. Finally, we present comparisons of dictionary-based measures and those based on machine-learning, using nearly 30,000 human-coded sentences and random forest models to replicate that coding across our entire corpus.
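Comparisons between crowdsourced codes and those from trained coders rest on intercoder agreement statistics. As a minimal illustration of the kind of check involved (the statistic shown, Cohen's kappa, is a standard choice; the labels and codes below are invented, not the book's data):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders over the same items."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement under independence, from each coder's label marginals.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical direction-of-change codes for ten sentences.
crowd   = ["up", "up", "down", "none", "up", "down", "none", "up", "down", "up"]
trained = ["up", "up", "down", "none", "down", "down", "none", "up", "down", "up"]
kappa = cohens_kappa(crowd, trained)  # 1.0 would be perfect agreement
```

A kappa near 1 indicates the crowd reproduces the trained coding; values near 0 indicate agreement no better than chance.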
Does media coverage matter for the functioning of representative democracy? Do people notice news coverage? Do they take it into account? In particular, do citizens use the information that media content conveys to update their policy preferences? These questions are the central motivation for this book. In this chapter we try to provide some answers. We begin by introducing our principal measures of public preferences from the General Social Survey. We then consider a smaller, unique body of data on public perceptions of policy change, from the American National Election Studies. These data allow us some preliminary insight into whether the public notices government spending and media coverage of government spending. The remainder of the chapter then presents results of analyses of public preferences, first to establish the effects of spending on preferences, and then to assess the role of the media signal. Results document thermostatic public responsiveness, as found in previous research, and also that news coverage is a critical mediating force.
Preceding chapters have provided evidence that media coverage frequently reflects public policy, and that public preferences respond to a combination of policy and the media “policy signal.” Those results speak to some important questions about the nature and functioning of representative democracy, we believe. A good number of questions nevertheless remain. This chapter attempts to address some of what seem to us to be the most pressing issues. First, we consider the impact that trends in media consumption have on public responsiveness. Second, we consider heterogeneity in public responsiveness to the media policy signal. Third, we reconsider the causal relationships between policy, news coverage, and the public. Fourth and finally, we investigate several of the domain-specific media effects identified in Chapter 6. Media coverage of policy matters, but to varying degrees and in different ways. We offer additional analyses here to help illuminate some of these domain-level differences in information flows.
This chapter provides an introduction to the ideas and literatures that guide the analyses that follow. We consider past work on the potential role of media coverage in representative democracy and public responsiveness.
This chapter moves from theory to practice and implements a measure of media coverage. We introduce our database of news coverage. We also describe the unique “layered dictionary” approach used to identify sentences on the direction of policy change. The focus on change in policy, not levels, is critical, and we discuss this in some detail. We also compare the application of dictionary and supervised machine-learning approaches to content analyses of news content. This chapter is necessarily technical, but it also is an opportunity for us to introduce the methods to a broader audience. We plan to escort readers through the various available approaches, our implementation of them, and then an assessment of the outputs they produce. We end the chapter with some substantive findings: the overall amount of coverage of policy change in newspapers and television, and the general trends in aggregated “media signals” generated by the different approaches.
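The layered-dictionary logic described above can be sketched in a few lines: one layer identifies whether a sentence is about a policy domain, and a second layer scores the direction of change. The term lists below are invented for illustration and are far simpler than the book's actual dictionaries:

```python
import re

# Illustrative term lists (not the book's dictionaries): a domain layer
# and a direction-of-change layer, applied sentence by sentence.
DOMAIN_TERMS = {"defense": ["defense", "military", "pentagon"]}
UP_TERMS = ["increase", "boost", "expand", "raise"]
DOWN_TERMS = ["cut", "decrease", "reduce", "slash"]

def signal(sentence, domain="defense"):
    """Return +1, -1, or 0 as the sentence's direction-of-change signal."""
    words = re.findall(r"[a-z]+", sentence.lower())
    if not any(t in words for t in DOMAIN_TERMS[domain]):
        return 0  # layer 1: the sentence is not about the domain
    ups = sum(w in UP_TERMS for w in words)
    downs = sum(w in DOWN_TERMS for w in words)
    return (ups > downs) - (downs > ups)  # layer 2: net direction

sentences = [
    "Congress voted to boost defense spending next year.",
    "The bill would cut military outlays sharply.",
    "Lawmakers debated the farm bill.",
]
signals = [signal(s) for s in sentences]
net = sum(signals)  # sentence signals aggregate into a net media signal
```

Summing sentence-level signals over a period yields the aggregated "media signal" the chapter tracks; a supervised classifier would replace the hand-built term lists with learned ones.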
This chapter reviews the findings of the previous chapters and considers their implications for research on media and democracy, as well as for citizens and journalists.
Around the world, there are increasing concerns about the accuracy of media coverage. It is vital in representative democracies that citizens have access to reliable information about what is happening in government policy, so that they can form meaningful preferences and hold politicians accountable. Yet much research and conventional wisdom questions whether the necessary information is available, consumed, and understood. This study is the first large-scale empirical investigation into the frequency and reliability of media coverage in five policy domains, and it provides tools that can be exported to other areas, in the US and elsewhere. Examining decades of government spending, media coverage, and public opinion in the US, this book assesses the accuracy of media coverage, and measures its direct impact on citizens' preferences for policy. This innovative study has far-reaching implications for those studying and teaching politics as well as for reporters and citizens.
Chapter 6 examines the relationship between presidential remarks on Supreme Court cases and news coverage of those remarks. We argue that presidents make concerted efforts to influence media coverage of their perspectives to mold how the public thinks about the constitutional issues involved in the Court’s cases. We examine the ability of presidents to shape the volume of news attention to the Court’s cases, as well as the tone of newspaper coverage of the president’s remarks (using Lexicoder text analysis software) for all New York Times coverage of presidential speeches on Supreme Court decisions from 1953 to 2017. We find that presidents are capable of influencing the volume of news coverage of their discussions of Court cases, with coverage associated with the length and type of the presidential statement, the tone presidents use to describe the cases, and the timing and location of the speech, among other factors. However, presidents are generally incapable of affecting the tone of media coverage of their remarks.
Media reports on disasters may play a role in inspiring charitable giving to fund post-disaster recovery, but few analyses have attempted to explore the potential link between the intensity of media reporting and the amount of charitable donations made. The purposes of this study were to explore media coverage during the first four weeks following the 2010 earthquake in Haiti in order to assess changes in media intensity, and to link this information to data on contributions for emergency assistance to determine the impact of media upon post-disaster charitable giving.
Methods
Data on newspaper and newswire coverage of the 2010 earthquake in Haiti were gathered from the LexisNexis database, and traffic on Twitter and select Facebook sites was gathered from social media analyzers. The aggregated measure of charitable giving was gathered from the Center for Philanthropy at Indiana University. The intensity of media reporting was compared with charitable giving over time for the first month following the event, using regression modeling.
Results
Post-disaster coverage in traditional media and Twitter was characterized by a rapid rise in the first few days following the event, followed by a gradual but consistent decline over the next four weeks. Select Facebook sites provided more sustained coverage. Both traditional and new media coverage were positively correlated with donations: every 10% increase in Twitter messages relative to the peak percentage was associated with an additional US $236,540 in contributions, while each additional ABC News story was associated with an additional US $963,800 in contributions.
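The coefficients reported above come from regressing donations on media intensity. As a minimal sketch of that kind of model, here is an ordinary-least-squares fit in pure Python; the daily series below are invented for illustration and the study's actual data and controls are not reproduced:

```python
def ols_slope_intercept(x, y):
    """Fit y = a + b*x by least squares; return (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

stories   = [1, 2, 3, 5, 8]            # hypothetical daily story counts
donations = [1.2, 2.1, 3.4, 4.9, 8.1]  # hypothetical daily donations, $ millions
a, b = ols_slope_intercept(stories, donations)
# b is read as the additional donations associated with one more story per day,
# the same way the study reads dollars per additional ABC News story.
```

A positive, significant slope is what the study's finding of a positive coverage-giving correlation corresponds to in this simplified setup.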
Conclusions
While traditional and new media coverage wanes quickly after disaster-causing events, new and social media platforms may allow stories, and potentially charitable giving, to thrive for longer periods of time.
Lobb A, Mock N, Hutchinson PL. Traditional and social media coverage and charitable giving following the 2010 earthquake in Haiti. Prehosp Disaster Med. 2012; 27(4):1-6.