Editor’s note: In this plenary talk given at the 33rd Annual Meeting of the Association for Politics and the Life Sciences at the University of Wisconsin, Madison, on October 24, 2015, Professor Dietram Scheufele of the Department of Life Sciences Communication at Madison discussed his research regarding the role and impact of new information and communication technologies on public understanding of science and policy discussions around emerging technologies. Scheufele, who holds the John E. Ross Chair in Science Communication and is Vilas Distinguished Achievement Professor at Madison, is an elected fellow of the American Association for the Advancement of Science, the German National Academy of Science and Engineering, and the International Communication Association.
My comments are partly based on work that we’ve done at Wisconsin in the Department of Life Sciences Communication in the College of Agricultural and Life Sciences and that we are expanding right now as part of the Morgridge Institute for Research here in Madison. I want to touch on three different areas, which go back to Erik’s comments about the interface between politics and science becoming more and more important and more and more blurry. I will briefly discuss those developments and how they have coincided with rapid changes in our media system. These are not rapid changes that are getting us from a clear point A to a clear point B, but changes that are getting us away from the types of media use and content that we in communication research have been familiar with for a long time.
Not too long ago, we knew exactly how to measure our media effects variables and our media use variables, including attention, exposure, attitude change, and so on. Now, we’re all of a sudden in a whole new world of information gathering and sharing through social media, with new ways in which people are exposed to real and fake news. Later, I will review some empirical realities that we as communication researchers have probably paid less attention to than we should, and offer a few thoughts about what this means for us as researchers and teachers.
So, why worry about this blurring line between science and society? Think about some technologies that have emerged in the last 30 years, stem cell research being just one example. From the earliest clinical applications, embryonic stem cell research blurred the line between science and politics, with Michael J. Fox doing ads in Missouri Senate races and the exact same ads running in Wisconsin for the gubernatorial race. It clearly remains a political issue — tied to federal funding and many other ongoing policy debates.
A second example is climate change. Climate change was a highly politicized issue before Al Gore emerged as one of its most prominent spokespersons following the highly contentious and fiercely litigated 2000 U.S. presidential race, and it remains politicized to this day.
Even obesity was turned into a political issue when Michelle Obama suggested that we could do a better job providing more healthy meal options in schools. Some conservatives were not very subtle about their opposition until people like Mike Huckabee, Chris Christie, and others came to her defense, saying essentially that it was an issue we should worry about. Reference Oliphant1
A fourth example of politicized science is synthetic biology, that is, exploring the building blocks of biology and creating organisms from the bottom up. In 2010, J. Craig Venter first inserted synthetic DNA into a live bacterial cell. It’s been called “creating life in the lab.” Others referred to it as “jump-starting life.” Reference Schwille2 Similarly, genetically modified organisms (GMOs) are back on the political agenda in at least four states. Finally, there is human germline editing — the idea that we can edit the human genome in ways that can be inherited. There is no doubt that political debates will arise with respect to germline editing.
Why science is politicized
We could spend days talking about the reasons that make issues more politicized, but I want to single out two of them because I think they are particularly important. The first one is the nature of new scientific breakthroughs. From an interdisciplinary perspective, some people refer to an NBIC (nanotechnology, biotechnology, information technology, and cognitive science) revolution.
A new type of science
New areas of science emerge at the interface of more traditional fields of research, which has been a trend over the last century. What does that mean? First, it means that we didn’t learn much about the science behind these fields in school. These are highly interdisciplinary and yet highly specialized emerging areas of science. For instance, even a materials scientist working on nanomaterials might not necessarily understand what a soil scientist is working on in the area of nanoparticle toxicity.
Most importantly, science doesn’t have the answers to many of the questions that are being raised by these new technologies. Dan Sarewitz, codirector of the Consortium for Science, Policy & Outcomes at Arizona State, has written about this quite a bit. Reference Kriebel, Sarewitz, Heazle and Kane3,Reference McNie, Parris and Sarewitz4 We often talk about the ELSI (ethical, legal, and social implications) of new technologies. The ELSI label came out of the Human Genome Initiative in the Department of Energy, and it relates closely to NBIC technologies. Most of the questions about these new technologies have answers beyond science. Should we create life in the lab? Is it morally acceptable? Should we mandate labeling of GMOs? Those are inherently political questions, which require a political process to sort out their ethical and regulatory implications.
Some scholars have referred to these new research frontiers as “post-normal” science. Reference Turnpenny, Jones and Lorenzoni5 In the past, science was perceived as fairly straightforward applied science. Decision stakes were moderate, and we kind of knew what we were doing. Furthermore, we were moving sufficiently slowly to monitor the societal impact of our innovations. But fast-moving, highly complex, post-normal science has not only raised the decision stakes for policy, it has done so with increasing uncertainties regarding societal impacts.
Human gene editing that involves the use of new biologically based tools, such as CRISPR/Cas9, is a great example. If we are editing the human genome in ways that can be inherited, we are likely to encounter extremely high decision stakes. Accordingly, we will face considerable uncertainty about what the social side effects of post-normal science will be from an economic, religious, and moral perspective.
Our translators of science are disappearing
These concerns are exacerbated by the fact that print and traditional media are on life support, to put it mildly. Particularly in science journalism, that’s a real problem, because we desperately need translators, that is, people who can translate 20 years of research in a highly specialized field into 300 words (or fewer) for public consumption so that we can make sense of the science and connect it to our daily lives. Good science journalism helps clarify; it can help the broader public get to a point of feeling: This is what the science is. I don’t understand all the technical aspects, but this is how I can make sense of it. These are the policy options, and this is why certain choices make or don’t make sense.
Several ‘translators’ no longer work for traditional newspapers or, in some cases, in the news industry at all: Rick Weiss of the Washington Post, Barnaby Feder of the New York Times, Miles O’Brien of CNN, and Ken Weiss of the Los Angeles Times. Dan Vergano also left USA Today to go and write, eventually, for BuzzFeed Science. Why is this so important? Because they all wrote for elite newspapers that cater to highly educated audiences who should ideally care about science. The small and midsize papers that once had translators on their staffs went through that process a long time ago. Many people who teach science writing in our journalism schools are people who got laid off as commercial media were first held to unrealistic profit expectations and then saw their profitability slip away with the rise of online advertising.
Science’s marketing problem
So, the science has gotten more complex and faster moving, surrounded by highly salient ethical, legal, and social implications. Meanwhile, we’re losing the news infrastructure that helped all of us make sense of emerging science. These are only some of the reasons that made people like Larry Page — cofounder of Google — very concerned about our science communication infrastructure. In a talk he gave at the American Association for the Advancement of Science annual meeting in San Francisco in 2007, he said, “Look, we have a marketing problem in science. And scientists and academia, whether we like it or not, need to get more involved in media, but also in the political aspects of science.”
The response from the academic community has varied. But there are signs that the tide may be changing, and administrators in the academic community are starting to address the issues. Alan Leshner, former CEO of the American Association for the Advancement of Science, has spoken about the need to have a dialogue about highly complex, controversial science that doesn’t just push the science but that talks about the limits, the perils, the downsides, of moving ahead with certain areas of science. “Instead of simply increasing public understanding of science,” he wrote, “scientists need to have a real dialogue with members of the public, listening to their concerns, their priorities, and the questions they would like us to help answer. We also need to find ways to move science forward while adapting to their legitimate concerns.” Reference Leshner6
Now, all of that, of course, is much easier said than done. But gradually, scientists are becoming part of the discourse in a rapidly changing information and public opinion environment. If you look at data from the National Science Board’s Science and Engineering Indicators survey 7 that asks people, “Where do you turn to for information about current events, about science and tech?” and, “Where do you go for specific scientific issues?,” television still holds a slight edge as the main source for current news. The internet has been catching up as a source for science and technology news for years. And, what do we all do if we really want to know something about anything or anybody? We Google it, and that’s exactly what the data show when we’re talking about people looking for specific scientific content.
So, when people look for hard information, they search in a media environment that we as social scientists haven’t really examined all that carefully, at least, not across all contexts.
The first context that I want to touch on illustrates a positive development that came out of the transition to online information environments. When we first started talking about this new online information commonwealth (or “information superhighway”) in the 1990s, and when we first started writing these theoretical pieces about the internet, there was a lot of excitement. And there was excitement for two good reasons.
The first reason is the idea that the internet made more information available to more people, with less effort, wherever they are, than ever before. All of us can look up more on our mobile devices than we were ever able to find in most libraries. As we will discuss later, it has also become much easier to avoid all that information than ever before.
The second one is interactivity. Not only can I find this information, I can discuss it, and I can repurpose it on blogs or social media. I don’t have to rely on the Ken Weisses and Charlie Gibsons [L.A. Times print journalist and former anchor of ABC News, respectively] and the other mass communicators who tell me about the world. I can take bits and pieces of information I find, discuss them, and see if they make sense, or I can find counterinformation. So, it’s these Web 2.0 ideas that are fundamentally transforming modern information environments, potentially for the better. Some evidence suggests that this is working.
A classic example of misinformation is the idea held by some that President Barack Obama is Muslim. There is nothing wrong with a president being Muslim; Barack Obama just happens not to be. In national surveys, however, one in five or one in four Americans still think that’s true. Another persistent piece of misinformation is that Sarah Palin said she could see Russia from her house, which, of course, she never said. She gave an interview to Katie Couric, who asked what aspect of her foreign policy expertise prepared her for the [vice] presidency. She responded by arguing that being governor of Alaska, with its proximity to Russia, gave her plenty of foreign policy expertise. One can quibble with the merits of that argument, but one thing is clear: Sarah Palin didn’t originate the phrase “I can see Russia from my house.” It originated on the late-night comedy show Saturday Night Live, in a skit by Tina Fey.
Four years later, when Fox News announced that Sarah Palin was considering running again, they used a picture of Tina Fey by accident. At that point, Palin was on the Fox News payroll as a commentator. Which just tells you how deeply rooted in popular culture Tina Fey’s portrayal is.
So, in our study, we gave respondents different options on Obama’s religion and different options, including Katie Couric, Tina Fey, and others, for who might have said “I can see Russia from my house.” Here are the numbers: one in five Americans think Obama is a Muslim. Seven in 10 Americans think that Sarah Palin said, “I can see Russia from my house.” Seven in 10 hold something as true that came from Saturday Night Live.
So where do these perceptions come from? This can be seen in a multivariable model in which the probability of being wrong about Sarah Palin’s (or rather Tina Fey’s) statement is plotted on the Y-axis against the media sources used. The higher the percentage, the more likely respondents are to be wrong. Controlling for demographics, ideology, and other types of media use, the more you rely on traditional media, the more likely you are to be wrong. Late-night comedy produces the same pattern. So, we didn’t find a corrective effect from comedy, which is not super surprising given that these shows tend to offer caricatures of candidates.
The interesting thing, though, is this: the more you use online sources, the more likely you are to get the answer right, or the less likely you are to be wrong. That’s partly because this information is easily available online. You can see the primary source or the original clips of Tina Fey and Sarah Palin. And you can talk to other people who may point out to you that you’re reposting a hoax or fake news.
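To make the shape of such a model concrete, here is a minimal sketch, in Python on simulated data, of the kind of multivariable logistic regression described above: the probability of a wrong attribution regressed on media use with demographic and ideological controls. All variable names, coefficients, and data are hypothetical illustrations, not the actual survey measures or estimates.

```python
# Hypothetical illustration of the multivariable model described in the talk;
# the data are simulated, not the actual survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "traditional_media": rng.integers(0, 8, n),  # days/week of TV or newspaper news
    "online_media": rng.integers(0, 8, n),       # days/week of online news
    "late_night": rng.integers(0, 8, n),         # days/week of late-night comedy
    "age": rng.integers(18, 80, n),
    "ideology": rng.integers(1, 8, n),           # 1 = very liberal, 7 = very conservative
})

# Simulate the pattern reported in the talk: traditional media and late-night
# comedy increase the odds of a wrong attribution; online use decreases them.
logit_p = (-0.2 + 0.25 * df.traditional_media + 0.20 * df.late_night
           - 0.30 * df.online_media + 0.01 * (df.age - 45))
df["wrong_attribution"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression with controls, analogous to plotting predicted
# probabilities of being wrong against each media-use variable.
model = smf.logit(
    "wrong_attribution ~ traditional_media + online_media + late_night"
    " + age + ideology",
    data=df,
).fit()
print(model.summary())
```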
Contextualized news
I want to talk about two downsides, or potential downsides, which we have spent a lot of time looking at and that I think are really important from a policy or political science angle. The first one is the idea of contextualized news. And by contextualized news, I mean not just second-screen viewing. I mean that we no longer consume news in isolation. All of us used to read newspapers at the breakfast table by ourselves, maybe drinking coffee with a spouse or partner or family member. But nobody was telling us what to think about an article; the article stood by itself.
Now, every article we read and every show we watch is contextualized with tweets, Facebook likes, or reader comments. For every article we see, we know how often it’s been retweeted, or how often it’s been liked. We are surrounded by what, for all intents and purposes, are social cues about how popular a news article is, about how relevant it is, about who likes it, and so on and so forth.
News is no longer consumed in an isolated fashion; it is now contextualized, and that is highly relevant. Here’s a study with Dominique Brossard, who was PI on this grant. In a hybrid survey experiment using nationally representative respondents, we showed everybody the exact same blog post about a type of nanomaterial — nanosilver. Reference Brossard and Scheufele8 This material has antimicrobial and antibacterial qualities that might have both risks and benefits for humans and the environment.
As the study unfolded, everybody read the exact same blog post. But we put people in different conditions that exposed them to reader comments that were manipulated to be more or less civil in nature. The comments did not differ in their content or the types of arguments they offered. In the civil condition, people would read, “I disagree with you. Nanosilvers are not at all dangerous. They have no downsides.” In contrast, comments in the uncivil condition would sound something like, “What kind of idiot are you? Of course, nanosilver does XYZ.” There were a bunch of swear words and ad hominem attacks in there. In short, comments in one condition were uncivil and in the other civil, even though the arguments were exactly the same in terms of content.
And here’s what we found, and replicated, based on additional nationally representative data collections: people who saw more uncivil comments also saw more bias in the article. Let that sink in for a minute and think about this from a journalistic perspective. Journalists write a perfectly neutral story or a perfectly balanced story. But now people can engage in Web 2.0 discourse afterward. It doesn’t even matter what they say, because we held that constant. It’s how they say it. They start yelling at each other, and journalists are paying the price since their readers now see the story as potentially biased simply because it’s surrounded by uncivil online discussions. We label this the “nasty effect.” Reference Anderson, Brossard, Scheufele, Xenos and Ladwig9
Secondly, and this is more troubling from a scientific perspective, with exposure to incivility, people ended up being more polarized or entrenched in their views regarding the risks of nanosilver. If they liked nano in the first place and they encountered all the yelling and screaming, they became more positive in their view toward nanotechnology. But, if they were worried about nanotechnology initially, they became more negative. So, people ended up being more polarized on the science. Moreover, views of the science, which was described in exactly the same way to every person in the study, ended up being influenced.
To illustrate the point we were trying to make with this study, here is some of the media coverage of our work, which, ironically, attracted plenty of reader comments. It’s almost too postmodern, or meta, to be true. This was the coverage in the print version of the Milwaukee Journal Sentinel, which was one of the first news organizations to pick up this story. In the online version, you see a slight tweak in the headline. Reference Nichols10 But other than that, everything is exactly the same — except for two things: a couple of hours into publication, the story had generated 171 reader comments and over a thousand people had liked this on Facebook. The story, in other words, was contextualized almost immediately.
Now put yourself in the reader’s shoes. I’m reading the printed story versus the online story. I already get the impression that the online version is interesting. Apparently, it’s interesting enough for 171 people to immediately post comments about it. Let me show you some of these comments here because they illustrate the kind of discourse that can exist around science. And again, I’d like to make sure you know that Dominique was first author on this.
∙ “Because these are pointy-headed scientists, 99 percent of whom are socialists and communists.”
∙ “The University of Wisconsin professors bewildered by science opinion being questioned rather than being accepted as told.”
And then, of course, my personal favorite:
∙ “Anyone who aspires to this conclusion hates God and country.”
The pinnacle of my academic career was when the local website A.V. Club Milwaukee, which is affiliated with The Onion, actually picked up on this research. And, they added one of the greatest headlines ever: “Awful online comments hurt understanding of news reports, reports local news site filled with awful online comments.” Reference Fanlund11 They also had one of the best summaries of our social science, by the way. Of all the news outlets that covered our research, an online satire site had the most accurate overview of the social science.
In the wake of this publicity, Popular Science made a highly visible decision to yank their comments section with reference to this study. Reference LaBarre12 We can talk about the merits of implementing a new policy like this based on a single study, but they agonized long and hard over it. First, they worried that they would need to moderate comments 24/7 to get comment moderation even halfway right. Second, they worried about how comments can undermine quality journalism. Simply by letting readers vent on your website, you might undermine the very value of what you’re trying to sell.
Immediately, people started yelling, “this is restricting free speech,” which I personally think makes very little sense given how many forums there are for exercising your free speech and posting articles. Nonetheless, that kicked off yet another debate in the New York Times about whether closing comment sections violates the principle of free speech and whether news organizations have a responsibility to provide a forum for those debates. Again, think about the transition in online news environments. If you look at the Popular Science letters to the editor section in the 1970s, there were a handful of letters per issue. They were edited, shortened, and selected by the editors. There was no comments section and no hint that readers had a right to broadly debate their views about the science.
What happened next was that the Sun-Times in Chicago killed off their comments section. And then Reuters killed their comments sections, and CNN phased out their comments sections. And then Reddit went up live with their new news site and had no place for comments. Of all the places in the world you would expect to see comments, I mean, Reddit is, basically, a commenting and posting platform. Reference Gross13 This is what some of the unintended consequences of contextualized news look like.
Augmented selectivity
The second example, which is politically even more important, is the idea of what I would call augmented selectivity. I’ll come back to the label in a second. The premise is that we’re not talking about mass media anymore. We’re talking about targeted media. Of course, we have all read the work of Markus Prior, Reference Prior14 who has talked about this. I think the most succinct summary still comes from a talk that Rachel Maddow gave at Harvard in 2010, where she talked about highly ideologically focused media. Reference Maddow15 Whether that’s Fox News on the right or MSNBC on the left, it’s going to provide you with information you already believe in, information that fits your prior beliefs and predispositions. And Maddow talked about ideologically charged niche news making all the money traditional news is losing. “Business plan? No. We struck gold. Every time I deliver anti-conservative news to an MSNBC news audience, my ratings go up.” 16
So, we know that this strategy is likely to work, at least since the rise of cable news. We have, of course, perfected it online. We’ve perfected it with algorithms-as-editors — the idea that news choices about where to place stories and what kind of headlines to use are made in real time by looking at click rates, forwards, and other user metrics. BuzzFeed pioneered the idea of A/B testing stories with different headlines in real time to see how they fare and then zeroing in on the one that creates the most online buzz. Since then, newspapers and everyone else have followed suit. Facebook news feeds are the perfect example of the idea of narrowcasting and curating an information stream that fits your beliefs.
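The real-time headline testing described above can be sketched in a few lines. The following is a toy epsilon-greedy version of such a test; the headlines, click-through rates, and parameters are all hypothetical, and actual newsroom systems are far more elaborate.

```python
# Toy simulation of real-time headline A/B testing: mostly show the
# best-performing headline so far, but keep exploring alternatives.
import random

TRUE_CTR = {  # hypothetical underlying click-through rates
    "Scientists probe nanosilver risks": 0.03,
    "Is the silver in your socks safe?": 0.08,
}
EPSILON = 0.1  # fraction of page views reserved for exploration

shows = {h: 0 for h in TRUE_CTR}
clicks = {h: 0 for h in TRUE_CTR}

def pick_headline():
    """Epsilon-greedy choice: usually exploit the current winner."""
    if random.random() < EPSILON or not any(shows.values()):
        return random.choice(list(TRUE_CTR))
    return max(TRUE_CTR, key=lambda h: clicks[h] / max(shows[h], 1))

for _ in range(10_000):  # simulate 10,000 page views
    h = pick_headline()
    shows[h] += 1
    if random.random() < TRUE_CTR[h]:  # simulated reader click
        clicks[h] += 1

for h in TRUE_CTR:
    print(f"{h!r}: shown {shows[h]} times, observed CTR {clicks[h] / shows[h]:.3f}")
```

In the simulation, the catchier headline quickly absorbs most of the traffic, which is exactly the commercial logic described here.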
Commercially, there’s no reason not to do this. Providers want to keep users on their platform. This is not a democratic rationale, it’s a commercial strategy. If the idea of soap operas is to keep you there to expose you to advertising, the idea of Facebook is to keep you there as long as possible to collect as much data from you as possible. That’s obviously what ends up producing money for Facebook.
We all live in homophilic networks, i.e., we surround ourselves with people who are like us. We live in residential areas where people are like us. We surround ourselves politically with people who are ideologically like us. So we have that tendency in the first place.
Coming back to the idea of augmented selectivity, a topic on which Matthew Nisbet and I have written. Reference Scheufele and Nisbet17 There are two levels to distinguish. On the one hand, we have the media problem or the media level. We have news that is highly polarized, highly tailored, to niche audiences in order to maximize profit or to be successful in the marketplace. Then, of course, we have the audience level. And the two levels overlap through selective exposure and attention that is enabled by increasingly sophisticated online platforms.
What I mean by augmented selectivity in the context of these comments is how our individual tendencies and homophilic networks are now interconnected by a new narrowcast media environment. We did a piece on Google Search rankings, Reference Liang, Anderson, Scheufele, Brossard and Xenos18 for instance, using Nielsen data and search data. It found that, over time, recommendations from search engines drive traffic, traffic drives search rankings, and rankings in turn drive Google’s recommendations. If you type “nanotechnology” into Google, for example, there will be a whole bunch of recommendations on what you could look at. Those recommendations are determined and ranked based on a host of factors, including your geography and whatever else, but also by the amount of traffic that a particular site has generated. Over time, as people pick these recommendations, particular topics get pushed even higher in the search rankings, which means they get reinforced. Ultimately, what you get from Google is the most popular piece of information, not necessarily the best piece of information. This shows how choices made by a media or information organization can interact with individual behavior to narrow choices.
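The reinforcement dynamic described here is essentially a rich-get-richer process, and a toy simulation (my illustration, not the study’s method) shows how quickly it concentrates attention even when all options start out equal.

```python
# Toy rich-get-richer simulation of traffic-driven rankings: results that
# get clicked rise in the rankings and therefore attract more clicks.
import random

clicks = [1] * 10  # ten hypothetical search results, initially identical

for _ in range(100_000):  # simulated searches
    # The chance of being clicked is proportional to past clicks,
    # mimicking popularity-based ranking.
    winner = random.choices(range(len(clicks)), weights=clicks)[0]
    clicks[winner] += 1

print(sorted(clicks, reverse=True))
# Typically, one or two results end up with most of the traffic even
# though all ten started out with identical "quality."
```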
That’s what I mean by augmented selectivity. When I used to read a newspaper, I had to go through it to pick out what I didn’t want to read. Now, I can use Google News, Flipboard, Feedly, or whatever news aggregators are out there to filter out stuff I never want to see. As a result, it’s really easy for me to ignore all the sources that I don’t want to look at. They will never show up in front of my eyes. Or, they will appear lower in search result rankings because I’ve made certain choices and because I’ve clicked on certain things in Facebook or liked particular posts. As a result, I’m basically only being offered news and information that fits my priors.
To test this a bit more systematically, we created another fake blog post and showed respondents another story about nanosilver. This time we focused on food, and we asked respondents to first find out more information before asking them a series of questions. This approach is similar to what Bennett and Iyengar Reference Bennett and Iyengar19 have done: giving people a randomized set of information and mixing it with ideological cues, in this case from Fox News, MSNBC, and the Canadian Broadcasting Corporation.
What happened was basically a straight ideological sorting effect. What news source you pick first — MSNBC or Fox News — depends completely on your ideology. The results are as clean as they are depressing. It doesn’t matter that this is a scientific issue that’s not politically charged and shouldn’t produce such partisan effects. What respondents did was to click on Fox if they were conservative and MSNBC if they were liberal. That illustrates two processes: selective exposure, à la Festinger, Reference Festinger20 and motivated reasoning.
As a German, I’m always fascinated by how many people you compare to Hitler in this country. President Bush has been Hitler, President Obama has been Hitler. It depends on how you look at the world, apparently. That’s classic motivated reasoning, that is, confirmation-disconfirmation biases. People often confuse that with selecting some information and ignoring the rest. But it’s not selective exposure. It’s about me putting 10 pieces of information in front of you and forcing you to look at all ten. Motivated reasoning means that, even when we are exposed to a complete set of facts, we weigh more heavily the pieces that fit what we already believe and weigh less heavily, or actively discount, the pieces we disagree with. We assimilate information into our priors so as not to have our belief systems challenged.
For us as political scientists or communication researchers, that’s not a very surprising statement. But think about this in a scientific context. The same scientific information will mean different things to different people. There’s no single scientific fact that we can put out there that says, “Look, this is what the science says. You need to believe in climate change.” Instead, everything will be filtered in some way, shape, or form.
We had a recent experience with this in Wisconsin when Governor Scott Walker proposed a number of cuts to the University of Wisconsin (UW) system budget. Part of that was to cut some subsidies that had also gone toward bioenergy research here at the university. There were a lot of very surprised voices at UW who said, “Why would anybody support that kind of cut? It hurts the state economically!”
Well, we happen to have survey data from Wisconsin that looked at how people thought about this issue. Again, these graphs control for demographics and other potential confounding variables. Plotting information intake dichotomously (low versus high) on the X-axis against perceived impacts on the economy on the Y-axis, we asked whether people see a net positive or a net negative impact of biofuels on the economy.
Plotting the interaction of media use and ideology, we found that as Democrats take in more information from television, they become more excited about biofuels and their economic potential. Even for information intake from newspapers, which allow for a bit more selectivity, the effect is still positive.
But we found that for Republicans it’s exactly the opposite: high information intake leads to less support. This has real implications for how we as universities think about how to best interact with state legislatures. It’s not enough to just say, “Well, I don’t understand why anybody would support Governor Walker in his attempt to cut biofuels research.” In fact, this partisan dynamic provides a rationale: the arguments that you are making around biofuels in a fairly finite news environment, like Wisconsin, will produce different outcomes depending on audience members’ priors. This analysis conflates selective exposure and motivated reasoning, but there are other analyses that disentangle the two and show the exact same effects.
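Statistically, the crossover pattern just described corresponds to an interaction between media use and partisanship. Here is a minimal sketch, on simulated data, of such an interaction model; the variable names and coefficients are hypothetical, not the Wisconsin survey measures.

```python
# Hypothetical illustration of a media-use-by-party interaction predicting
# perceived economic benefits of biofuels; data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 800
df = pd.DataFrame({
    "tv_news": rng.integers(0, 8, n),    # days/week of TV news intake
    "republican": rng.integers(0, 2, n), # 1 = Republican, 0 = Democrat
})

# Simulate the reported crossover: more intake pushes Democrats toward
# seeing economic benefits and Republicans away from them.
df["perceived_benefit"] = (
    3.0
    + 0.25 * df.tv_news * (1 - df.republican)
    - 0.25 * df.tv_news * df.republican
    + rng.normal(0, 1, n)
)

# The tv_news:republican interaction term captures the opposite slopes.
model = smf.ols("perceived_benefit ~ tv_news * republican", data=df).fit()
print(model.params)
```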
Where to go from here
So, are there solutions to this? Here is one suggestion from the MIT Technology Review. 21 Some Italian computer scientists tried to program an anti-polarization algorithm. If everything you do points you to Democratic news sites, you will get every Republican news item under the sun, and vice versa. Sounds like a great idea democratically, right? We’ve done lots of research on why information heterogeneity will produce positive democratic outcomes, ranging from political literacy to civic engagement.
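In spirit, such an anti-polarization recommender simply inverts the usual logic: infer a user’s lean from their reading history and serve the other side. Here is a minimal sketch of that idea; the outlets, labels, and function are hypothetical placeholders, not the Italian team’s actual system.

```python
# Toy "anti-algorithm": recommend items from whichever ideological side
# the user reads least, instead of reinforcing their existing diet.
from collections import Counter

OUTLET_LEAN = {  # hypothetical lean labels
    "msnbc.com": "left", "dailykos.com": "left",
    "foxnews.com": "right", "breitbart.com": "right",
}

def counter_recommendations(history, catalog):
    """Infer the dominant lean in a reading history and recommend the opposite."""
    leans = Counter(OUTLET_LEAN[s] for s in history if s in OUTLET_LEAN)
    dominant = leans.most_common(1)[0][0] if leans else None
    opposite = {"left": "right", "right": "left"}.get(dominant)
    if opposite is None:
        return list(catalog)  # no clear lean: recommend everything
    return [s for s in catalog if OUTLET_LEAN.get(s) == opposite]

history = ["msnbc.com", "dailykos.com", "msnbc.com"]
print(counter_recommendations(history, list(OUTLET_LEAN)))
# -> ['foxnews.com', 'breitbart.com']
```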
The tricky part, of course, is that while this seems like a great idea theoretically, this is not viable commercially. Why would any news organization use an algorithm like this? Why would Mark Zuckerberg drive users away from Facebook by feeding them news they disagree with? It makes no sense. Similarly, if I represent Google, why would I offer you search results that rub you the wrong way? I’m having a hard enough time keeping you on Google Plus, so why would I incentivize you to look for information somewhere else?
Even if information providers were successful in keeping audiences attentive to belief-inconsistent news environments, that is, if you’re a strong liberal and I put Fox News in front of you, or you’re a strong conservative and I put Rachel Maddow in front of you, you will still experience all the confirmation-disconfirmation biases discussed earlier.
This is not a problem of an uninformed electorate. In fact, we know that sophisticated individuals engage in motivated reasoning even more than others. There may be some mechanisms that counter this problem, however. Two of them are particularly interesting: social pressure or social accountability, and cognitive tuning.
If I know I will routinely be exposed to views different from my own in my social circles, I will feel more social pressure to look at information from all sides, not just from my own perspective. The reason is that I do not want to seem uninformed or unprepared for counterarguments. As a result, I will also think through information more carefully in preparation for potential conversations with non-likeminded others, something that has been described as cognitive tuning.
We have some data that weren’t collected to test this directly, but they end up illustrating it really nicely. Mike Xenos was first author on this study. Reference Xenos, Becker and Anderson22 We tested student reactions to nanotechnology in a lab setting and essentially said to them: “After this experiment is done, we’re going to put you in a discussion situation.” They were then randomly assigned to one of three experimental groups or a control. They were told that they would (1) talk to others, without any additional information about who those others would be; (2) talk to others who had been assigned to the same group because they had opposing viewpoints; (3) talk to others who had similar viewpoints; or (4) they were assigned to a control group. In reality, they didn’t discuss anything with anyone.
We gave them a gated news environment so they could go online for more information in preparation for their hypothetical conversation. Those articles were grouped into three areas: general news, science, and medicine, followed by op-ed pieces that showed pro and con arguments for each topic. The last group of articles provided the two-sided information you would need if you were in a social accountability situation where you must defend your viewpoint to others. In other words, it provided participants with all 10 pieces of information in the example I mentioned earlier, not just a smaller subset of confirmatory tidbits that you typically would weigh most heavily because they fit your priors.
In the results, we see that the numbers are higher in all three talk conditions than in the no-talk control condition. The clicks for the op-ed pieces were highest in the opposing-others condition. In other words, if study participants expected they would be talking to non-likeminded others, they were most driven, at least relative to the other groups, toward seeking out two-sided information that would provide them with the side of the argument that they otherwise wouldn’t attend to. Again, cognitive tuning and social accountability are the most likely mechanisms explaining this.
A lot of the work that we’ve been doing here at Wisconsin over the years has dealt with the impact of heterogeneous discussion networks. Reference Scheufele, Hardy, Brossard, Waismel-Manor and Nisbet23 How often do you talk to people who are just fundamentally different from you, both demographically and ideologically? If you look at the impact of heterogeneity, it is positive — and not just on participation but also information seeking and other democratically desirable outcomes.
Ultimately, disagreement is something that is very good for us. And it highlights why the problem of augmented selectivity in media “filter bubbles” of our own making is not just a theoretical issue but is really one that ends up having fundamental consequences for democracy.
Concluding thoughts
Let me end with a few quick concluding thoughts. I think there are new citizen competencies, not just for rarefied notions of “citizens,” but also for the electorate more broadly. Like it or not, the idea that there are quality sources that I can go to and that will basically provide a mainstream summary of what we as a society think is in serious trouble. I am not sure if a golden era of media quality has ever existed, or if it has existed on different levels at different times in this country. But it certainly doesn’t exist now. Citizens are instead faced with the challenge of sorting through and judging which pieces of information are relevant to them and which pieces of information they’re using because of their internal preferences.
Second, in the context of the “nasty effect,” is the idea that we have a hard time debating issues with one another in a civil fashion. Reference Brossard and Scheufele24,Reference Revkin25 We have a hard time having civil debates, especially in online environments that lack a lot of the nonverbal cues and social norms that we’ve established over time for face-to-face talk. De Tocqueville and others wrote about talk being the soul of democracy (and then turned around and spent pages and pages being pessimistic about that idea). Nonetheless, that whole idea has always been fundamental to this democracy. But in online environments, we need new competencies to maintain our “soul” as a nation.
Finally, we need to realize that effective science communication might not always involve being right about the facts. Climate change is a great example. Communication about climate change and the policy options surrounding it is not about being right — it’s about finding a suitable compromise across the political aisle. In the end, it doesn’t matter to many citizens if climate change is real or not, as long as mitigation or regulatory policies make sense for reasons that are palatable across different groups. Finding values that unite us, therefore, and communicating around those values is crucially important. And that is a valuable lesson for those of us in academia as well.
In 2009, Senator Tom Coburn introduced a legislative amendment to defund political science research by the National Science Foundation. Much of the debate that followed was summarized in a subsequent New York Times op-ed, which noted that “even some of the most vehement critics of the Coburn proposal acknowledge that political scientists themselves vigorously debate the field’s direction, what sort of questions it pursues, even how useful the research is.” Reference Cohen26 I think it highlights some of the debates within political science. Cornell’s Peter Katzenstein said something along the following lines: look, we as political scientists need to communicate the value of our discipline better — and we may have to tackle the big sloppy questions of our time. The ones that don’t have clear answers. We should make contributions to them. We have gotten too used to slicing off small manageable problems with limited societal impact. Reference Gourevitch, Keohane, Krasner, Laitin, Pempel, Streeck and Tarrow27
I think this is more of a midwestern and “lake water” university problem than it is a “salt water” or coastal university problem, but we have sliced off really narrow pieces that we can answer really, really well down to the fifth decimal place. But the value of that kind of research to society is much more difficult to demonstrate. And by us not making contributions to the big, difficult questions of our time and by not having a problem-focus to our work, we have undersold the value of social science to some parts of the electorate.
So, as academics we need to provide empirical answers to questions that matter. A great study just came out in the American Sociological Review Reference Foster, Rzhetsky and Evans28 that showed how difficult it is to publish a piece of research that’s really groundbreaking versus one that’s incremental and just barely peeks over the shoulders of giants. 29 What that illustrates is that we’re very often too afraid of the big statement because it may not be publishable, or it’s safer to just leave out the high-risk piece of research.
Meanwhile, we’re not answering the big questions as media systems are changing all around us. We just haven’t done that as a discipline or as the social sciences in general, even though society depends on those findings to guide us. The nasty effect is such a great example of that. When we first published our findings, the question that every editor asked was, “so what’s the solution?” And our answer at the time was: “well, we haven’t researched that yet.” They also asked, “what is the other literature out there?” And there is no other literature, or there is very little other literature. So, it’s just one example that illustrates both the problem and the solution at the same time.
My last point — and this is a really personal pet peeve of mine — is related to teaching. When Google bought the A.I. company DeepMind a few years back, they immediately created an ethics board for it. The reason they did that is not because they knew something horrible would happen or they would do unethical things. Instead they said, “No, the reason we’re doing this is because we don’t know what’s going to happen.”
In other words, as educators, we’re charged with preparing students for jobs that don’t yet exist, for marketplaces that will shift fundamentally for many years to come. That is why there is so much urgency to have problem-focused approaches to our research and to our teaching. It’s absolutely fundamental. If we build educational infrastructures around narrowly defined disciplinary problems, degrees and disciplines will end up being irrelevant. If that’s the case, Senator Tom Coburn and Representative Lamar Smith, and everybody who retweets and repeats their arguments, will have a better argument than they should have.