Fear and Loathing: ChatGPT in the Political Science Classroom

Published online by Cambridge University Press: 11 April 2024

Phillip J. Ardoin, Appalachian State University, USA
William D. Hicks, Appalachian State University, USA

Abstract

ChatGPT has captured the attention of the academic world with its remarkable ability to write, summarize, and even pass rigorous exams. This article summarizes the primary concerns that political science faculty have about ChatGPT and similar AI software with regard to academia. We discuss results of a national survey of political scientists that we conducted in March 2023 to assess faculty attitudes toward ChatGPT and their strategies for effectively engaging with it in the classroom. We present several assignment ideas that limit the potential for cheating with ChatGPT—a primary concern of faculty—and describe ways to incorporate ChatGPT into faculty teaching. Several suggestions for syllabi that address political science students’ use of ChatGPT also are provided.

Type: Article

Creative Commons License: CC BY 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of American Political Science Association

In November 2022, OpenAI released to the public ChatGPT-3.5, which uses machine-learning techniques to generate human-like text in response to user prompts. Its popularity has been astounding, reaching more than 100 million users within two months and averaging approximately 13 million daily visits (Hu 2023). Drawing on an extensive knowledge base, ChatGPT can answer a broad range of questions; summarize complex information; and write essays, computer code, and even poetry. As examples of its impressive knowledge base and aptitude, ChatGPT passed the US Medical Licensing Exam and the final MBA exam for the University of Pennsylvania’s Wharton School, and it published a law review article (Kelley 2023). The ability of ChatGPT to write, summarize, and even pass demanding exams has garnered substantial attention within academia. Moreover, it has dominated the headlines of leading higher-education publications such as the Chronicle of Higher Education, which published more than 130 articles related to ChatGPT in the nine months following its November 2022 release.

Since the release of ChatGPT, faculty throughout academia have expressed concerns regarding the integration of ChatGPT in the classroom. A key concern is the potential for plagiarism and academic dishonesty because the AI’s human-like text-generation capabilities may encourage students to submit AI-generated work as their own (Marche 2022; Perkins 2023).¹ This issue is compounded by the difficulty of distinguishing between AI- and human-generated content (Abd-Elaal, Gamage, and Mills 2022; Clark et al. 2021; Cotton, Cotton, and Shipway 2023; Fazackerley 2023; Kumar, Mindzak, and Racz 2022; Wahle et al. 2022).

In addition, there is apprehension that an over-reliance on ChatGPT could diminish students’ critical-thinking skills, impairing their ability to evaluate sources and engage in independent learning (Dans 2023; N. Hughes 2023; Kitazawa 2023).² Furthermore, the possibility of bias in AI-generated content and the propagation of misinformation presents concerns about perpetuating stereotypes and undermining inclusive learning environments (Bolukbasi et al. 2016; Borji 2023; Petkauskas 2023). Considering the critical and sensitive issues that political science courses regularly address, issues of bias are of particular concern.

Data privacy and student-information protection also emerge as concerns because ChatGPT’s data-input requirements raise questions about the security of student data and the potential misuse of this information (O’Shea 2023; Ryan 2023). Overall, these concerns highlight the need for a cautious approach to incorporating ChatGPT into college classrooms.

As an example of ChatGPT’s fluency and knowledge, the three previous paragraphs summarizing concerns with ChatGPT were produced by ChatGPT-4 with only minor edits (figure 1 shows the prompt and the original output). Although the original ChatGPT output is well written and provides an accurate summary of faculty concerns, only two of the six references in the original output were valid. Two of the referenced articles and authors were completely fabricated by ChatGPT, and two other references incorrectly identified the authors or journal.³

Figure 1 Example of ChatGPT Output of Concerns with ChatGPT

As exhibited in the ChatGPT output above, another significant concern with ChatGPT is its propensity to provide misinformation, or “hallucinations,” as AI software experts call them. This stems from the process by which chatbots and other AI are trained. Whereas traditional software applications are programmed carefully one line of code at a time, AI software learns independently by evaluating patterns in enormous amounts of data, including sites such as Reddit that contain substantial false information, hate speech, and bias.⁴ Although ChatGPT uses filters to limit offensive output, there is no large-scale editorial review of the data training ChatGPT or of its output. For instance, research by Hartmann, Schwenzow, and Witte (2023) and Baum and Villasenor (2023) provides systematic evidence of ChatGPT’s bias toward pro-environmental and ideologically liberal output.

A final concern, neither clearly addressed in the literature nor identified by any of our faculty survey respondents, is the use of ChatGPT to rewrite or summarize paragraphs or ideas copied directly from primary sources. Identifying traditionally plagiarized text is relatively straightforward: instructors compare a student’s writing to the suspected original source and, if it has not been quoted or referenced properly, there is an issue of plagiarism. With ChatGPT, however, students easily can submit primary source material as their own after asking ChatGPT to rewrite it. Figure 2 is an example of this using a portion of Martin Luther King Jr.’s famous “I Have a Dream” speech. The text output provided by ChatGPT was reviewed with Turnitin software, which did not identify the “I Have a Dream” speech as a potential source of the text and did not recognize the text as ChatGPT output.

Figure 2 Example of Use of ChatGPT to Rewrite Primary Texts

Considering the significant concerns regarding the impact of ChatGPT on the future of higher education and the teaching of political science, our study surveyed political scientists’ attitudes toward ChatGPT and their plans for responding to and engaging with it in the classroom. Steps to limit potential cheating with ChatGPT and potential assignments for incorporating ChatGPT into faculty teaching also are discussed. Finally, several suggestions for syllabi statements addressing students’ use of ChatGPT are provided.

WHAT DO POLITICAL SCIENTISTS THINK?

To provide a better understanding of political scientists’ attitudes toward ChatGPT and their plans for addressing it in the classroom, we developed and distributed a brief survey in March 2023 to a sample population of political science faculty in the United States (see online appendix A for the survey instrument).⁵ The survey initially was sent via email to 100 political science faculty members, requesting each of these recipients to share the survey link with their colleagues.⁶ Our 100 survey requests resulted in 109 completed survey responses from faculty members in 27 different states and more than 40 unique colleges and universities, representing minority-serving institutions and private, public, and community colleges (Ardoin and Hicks 2024).

Our sample included more men than women: almost 60% identified as men, 39% identified as women, and the remaining 1% identified as other. Slightly more than 80% of our sample identified as white; approximately 7% identified as Latino, 4% as Black, 2% as Asian, and 4% as Middle Eastern/North African. A plurality of respondents predominantly teach American politics (40%); about 17% teach comparative politics, 14% international relations, 8% public administration or policy, and 6% political theory. Approximately 40% of our sample teach in departments that grant PhDs; slightly more than 35% teach in departments whose highest degree granted is an MA, 20% in departments that grant only BA/BS degrees, and the remaining 2% in community colleges. Most of our sample teach two or three classes per semester: 38% and 41%, respectively. Although precise data on political scientists are difficult to obtain, recent estimates from the American Political Science Association suggest that our sample population is reasonable with respect to many of these dimensions (Smith 2023). Specific information regarding our sample is in online appendix A.

When respondents were asked to identify the primary challenges or threats created by ChatGPT or other AI chatbots to the teaching and learning experience in higher education, their survey responses generally reflected concerns noted in the literature. The two most common concerns were student cheating or plagiarism (37%) and the challenges that ChatGPT presents for assessing student work (31%). Respondents also noted concerns regarding the long-term impact that ChatGPT may have on students’ writing and research skills (26%). Surprisingly, only a few respondents noted any concerns with ChatGPT’s tendencies to fabricate information or hallucinate (6%).

The findings from our survey focus on two broad themes. First, we asked faculty members a set of specific questions about what they currently are doing or plan to do to prevent cheating with ChatGPT or other AI. Second, we asked for their broader opinions about what the advent of ChatGPT and other AI means for the future of teaching and learning. We explore and discuss faculty members’ opinions about these themes in this order.

We also were interested in learning why some answers to these questions might systematically vary. For example, why might some faculty members actively change critical features of their courses in light of ChatGPT, whereas others give the technological innovation the equivalent of a shoulder shrug? We view our research as somewhat exploratory with respect to this purpose. We encourage future researchers to collect more data and to evaluate similar and different covariates. That said, two covariates that we explored are rank (i.e., junior versus senior) and underlying concerns with cheating in general; for example, perhaps some faculty members simply are more concerned with cheating in any form and therefore are more activated by technologies such as ChatGPT.⁷

We view these as reasonable covariates to explore. With respect to rank, it seems reasonable to assume that higher-ranking faculty members may be more antagonistic toward AI than lower-ranking faculty members because, on average, the former have spent more time developing and administering teaching and learning materials in a pre-AI/pre-ChatGPT world. With respect to cheating, we assume for various reasons that some faculty members have deeper concerns about student conduct and cheating (whether these concerns are rooted in experience or personality traits, we do not know), and these concerns could lead to greater concern about new technologies that change the nature of cheating.

To measure respondents’ underlying concern for cheating, we combined their answers to two different questions: how concerned they were about students (1) submitting another student’s work as their own and (2) hiring or having someone else write their papers. The answer choices ranged from “none at all” (0) to “a great deal” (5). These two questions had a Cronbach’s alpha of 0.7996, which indicates that they tap the same underlying construct. We viewed these questions as indicators that correlate with larger concerns about cheating in general. We first calculated the average value for each respondent across these two questions and then divided respondents into two groups based on their average scores: those whose mean scores were above average were classified as more concerned with cheating; those whose mean scores were at or below average were classified as less concerned. We reduced the number of categories in this covariate because our sample is small. (We explore an alternative conception in which we divided respondents into thirds in online appendix A.) Ideally, we would use the full richness of every covariate, but our sample is too small to make such an exercise reliable. An illustrative sketch of both covariate constructions appears after the next paragraph.

To measure faculty rank, we asked faculty members how long they have taught at a college or university and used this question to divide faculty into junior and senior groups. Respondents who said that they have taught at a university or college for seven or fewer years were grouped as junior faculty; respondents who have taught longer were classified as senior faculty. We explore how our outcomes vary across a wider array of teaching experience in online appendix B. However, as with the cheating-concern measure, our sample was small enough that we were compelled to use the fewest meaningful categories in order to maximize the sample size in each category.
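For concreteness, the following is a minimal sketch in Python of how both covariates could be constructed from respondent-level data. It is not the authors’ replication code: the column names and simulated values are hypothetical placeholders; only the scoring rules and the seven-year cutoff come from the text above.

```python
import numpy as np
import pandas as pd

# Hypothetical respondent-level data; column names and simulated values are
# placeholders, not the authors' actual survey variables.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "concern_copied_work": rng.integers(0, 6, size=109),   # 0-5: submit another's work
    "concern_hired_writer": rng.integers(0, 6, size=109),  # 0-5: hire a ghostwriter
    "years_teaching": rng.integers(1, 31, size=109),
})

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of sum)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

items = df[["concern_copied_work", "concern_hired_writer"]]
alpha = cronbach_alpha(items)  # the article reports 0.7996 on its real data

# Cheating-concern covariate: average the two items, then split at the sample
# mean (above average = more concerned; at or below = less concerned).
df["cheating_score"] = items.mean(axis=1)
df["more_concerned"] = df["cheating_score"] > df["cheating_score"].mean()

# Rank covariate: seven or fewer years of teaching = junior; more = senior.
df["rank_group"] = np.where(df["years_teaching"] <= 7, "junior", "senior")
```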

Figure 3 plots our findings with respect to what faculty members were currently doing or planning to do in light of ChatGPT. One clear pattern revealed in figure 3 is that there were few differences based on faculty rank.⁸ For example, t-tests revealed statistically significant differences in only two of eight questions. First, we found that junior faculty were more likely than senior faculty to require students to write essays in the classroom (p<0.1). Second, we found that senior faculty were more likely than junior faculty to allow students to use AI tools as long as they acknowledged their use (p<0.05). For lack of better terms, these two questions suggest that junior faculty were more conservative and senior faculty more liberal regarding the use of ChatGPT. (An illustrative sketch of this type of group comparison appears after figure 3.)

Figure 3 Active Changes to Teaching and Learning in Light of ChatGPT by Faculty Rank

Note: Horizontal bars around the point estimates represent smoothed confidence intervals. As the interval nears 100, the shading lightens.
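As a concrete illustration of the group comparisons reported in this section, the following is a minimal sketch of a two-sample t-test on a binary survey item, written in Python with scipy. The response arrays, group sizes, and proportions are simulated placeholders, not the survey data.

```python
import numpy as np
from scipy import stats

# Simulated binary responses (1 = already doing or planning to do the practice).
# The group sizes and probabilities below are placeholders, not the survey data.
rng = np.random.default_rng(1)
junior = rng.binomial(1, 0.55, size=40)   # e.g., junior faculty, in-class essays
senior = rng.binomial(1, 0.30, size=69)   # e.g., senior faculty, in-class essays

# Welch's two-sample t-test compares the two group means (here, proportions)
# without assuming equal variances across groups.
t_stat, p_value = stats.ttest_ind(junior, senior, equal_var=False)
print(f"junior = {junior.mean():.2f}, senior = {senior.mean():.2f}, p = {p_value:.3f}")
```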

Despite these few differences, the point estimates also revealed interesting patterns. A supermajority of respondents (more than 80%) stated that they were actively changing the types of questions they ask and/or the types of assessments they assign in light of ChatGPT. We also found majorities among both senior and junior faculty members who claimed that they will use more questions about recent events in their assessments. Because AI models are trained on stored data (e.g., archived websites) that quickly become outdated, very recent events are more challenging for them to address. However, ChatGPT can utilize its knowledge of past events to generate responses concerning current events. For instance, ChatGPT drew on its understanding of past midterm elections to respond to a query in December 2022 regarding factors that influenced the 2022 midterm elections. Finally, we found a narrow majority of faculty members who use or plan to use software to detect the use of AI. Fewer faculty members overall were interested in using browser-blocking software or asking students to write essays in the classroom.

Figure 4 presents the same findings with faculty members divided according to their underlying concerns about cheating. Of the eight questions, t-tests revealed one statistically significant difference and one near-significant difference. Faculty members who were more concerned about cheating were more likely to require students to write in the classroom (p<0.05). These concerned faculty members also were more likely to use software to detect AI (p≈0.14). Concern about cheating in particular appeared to be a stronger motivation than rank for faculty members to take action in light of ChatGPT.

Figure 4 Changes to Teaching and Learning in Light of ChatGPT by Concern for Cheating

Note: Horizontal bars around the point estimates represent smoothed confidence intervals. As the interval nears 100, the shading lightens.

Figure 5 presents our findings regarding the effects of ChatGPT and other AI on the future of teaching and learning. We were particularly interested in whether faculty view AI as a threat or an opportunity for teaching and learning. The proportion of faculty members who thought AI will fundamentally change higher education was almost 50%: slightly more than 50% for senior-level faculty and slightly less for junior-level faculty, although the differences were not statistically significant. That said, senior-level faculty were more likely than junior-level faculty to believe that AI and ChatGPT represent opportunities to re-envision learning and less likely to believe that they represent a threat to teaching and learning. The data also suggest that junior-level faculty were more skeptical than their senior-level colleagues about the role of AI in teaching and learning. However, the proportion of both junior- and senior-level faculty who believe that ChatGPT and other AI represent a significant threat was lower than 50%.

Figure 5 ChatGPT and the Future of Teaching and Learning According to Seniority

Note: Horizontal bars around the point estimates represent smoothed confidence intervals. As the interval nears 100, the shading lightens.

Figure 6 illustrates how these proportions vary with respect to faculty concerns about cheating. Only one sharp difference was found: the extent to which ChatGPT and other AI represent a threat to teaching and learning. A majority of faculty who generally were more concerned with cheating believed AI to be a significant threat, whereas only a minority of those who were less concerned about cheating believed the same. A t-test revealed that the difference was statistically significant (p≈0.07). Also, faculty members with more concerns about cheating were much more likely to indicate that AI will fundamentally change higher education. It appears that some of the pessimism regarding AI is anchored in concerns about cheating.

Figure 6 ChatGPT and the Future of Teaching and Learning Divided by Cheating Concerns

Note: Horizontal bars around the point estimates represent smoothed confidence intervals. As the interval nears 100, the shading lightens.

ChatGPT IN THE CLASSROOM

It may not be possible to create assignments or exams that are ChatGPT-proof; however, faculty can develop questions and essay prompts that minimize the likelihood of students misusing ChatGPT. This section discusses some of ChatGPT’s limitations in accurately responding to questions and includes recommendations for crafting questions that minimize the likelihood of students inappropriately using ChatGPT.

  • ChatGPT has problems with tasks that involve the application of knowledge. Therefore, questions that require students to apply course concepts and theories to contemporary political issues will limit their ability to use ChatGPT.

  • Assignments incorporating scaffolding such as outlines, multiple drafts, and opportunities for constructive feedback or peer review will limit students’ use of ChatGPT. These measures also align with best practices in writing pedagogy.

  • Requiring students to include in their research primary data from personal interviews, surveys, and experiments can restrict the use of ChatGPT. However, it is important to recognize that ChatGPT has the capability to generate fabricated interviews and survey results (Kelly 2023a).

  • ChatGPT’s training is not continuous; the training data for the latest version (i.e., ChatGPT-4) end in September 2021 (Wiggers 2023). Therefore, it struggles with answering questions concerning current events. As noted previously, however, faculty should recognize that ChatGPT can utilize its knowledge of past events to generate responses concerning current events.

  • The use of ChatGPT is limited when assignments or essay prompts require students to reflect on class discussions or activities because ChatGPT lacks knowledge of what transpired in these individual and unique settings.

  • ChatGPT struggles to accurately summarize and critique recently published scholarship not included as primary sources in its training data. ChatGPT will use secondary sources that may have referenced the recently published scholarship or previous scholarship by the same authors, leading to inaccurate summaries and critiques.

  • Faculty should dedicate an afternoon to personally exploring and experimenting with ChatGPT. Spending a few hours interacting with ChatGPT will provide a basic understanding of its potential impact and identify ways to minimize its potential negative effects on teaching.

  • There is no question or assignment that is “ChatGPT-proof.” To assess ChatGPT’s potential responses, faculty should submit their assignments and questions to it and adjust as needed. Also, it is important to recognize that ChatGPT’s answers to the same question vary each time it is submitted.

Recognizing the challenges of creating assignments and questions that are ChatGPT-proof, we encourage faculty to embrace rather than ignore or avoid this emerging technology and to integrate it into their teaching. ChatGPT is becoming increasingly prevalent in the field of politics and likely will be an essential tool for political science students’ future careers in campaigning, government, and law. An ability to use ChatGPT effectively may not only improve students’ writing and research skills (Sakhno 2023; Watkins 2022) but also help them to recognize abuses of ChatGPT. As noted by Kreps and Kriner (2023), ChatGPT may present a significant threat to democratic engagement and representation when nefarious individuals or interest groups utilize AI software to misrepresent public opinion through orchestrated astroturf campaigns. Incorporating ChatGPT in the classroom can help students to better understand how the technology works as well as its potential biases, limitations, and threats to manipulate democratic political processes.

Following are suggestions for incorporating ChatGPT into courses and facilitating students’ learning, writing, and critical thinking about the technology.⁹

Reflect and Improve

Step 1: Ask students to identify a major question or issue related to course content that ChatGPT can address.

Step 2: Have students reflect on ChatGPT’s output (e.g., what is correct or incorrect, what they do not know is correct or incorrect, what needs to be verified, and/or what needs to be expanded).

Step 3: Using Track Changes in MS Word or Suggesting in Google Docs, have students improve the output of ChatGPT (e.g., correcting errors or misinformation and expanding on shallow content).

Step 4: Have students submit their prompt and the improved ChatGPT response with their added content highlighted.

Reflect and Reference

Step 1: Ask students to identify a major question or issue related to course content that ChatGPT can address.

Step 2: Have students evaluate the references provided in ChatGPT’s output and then confirm the proper references and/or provide additional references. Significant weaknesses of ChatGPT are that it struggles to properly reference scholarship and regularly fabricates references.

Step 3: Have students submit a revised response to their initial ChatGPT prompt, which includes validated references with an annotated bibliography.

Alternative Perspective

Step 1: Ask students to identify a major question or issue related to course content that ChatGPT can address.

Step 2: Have students respond to ChatGPT’s output from a different perspective, apply a critical lens, clarify misinformation, or expand on specific arguments provided in ChatGPT’s response.

Reflective Learning

Step 1: Ask students to identify a major question or issue related to course content that ChatGPT can address.

Step 2: Have students submit the same question to ChatGPT multiple times, which will produce varied ChatGPT responses.

Step 3: Have students write a reflection essay on ChatGPT’s responses to their question. What did they learn from ChatGPT, and what issues did ChatGPT fail to address in some responses but not others? Did they find ChatGPT’s responses to be biased and, if so, how?

SYLLABI EXAMPLES

Students may not fully understand that using ChatGPT is a form of academic dishonesty, mainly because ChatGPT and similar AI programs are very new and many universities have not provided or communicated to students any standard policy for the use of AI programs.¹⁰ This section provides a few examples of language that faculty members may include in their syllabi.

Faculty should understand that the use of AI-generated text is difficult to prove, even with AI detection software, and as ChatGPT improves, it will become even more difficult.¹¹ With traditional instances of plagiarism, faculty easily can confirm a student’s academic dishonesty by identifying the plagiarized source and comparing the student’s text to the source. This ability to compare potentially plagiarized student text to a published source does not exist with AI-generated text. Also, it is possible that a student’s suspected AI submission is simply an example of very generic writing that an AI detector may falsely identify as “written by AI” (Packback.com).

In addition to the challenges of identifying AI-generated texts, it is difficult to determine what is and is not allowed. If students use ChatGPT to review their original texts for grammar and readability, is this an inappropriate use? Grammarly and other writing tools included in common writing software are simply more basic versions of AI software, and future updates of Microsoft Word are expected to include a form of ChatGPT to check for grammar and writing style (Kelly 2023b).

If faculty do not want students utilizing ChatGPT or similar AI programs in their courses, a clear statement in the syllabus (see the following example) should directly address the issue.

  • Syllabus Example 1: All written work submitted in this course must be originally produced by you. If you use an outside source, you must properly cite the source in the assignment. Using ChatGPT or similar AI software and submitting the work as your own is a form of academic dishonesty.

If you want your students to use ChatGPT or other generative AI software in an academically honest way—similar to how they may use and cite other traditional sources—consider adding syllabus language that instructs them on the appropriate use of generative AI. Following are three potential syllabi examples:

  • Syllabus Example 2: AI-generated content (e.g., from ChatGPT) can be a useful tool for research and learning. All work submitted for this course must be your original work. Using AI-generated content without proper citation or submitting AI-generated content as your own is considered plagiarism.

  • Syllabus Example 3: AI tools, such as ChatGPT, can be valuable for research, brainstorming, and generating ideas. If you are using AI tools, it is crucial that you critically engage with the information and develop your own understanding and analysis. Directly submitting AI-generated content without proper citation, substantial modification, or critical analysis is considered plagiarism.

  • Syllabus Example 4: In this course, it is expected that students will properly cite and attribute all sources, including AI-generated content. If you choose to use AI-generated content (e.g., from ChatGPT) in your work, you must provide appropriate attribution, clearly indicating the source of the content. Failing to do so is considered plagiarism.

CONCLUSIONS

ChatGPT and AI software are part of our future in the field of political science, academia, political campaigns, government, and our world. Many of today’s political science students likely will be using ChatGPT or similar AI software as an essential tool in their future careers. AI software is likely to have a significant role in the practice of law, extracting salient information from boxes of evidence and assisting with the drafting of motions and the citing of relevant case law (Villasenor 2023). ChatGPT also will assist with developing and writing campaign speeches, policy papers, assessment plans, and many other essential tasks. Microsoft already has partnered with OpenAI and recently released a premium version of its Teams software, which integrates ChatGPT-based tools for summarizing meeting notes, organizing personal tasks, and translating texts (Kelly 2023b).

Many of the concerns that faculty voice about ChatGPT echo those voiced by earlier generations about calculators, the Internet, and Wikipedia. Ten years ago, faculty regularly banned the use of Wikipedia; today, students create wiki pages and contribute to Wikipedia topics as part of their courses (Cassell 2018; Norell 2022).

Rather than viewing ChatGPT as a threat to academia and a potential tool for student cheating, perhaps we can recognize and embrace it as the next iteration of a more sophisticated form of Google, Wikipedia, and Grammarly. At the same time, we cannot ignore the serious concerns that have been identified with ChatGPT. One of the most notable is “garbage in, garbage out.” ChatGPT can answer many questions, but AI software does not actually “know” anything. AI has no standard or editorial review process for evaluating biases, distinguishing fact from fiction, and reliably providing users with proper references. Although the most recent version, ChatGPT-4, has displayed substantial improvements in its language skills and is less likely to produce “hallucinations” (A. Hughes 2023), these issues remain a serious concern for students blindly using AI software.¹² The significant improvements displayed in ChatGPT-4 lead us to wonder about the improvements and abilities that are likely in a few years with the inevitable release of ChatGPT X.x and other new and improved versions of AI software.

As political science faculty, it is particularly important that we assist our students with developing the skills necessary to use AI software responsibly and to better understand its challenges and weaknesses as they prepare for their careers and lives after the political science classroom.

DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the PS: Political Science & Politics Harvard Dataverse at https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/U5IQBE.

CONFLICTS OF INTEREST

The authors declare that there are no ethical issues or conflicts of interest in this research.

Footnotes

1. An example of the concerns related to ChatGPT is well represented by the title of an article published by Stephen Marche (2022) in The Atlantic: “The College Essay Is Dead: Nobody Is Prepared for How AI Will Transform Academia.”

2. Considering the heavy focus on critical-thinking skills in political science curricula and the importance of evaluating sources when examining contemporary politics, these issues are of particular concern to political scientists.

3. G. Baker (2021), C. Fadel and C. Lemke (2022), and M. Weller (2021) do not exist as publications. T. Bolukbasi et al. (2016) is currently available online as an open-access manuscript and has not been published in the journal that ChatGPT identified. The article “OpenAI’s New Language Generator GPT-3 Is Shockingly Good and Completely Mindless” was authored by Will Douglas Heaven in 2020, not by K. Hao (2020) as identified by ChatGPT in the original output.

4. Although OpenAI does not release to the public the specific data sources used to train ChatGPT, it confirms that ChatGPT-3.5 was trained on 45 terabytes of text data from multiple sources, including English Wikipedia, the New York Times, the LA Times, open-access journals, Google Patents, the Public Library of Science, the BookCorpus, and Reddit (Iyer 2022).

5. The ChatGPT Faculty Survey was submitted to Appalachian State University’s Institutional Review Board (IRB) on February 16, 2023, and was approved for distribution by the IRB on March 1, 2023.

6. The initial 100 emails were sent to a sample of political scientists created from editorial boards on which the authors have served as well as their personal networks. Additional political science faculty from minority-serving institutions, regional universities, and community colleges were included to achieve a more representative sample. Finally, junior-level colleagues at the authors’ institution were recruited to share the survey with their networks to give more junior-level faculty a voice.

7. In addition to rank, several other analyses were conducted to examine potential differences among gender, race, type of university, and primary area of teaching. No substantive or significant differences were identified among any of these additional variables.

8. The nodes—diamonds and squares for senior- and junior-level faculty members, respectively—represent point estimates for the proportion who already are doing or plan to do something. The horizontal lines represent confidence intervals that lighten in shading as the intervals near 100.

9. Writing and critical-thinking skills are two of the primary goals of most political science curricula and universities (Association of American Colleges and Universities 2012).

10. As an example of the issues facing faculty with regard to ChatGPT, a student in New Zealand admitted to using AI to write their papers, justifying it as a tool like Grammarly or Spellcheck: “I have the knowledge, I have the lived experience, I’m a good student, I go to all the tutorials, and I go to all the lectures and I read everything we have to read but I kind of felt I was being penalized because I don’t write eloquently and I didn’t feel that was right.” The student argued that they were not cheating because the university guidelines identified cheating only as allowing somebody else to write your paper. ChatGPT-3 is not “somebody else”—it is a program (Marche 2022).

11. Kumar, Mindzak, and Racz (2022) found that identifying whether text was produced by humans or by ChatGPT-3 is difficult: research participants struggled to distinguish AI- from human-generated text and frequently ascribed the AI writing samples to humans.

12. Research has shown that ChatGPT can be trained to state how confident it is in producing a factually correct answer (Lin, Hilton, and Evans 2021).

REFERENCES

Abd-Elaal, El-Sayed, Sithara H. P. W. Gamage, and Julie E. Mills. 2022. “Assisting Academics to Identify Computer-Generated Writing.” European Journal of Engineering Education 47 (5): 725–45. https://doi.org/10.1080/03043797.2022.2046709.
Ardoin, Phillip J., and William D. Hicks. 2024. “Replication Data for ‘Fear and Loathing: ChatGPT in the Political Science Classroom.’” PS: Political Science & Politics. DOI:10.7910/DVN/U5IQBE.
Association of American Colleges and Universities. 2012. “Liberal Education and America’s Promise (LEAP).” www.aacu.org/leap/vision.cfm.
Baum, Jeremy, and John Villasenor. 2023. “The Politics of AI: ChatGPT and Political Bias.” Brookings Institution Commentary, May 8. www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias.
Bolukbasi, Tolga, Kai-Wei Chang, James Y. Zou, Venkatesh Saligrama, and Adam T. Kalai. 2016. “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings.” In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS’16), 4356–64. Red Hook, NY: Curran Associates, Inc. https://dl.acm.org/doi/10.5555/3157382.3157584.
Borji, Ali. 2023. “A Categorical Archive of ChatGPT Failures.” Unpublished manuscript. ResearchGate. www.researchgate.net/publication/368333200_A_Categorical_Archive_of_ChatGPT_Failures.
Cassell, Mark K. 2018. “When the World Helps Teach Your Class: Using Wikipedia to Teach Controversial Issues.” PS: Political Science & Politics 51 (2): 427–33. https://doi.org/10.1017/S1049096517002293.
Clark, Elizabeth, Tal August, Sofia Serrano, Nikita Haduong, Suchin Gururangan, and Noah A. Smith. 2021. “All That’s ‘Human’ Is Not Gold: Evaluating Human Evaluation of Generated Text.” In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Vol. 1: Long Papers, 7282–96. https://doi.org/10.18653/v1/2021.acl-long.565.
Cotton, Debby R. E., Peter A. Cotton, and J. Reuben Shipway. 2023. “Chatting and Cheating: Ensuring Academic Integrity in the Era of ChatGPT.” Innovations in Education and Teaching International. https://doi.org/10.1080/14703297.2023.2190148.
Dans, Enrique. 2023. “ChatGPT and the Decline of Critical Thinking.” Insights, January 27. Segovia, Spain: IE University. www.ie.edu/insights/articles/chatgpt-and-the-decline-of-critical-thinking.
Fazackerley, Anna. 2023. “AI Makes Plagiarism Harder to Detect, Argue Academics—In Paper Written by Chatbot.” The Guardian, March 19. www.theguardian.com/technology/2023/mar/19/ai-makes-plagiarism-harder-to-detect-argue-academics-in-paper-written-by-chatbot.
Hartmann, Jochen, Jasper Schwenzow, and Maximilian Witte. 2023. “The Political Ideology of Conversational AI: Converging Evidence on ChatGPT’s Pro-Environmental, Left-Libertarian Orientation.” arXiv preprint arXiv:2301.01768.
Hu, Krystal. 2023. “ChatGPT Sets Record for Fastest Growing User Base – Analyst Note.” Reuters, February 1. www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01.
Hughes, Alex. 2023. “ChatGPT: Everything You Need to Know About OpenAI’s GPT-4 Tool.” BBC Science Focus Magazine, March 16. www.sciencefocus.com/news/chatgpt-gpt-4-tool-everything-you-need-to-know.
Hughes, Neil C. 2023. “ChatGPT and the Slow Decay of Critical Thinking.” Cybernews, March 5. https://cybernews.com/editorial/chatgpt-decay-critical-thinking.
Iyer, Aparna. 2022. “Behind ChatGPT’s Wisdom: 300 Bn Words, 570 GB Data.” Analytics India Magazine, December 15. https://analyticsindiamag.com/behind-chatgpts-wisdom-300-bn-words-570-gb-data.
Kelley, Kevin Jacob. 2023. “Teaching Actual Student Writing in an AI World.” Inside Higher Ed, January 18. www.insidehighered.com/advice/2023/01/19/ways-prevent-students-using-ai-tools-their-classes-opinion.
Kelly, Samantha Murphy. 2023a. “ChatGPT Passes Exams from Law and Business Schools.” CNN Business, January 26. www.cnn.com/2023/01/26/tech/chatgpt-passes-exams/index.html.
Kelly, Samantha Murphy. 2023b. “Microsoft Is Bringing ChatGPT Technology to Word, Excel, and Outlook.” CNN Business, March 16. www.cnn.com/2023/03/16/tech/openai-gpt-microsoft-365/index.html.
Kitazawa, Emily. 2023. “ChatGPT and Education: The Benefits & Dangers Explained.” Shortform, March 17. www.shortform.com/blog/chatgpt-and-education.
Kreps, Sarah, and Douglas Kriner. 2023. “How Generative AI Impacts Democratic Engagement.” Brookings Institution Commentary, March 21. www.brookings.edu/articles/how-generative-ai-impacts-democratic-engagement.
Kumar, Rahul, Michael Mindzak, and Rachel Racz. 2022. “Who Wrote This? The Use of Artificial Intelligence in the Academy.” Scholars Portal Books. http://hdl.handle.net/10464/16532.
Lin, Stephanie, Jacob Hilton, and Owain Evans. 2021. “Teaching Models to Express Their Uncertainty in Words.” arXiv preprint arXiv:2205.14334. https://doi.org/10.48550/arXiv.2205.14334.
Marche, Stephen. 2022. “The College Essay Is Dead: Nobody Is Prepared for How AI Will Transform Academia.” The Atlantic, December 6. www.theatlantic.com/technology/archive/2022/12/chatgpt-ai-writing-college-student-essays/672371.
Norell, Erik. 2022. “Civic Engagement Meets Service Learning: Improving Wikipedia’s Coverage of State Government Officials.” PS: Political Science & Politics 55 (2): 445–49. DOI:10.1017/S1049096521001451.
O’Shea, Dan. 2023. “ChatGPT and the Security Risks of Generative AI.” Fierce Electronics, March 11. www.fierceelectronics.com/electronics/chatgpt-and-security-risks-generative-ai.
Packback.com. “Professor Guide to AI Text Detection.” Accessed April 25, 2023. www.packback.co/pedagogy/professor-guide-to-ai-text-detection.
Perkins, Mike. 2023. “Academic Integrity Considerations of AI Large Language Models in the Post-Pandemic Era: ChatGPT and Beyond.” Journal of University Teaching & Learning Practice 20 (2). https://doi.org/10.53761/1.20.02.07.
Petkauskas, Vilius. 2023. “ChatGPT’s Answers Could Be Nothing but a Hallucination.” Cybernews, March 6. https://cybernews.com/tech/chatgpts-bard-ai-answers-hallucination.
Ryan, Tom. 2023. “How Much of a Security Risk Is ChatGPT?” Retailwire.com, April 10. https://retailwire.com/discussion/how-much-of-a-security-risk-is-chatgpt.
Sakhno, Inessa. 2023. “Exploring the Impact of ChatGPT on Political Science Research and Teaching.” The European Consortium for Political Research: Advancing Political Science, September 29. https://ecpr.eu/news/news/details/685.
Smith, Steven R. 2023. “2023 Executive Director’s Report.” Political Science Today 2 (3). www.flipsnack.com/politicalsciencetoday/political-science-today-v3-n2.html.
Villasenor, John. 2023. “How AI Will Revolutionize the Practice of Law.” Brookings Institution Commentary, March 20. www.brookings.edu/articles/how-ai-will-revolutionize-the-practice-of-law.
Wahle, Jan Philip, Terry Ruas, Tomáš Foltýnek, Norman Meuschke, and Bela Gipp. 2022. “Identifying Machine-Paraphrased Plagiarism.” In Information for a Better World: Shaping the Global Future, ed. Malte Smits, 393–413. New York: Springer International Publishing. https://doi.org/10.1007/978-3-030-96957-8_34.
Watkins, Ryan. 2022. “Update Your Course Syllabus for ChatGPT.” Medium.com, December 19. https://medium.com/@rwatkins_7167/updating-your-course-syllabus-for-chatgpt-965f4b57b003.
Wiggers, Kyle. 2023. “OpenAI Releases GPT-4, a Multimodal AI That It Claims Is State-of-the-Art.” TechCrunch, March 14. https://techcrunch.com/2023/03/14/openai-releases-gpt-4-ai-that-it-claims-is-state-of-the-art.