Policy Significance Statement
Artificial Intelligence (AI) is transforming electoral integrity worldwide, posing significant risks and offering innovative solutions. This commentary examines these risks and opportunities, particularly in the context of the Philippines' 2025 midterm elections, where disinformation threatens democratic processes. Policymakers are urged to implement robust regulatory frameworks, prioritising transparency, real-time monitoring, and enhanced digital literacy. Cross-sector collaboration and the development of culturally and linguistically tailored AI tools are critical to building resilient information ecosystems. These measures will help protect electoral integrity, promote informed public participation, and reinforce trust in democratic institutions.
1. Introduction
The rapid proliferation of artificial intelligence (AI) technologies, including generative AI (GAI), large language models (LLMs), and natural language processing (NLP), is reshaping global information ecosystems. While these technologies enable transparency and accessibility, they also exacerbate misinformation risks, threatening electoral integrity. This shift is particularly consequential in the Indo-Pacific, a region marked by diverse political systems, rising geopolitical tensions, and widespread digital engagement. The combination of evolving democratic institutions, varying levels of media literacy, and rapid technological adoption creates both opportunities and heightened challenges for managing misinformation. These factors make the Indo-Pacific especially vulnerable to AI-driven disinformation, with significant implications for electoral integrity and regional stability. As democracies increasingly rely on digital platforms, understanding the dual role of AI is vital for addressing disinformation and fostering trust in democratic processes.
This commentary explores AI's role in electoral misinformation, focusing on the Philippines' 2025 midterm elections, with comparative insights from Taiwan and India. These regions are selected for their diverse political environments, significant digital engagement, and recent experiences with electoral misinformation, making them valuable case studies for understanding AI's impact on democratic processes. The analysis underscores the importance of regulatory frameworks, collaborative initiatives, and technological innovation in mitigating misinformation's impact. By combining case studies with actionable recommendations, this commentary contributes to the broader discourse on leveraging AI responsibly to protect democracy. It explores the dual role of AI and proposes scalable solutions and regulatory measures to combat disinformation effectively.
2. Section 1: AI as a double-edged sword
This section explores the dual role of AI in modern electoral processes around the world, highlighting its ability to both combat and propagate misinformation. It examines how AI contributes to misinformation through deepfakes (AI-generated manipulations of media), algorithmic biases, and hyper-realistic synthetic media. At the same time, these technologies offer solutions such as automated fact-checking, content moderation, and enhanced accessibility to political information. The section sets the stage for understanding AI’s opportunities and risks within the broader context of democratic integrity.
It is essential to differentiate AI-generated material from disinformation, which can also stem from non-synthetic sources and often becomes more persuasive when grounded in partial truths or emotionally resonant narratives. It is equally important to distinguish between "fake news" (deliberately fabricated content) and "distorted news" (subtly manipulated facts). The latter is often more persuasive and harder to identify, especially as the boundaries between fact and fiction are shaped by personal beliefs and context. Legal and cognitive challenges arise in defining and regulating misinformation, as these distinctions are not always clear-cut (Neuwirth, 2021).
Misinformation spreads faster and more broadly than the truth across all types of information, with false political news being particularly impactful (Vosoughi et al., 2018). Fake news often mimics legitimate content, making the two difficult to tell apart, while its rapid spread outpaces fact-checking efforts. For instance, during the 2024 US elections, X's AI chatbot, Grok, spread false information about the process for adding new candidates to the ballot. While X initially resisted corrections, election officials intervened to clarify the facts (Leingang, 2024). Similarly, in elections in Indonesia and Pakistan, AI-generated "softfakes" (manipulated media portraying candidates favourably) raised ethical concerns about voter manipulation and the risks to democratic processes (Chowdhury, 2024). Sophisticated AI-generated disinformation had minimal impact on recent elections in the UK, France, and the European Parliament, mostly reinforcing existing beliefs (Stockwell, 2024); traditional methods, such as bots (discussed in 4.1) and influencers, proved more effective at reaching broad audiences and spreading disinformation (Heikkilä, 2024). While AI algorithms are helpful in addressing these issues, they have limitations, and as deceptive tactics evolve, a multidisciplinary approach becomes essential (Aïmeur et al., 2023).
As AI technologies continue to shape global political discourse, the need for AI-specific policies and accountability measures becomes ever more urgent. The 2024 "super election year" saw AI-driven disinformation influence campaigns in over 60 countries. While safeguards such as policy protections, industry standards, and voter scepticism helped limit the negative effects, three key trends persist: the development of increasingly persuasive AI tools, the growing prevalence of AI-generated content, and public disengagement from political discourse (Carr and Köhler, 2024). Trust disparities also remain stark: developing countries report higher levels of trust in institutions than G7 nations, while governments face significant distrust, driven by perceptions of incompetence, unethical behaviour, and the belief that leaders intentionally mislead the public. Poorly managed innovation and the perception of political interference in science further exacerbate trust issues, particularly in developed nations (Edelman, 2024). Filipinos, for their part, now demand tangible proof before extending trust, urging institutions to adopt transparency, competence, and ethical conduct as foundational values (EON The Stakeholders Relations Group and Ateneo de Manila University, 2024).
On the other hand, AI-generated content, especially from well-trained models, can contribute positively to democracy by enhancing access to accurate information, supporting fact-checking initiatives, and fostering informed public discourse. For example, AI, including NLP (discussed in 4.3) and machine learning, has been used in peacebuilding efforts by the United Nations (UN), enabling large-scale digital dialogues in conflict zones to identify shared concerns and potential areas of consensus (Alavi et al., 2022). In addition, the 2024 Nobel Prizes in Physics and Chemistry recognised groundbreaking contributions to AI, underscoring its immense potential in shaping the future of medicine and science (Li and Gilbert, 2024).
The section underscores that while AI offers transformative opportunities for improving electoral integrity, its misuse poses significant risks. This duality necessitates a nuanced approach to leveraging these technologies responsibly. The next section delves into regional case studies to illustrate how AI-driven misinformation and countermeasures manifest in diverse political contexts.
3. Section 2: regional challenges and strategies
Disinformation is a global challenge affecting democracies at all stages of development, not just those with weak regulation or political instability. In the Indo-Pacific, the Philippines, Taiwan, and India provide distinct perspectives on how political, cultural, and technological factors influence the spread and management of disinformation.
The Philippines faces significant risks due to its young democracy, high digital engagement, and history of political manipulation through social media. These risks are heightened by increasing geopolitical pressure from China (Council on Foreign Relations, 2024), similar to Taiwan's situation. Taiwan has built strong defences against disinformation through proactive regulation, real-time fact-checking, and media literacy, strategies the Philippines could adopt. India's experience, with its vast, diverse population, offers insights into combating disinformation on a large scale through public reporting mechanisms and digital literacy initiatives, relevant to the Philippines' own diversity across Luzon, the Visayas, and Mindanao.
This section provides an analysis of how disinformation has influenced electoral processes in the Philippines, Taiwan, and India. It highlights the Philippines' challenges with historical revisionism and social media exploitation, Taiwan's strategies to counter geopolitical disinformation, and India's innovative public reporting mechanisms during its general elections. By comparing these case studies, the section showcases both the commonalities and the distinct responses across the region.
3.1. Philippines: the role of social media in shaping the 2025 election
Social media's influence is especially pronounced in the Philippines, which has one of the highest rates of social media usage globally (Balita, 2023; Telenor Asia, 2023). As the country approaches the 2025 midterm elections, it faces a growing threat from AI-driven disinformation. Building on patterns from previous election cycles discussed below, this threat now includes the added complexity of AI-generated content, such as deepfakes, which has the potential to intensify disinformation and undermine electoral integrity. Digital literacy remains limited, and entrenched political interests continue to benefit from the spread of disinformation (Enriquez, 2024).
In 2016, the Philippines earned the label "patient zero" in the global disinformation epidemic due to rampant false narratives. Former President Duterte's campaign effectively used social media to promote aggressive rhetoric, while media literacy efforts lagged. Disinformation networks like Twinmark Media amplified Duterte's message through platforms such as Trending News Portal (TNP). Although Twinmark was banned from Facebook in 2019 for "coordinated inauthentic behaviour," it quickly resurfaced with the help of micro-influencers, bypassing platform regulations (Fallorina et al., 2023; Hapal, 2024). During Duterte's presidency (2016–2022), authoritarian policies, like the anti-drug campaign, gained support through disinformation from state-backed "troll farms," framing opposition figures as communist sympathisers and silencing critics (Arugay and Mendoza, 2024).
In 2022, President Marcos Jr. constructed a complex media ecosystem blending historical revisionism with influencer narratives, polarising the political landscape and evading regulatory oversight. The Marcos Jr. campaign focused on rehabilitating the Marcos family image and swaying public opinion, particularly among young Filipinos who were digitally active but vulnerable to disinformation due to limited media literacy (Chua and Khan, 2023; Marcelino, 2023). Many young people in the Philippines, unaware of the dictatorship's history of human rights abuses and corruption, have developed favourable views influenced, among other factors, by economic struggles and the punitive nature of post-Marcos reforms (Tigno et al., 2024). TikTok played a critical role in Marcos Jr.'s digital strategy, with influencers sharing videos portraying the Marcos regime as a time of prosperity and stability (de Guzman, 2022). TikTok's algorithm amplified these messages, allowing them to go viral rapidly. This environment enabled the spread of revisionist narratives, including the "Marcos gold" myth and conspiracy theories portraying the EDSA revolution as a fabricated power grab (Marcelino, 2023; Arugay and Mendoza, 2024). Marcos Jr.'s use of social media allowed him to bypass traditional media channels, which often critique the Marcos legacy (de Guzman, 2022; Marcelino, 2023). The blend of historical nostalgia, a desire for continuity, and regional loyalty outweighed secondary factors like age, education, and socioeconomic status in the success of Marcos Jr.'s campaign (Dulay et al., 2023). Disinformation campaigns have targeted both political figures and governmental institutions, spreading false narratives that led to harassment, violence, and stigmatisation (Fallorina et al., 2023).
This drift towards digital autocratisation under Duterte and Marcos Jr., fuelled by state-backed disinformation, poses a serious challenge (Arugay and Mendoza, 2024). Efforts to combat disinformation, led by civil society, academia, and the media, focus on integrating Media and Information Literacy into school curricula and fostering fact-checking collaborations (Chua and Khan, 2023). The National Library of the Philippines also offers virtual reference services to ensure equitable access to information (Romero and Fuellos, 2024).
However, significant challenges remain, including legal, ethical, and privacy concerns, limited AI awareness, and resource constraints, as highlighted in the National AI Roadmap and the Philippine Innovative Startup Act (Marcelino, 2023; Amil, 2024). The country's weak regulatory framework previously allowed entities like Cambridge Analytica to test online propaganda tactics (Wylie, 2019). While there is no direct evidence that Duterte and Marcos Jr. have used AI-driven tools, the growing use of these technologies in the Philippines highlights a serious threat to electoral integrity. Addressing these issues requires regulatory reforms and a collaborative, multi-stakeholder approach to dismantling entrenched disinformation networks (Enriquez, 2024). To regulate deepfake creation and distribution, the House of Representatives introduced the Deepfake Accountability and Transparency Act (Bill 10,567), requiring clear verbal and written disclosures for AI-generated content (Digital Policy Alert, 2024). Similarly, the Commission on Elections (COMELEC) has issued guidelines for the 2025 election to counter AI-driven disinformation. These include mandating transparency in AI-generated content and banning deepfakes used to spread falsehoods (Enriquez, 2024). However, disinformation campaigns remain highly adaptive, exploiting encrypted platforms like WhatsApp, which are challenging for AI systems to monitor.
To address the risks posed by deepfakes and synthetic media, scholars suggest strengthening existing laws, such as the Data Privacy Act, the Intellectual Property Code, and the Consumer Act, rather than introducing new regulations. They also recommend implementing a charge system to penalise irresponsible AI use and promoting co-regulation, which involves collaboration between government, industry, and civil society. Additionally, integrating AI governance into the National AI Roadmap is advised (Dayrit et al., 2024). Partnerships with international organisations, governments, and the private sector are also crucial for technology transfer, capacity building, and improving digital literacy. A positive development is President Marcos Jr.'s emphasis on balanced global partnerships to enhance the country's internet infrastructure and cybersecurity (Schipper, 2024). Similarly, the Philippine Department of Information and Communications Technology (DICT) is collaborating with AI providers such as OpenAI and Google to counter the threat of deepfakes ahead of the 2025 midterm elections (Dizon, 2024). The DICT advocates embedding watermarks (discussed in 4.4) in AI-generated content to signal its origin, and is investing in tools to monitor and detect fake content online. Inspired by Singapore's approach, the DICT is exploring fact-checking mechanisms that allow disputed posts to remain visible but include government-verified information to provide balanced perspectives. However, improving media literacy among the population is essential for this measure to be effective.
Given the prevalence of historical revisionism in the Philippines, AI-driven tools must prioritise detecting and mitigating narrative manipulation, especially in politically sensitive contexts where cultural identity and national history are critical. Similar to the Philippines, other countries in the region, such as Taiwan, face their own unique challenges with misinformation, demonstrating the global nature of AI’s dual use in electoral integrity.
3.2. Taiwan: GAI in democratic engagement and disinformation
Similar to the Philippines, Taiwan faces unique challenges with misinformation, albeit through different mediums and with different countermeasures. Taiwan, a stable democracy facing constant geopolitical pressure, primarily from China, implements proactive measures such as real-time fact-checking and media resilience. During Taiwan's 2018 local elections, researchers at the University of Queensland (Australia) employed an advanced AI algorithm to detect and explain fake news. This system not only identified false information but also clarified how it reached its conclusions, prioritising transparency and accountability (Sadiq and Demartini, 2024). During the 2020 elections, Taiwan's strategy also focused on swiftly identifying, combating, and punishing disinformation while promoting transparency. Key elements of this strategy include media literacy, rapid debunking, and coordination between government and civil society (Kuo, 2021).
Taiwan amended its laws in 2023, including the Presidential and Vice-Presidential Election and Recall Act and the Civil Servants Election and Recall Act, to impose severe penalties for deepfakes. The government continued collaboration with civil society and independent fact-checkers to combat disinformation. Taiwan AI Labs plays a proactive role, developing solutions like the “Infodemic” platform for real-time monitoring and analysis of disinformation (Council of Asian Liberals and Democrats, 2024).
However, during Taiwan's 2024 presidential election, GAI tools played a dual role as both allies and adversaries in the fight for democratic integrity. The incorporation of social, cultural, and political symbols into TikTok-based anti-disinformation campaigns highlights the complementary role of symbolic communication in fostering engagement and trust (Bhattacharya et al., 2024). Media outlets like Taiwan Television Broadcasting System (TVBS) and Formosa Television (FTV) leveraged GAI to counter disinformation effectively. Nevertheless, challenges persisted with the rapid spread of AI-generated content on platforms such as YouTube and Douyin (TikTok). The proliferation of deepfakes and other AI-generated content blurred the distinction between factual and fabricated material, exacerbating political divisions and increasing susceptibility to foreign influence (Hung et al., 2024). Taiwan's experience underscores the urgent need for the Philippines to actively cultivate high media literacy and foster strong civil society engagement.
3.3. India: battle against AI-generated misinformation in the 2024 general election
India, as the world's largest democracy, faces challenges related to large-scale misinformation across diverse regions and languages, offering valuable insights into managing disinformation at a national level. To address this, the country integrates information literacy into education, promotes digital literacy through initiatives like the Digital India campaign, and improves access to trustworthy information sources (Bhakte, 2024). India's proactive integration of digital literacy into education curricula and community-driven fact-checking can serve as a model for the Philippines to emulate.
During the 2024 Indian General Election, AI-generated misinformation surged, becoming a significant challenge. In response, the Misinformation Combat Alliance launched the Deepfakes Analysis Unit (DAU), a pioneering initiative that enabled the public to report suspicious audio and video content via a WhatsApp tipline (Nannaware et al., 2025). The DAU categorised content as deepfake, cheapfake, or AI-generated, aiding in the identification of misleading materials.
The tipline received hundreds of submissions, mainly videos, which were analysed using AI detection tools. When manipulation was confirmed, the DAU collaborated with fact-checkers to verify the content and publish public reports, offering guidance on identifying synthetic media. The initiative also highlighted the surge in cheapfakes (media manipulated with simple, conventional editing techniques rather than sophisticated AI), which outnumbered sophisticated deepfakes during the election cycle (Raina, 2024).
By partnering with media outlets and detection experts, the DAU raised public awareness about AI-driven misinformation and set a precedent for global collaboration. To further combat AI-related election misinformation, India seeks to strengthen existing laws, such as the Information Technology Act, and encourage self-regulation for high-risk AI applications. Drawing inspiration from the EU's AI Act and the US's voluntary frameworks (as discussed in Section 4), India plans to develop targeted guidelines through collaborative governance, potentially through the proposed Artificial Intelligence Standards Institute (AISI) (Mohanty and Sahu, 2024).
Combating disinformation requires context-specific strategies, as no universal solution fits all. The Philippines faces widespread distrust in government due to past false narratives, now worsened by AI-driven disinformation, demanding proactive and culturally sensitive responses. Taiwan, despite strong democratic institutions, struggles with disinformation from foreign influence. India’s vast diversity and linguistic complexity make misinformation harder to manage. Success in one country may create new challenges elsewhere. Governments must continuously adapt and build public trust to effectively counter disinformation. Building on these insights, the next section examines selected tools and frameworks available to combat misinformation effectively.
4. Section 3: leveraging solutions to combat misinformation
This section focuses on the technological tools and regulatory frameworks essential for combating misinformation. It discusses the potential of GAI (which can create new content and simulate human-like creativity), LLMs (deep learning models that can understand and generate human language), and advanced NLP methods (such as those used in chatbots and language translation) to detect and counter false narratives. The section also highlights ethical considerations, such as transparency, fairness, and human oversight, while analysing global and regional regulatory approaches to AI governance.
4.1. Generative AI (GAI)
GAI can create hyper-realistic synthetic content, such as deepfakes, which poses significant risks for spreading misinformation. For instance, non-consensual deepfake pornography underscores the urgent need for regulatory measures (Roseman, 2024). Similarly, deepfake videos and audio are increasingly weaponised in disinformation campaigns, influencing public opinion in concerning ways (Li and Callegari, 2024). Disinformation bots can misuse GAI to spread false narratives or manipulate information at scale; such bots have flooded social media, making it hard for users to distinguish real from fake content and undermining trust in the electoral process. Moreover, output from tools like DALL-E and ChatGPT increasingly feeds back into future training datasets, risking a negative feedback loop in which even unconvincing AI-generated content causes harm: it can degrade model quality over time, reinforce biases, and reduce the diversity of future AI systems (Martínez et al., 2023; Angwin et al., 2024; Chafetz et al., 2024).
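To make the feedback-loop risk concrete, the toy simulation below treats a "model" as nothing more than an empirical distribution over phrases and repeatedly refits it on its own samples; phrases that happen not to be sampled vanish permanently, so diversity can only shrink across generations. This is a stylised illustration of the dynamic described above, not a reproduction of the cited studies, and every name and number in it is arbitrary.

```python
# Toy illustration of the synthetic-data feedback loop: a "model" that is just an
# empirical distribution over phrases is repeatedly refit on its own generations.
# Unsampled phrases disappear permanently, so diversity monotonically shrinks.
import random

random.seed(1)
vocabulary = [f"phrase_{i}" for i in range(100)]  # generation 0: a diverse "real" corpus
model = vocabulary

for generation in range(1, 6):
    # Generate synthetic training data from the current model, then refit on it:
    # the next model only knows the phrases it happened to see.
    synthetic_corpus = [random.choice(model) for _ in range(100)]
    model = sorted(set(synthetic_corpus))
    print(f"generation {generation}: {len(model)} distinct phrases remain")
```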
On the positive side, GAI enhances academic research by streamlining idea development, automating content creation, expediting data analysis, and fostering interdisciplinary collaboration, significantly improving publishing efficiency (Khalifa and Albadawy, 2024). Moreover, GAI can strengthen information ecosystems by automating content moderation and detecting manipulated media. These applications highlight the dual nature of GAI and the necessity for strict oversight and ethical guidelines to harness its potential responsibly.
4.2. Large language models (LLMs)
Disinformation on social media often follows a predictable pattern. AI tools, like LLMs, generate convincing false content, which is amplified by social media algorithms prioritising engagement. Analytics then target specific demographics, boosting disinformation through likes, shares, and comments (Barman et al., 2024).
However, LLMs also play a critical role in addressing disinformation despite the amplification of false content by social media algorithms. Scalable solutions, such as the RoBERTa model, achieve up to 98% accuracy in detecting fake news, offering a promising tool for countering misinformation effectively (Wang et al., 2024). These models can be integrated with systems like Facebook's DeepText and Google's Perspective API. In low-resource settings, few-shot learning frameworks like DetectYSF improve efficiency by reducing the need for large datasets. DetectYSF leverages pre-trained models and advanced techniques to achieve high accuracy with limited data, incorporating social context and misinformation patterns to improve performance, especially in politically sensitive environments (Jin et al., 2024).
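The cited studies do not publish a single reference implementation, but the general shape of a transformer-based fake-news classifier can be sketched with the Hugging Face Transformers library. The checkpoint name below is a placeholder (a real deployment would load a model fine-tuned on labelled election-related claims), so the probabilities it produces here are illustrative only.

```python
# Sketch of a RoBERTa-style misinformation classifier using Hugging Face Transformers.
# "roberta-base" is a placeholder: without fine-tuning on labelled fake-news data,
# the classification head is untrained and the returned probabilities are meaningless.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-base"  # assumption: swap in a checkpoint fine-tuned for fake-news detection

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def misinformation_probability(text: str) -> float:
    """Return the model's probability that the input text is misinformation (label 1)."""
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(misinformation_probability(
    "Vote-counting machines were secretly switched off nationwide on election night."
))
```

In practice, such a classifier would be one component of a wider pipeline that surfaces evidence to human fact-checkers, since headline accuracy figures such as the 98% cited above are dataset-specific.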
Several strategies are being explored to enhance LLMs' ability to combat misinformation. These include expanding training data, using active learning to focus on the most relevant information, and guiding models to provide more accurate responses (Zeng et al., 2024; Manfredi Sánchez and Ufarte Ruiz, 2020). A promising technique, adversarial contrastive learning, helps LLMs separate truthful information from falsehoods more effectively, while meta-learning allows LLMs to adapt quickly to emerging misinformation trends, keeping them effective in real time (Chen and Shu, 2024). Additional methods, such as knowledge-augmented strategies, integrate external information to improve fact-checking, while multilingual fact-checking ensures accuracy across different languages. LLMs could also flag false information in real time, helping prevent it from spreading (Vykopal et al., 2024). However, many AI models are not optimised for regional languages like Ilocano or Cebuano (Philippines), limiting their effectiveness in rural areas.
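The knowledge-augmented strategy mentioned above can be illustrated with a minimal retrieval step: an incoming claim is compared against a small store of verified statements, and the closest matches are handed to a downstream model or human reviewer as context. The sketch below uses the sentence-transformers library with a multilingual encoder (relevant for mixed Filipino and English posts); the "verified" statements are invented placeholders, not real advisories.

```python
# Minimal sketch of a knowledge-augmented check: retrieve the verified statements most
# similar to an incoming claim so a downstream model (or a human fact-checker) can judge
# the claim against them. The verified statements below are illustrative placeholders.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

verified_statements = [
    "The election commission has not changed the official list of candidates after the filing deadline.",
    "Overseas voting opens several weeks before election day under published rules.",
    "Vote-counting machines are subject to a random manual audit after the polls close.",
]
corpus_embeddings = encoder.encode(verified_statements, convert_to_tensor=True)

def retrieve_evidence(claim: str, top_k: int = 2):
    """Return the top-k verified statements most similar to the claim, with scores."""
    claim_embedding = encoder.encode(claim, convert_to_tensor=True)
    hits = util.semantic_search(claim_embedding, corpus_embeddings, top_k=top_k)[0]
    return [(verified_statements[h["corpus_id"]], float(h["score"])) for h in hits]

for statement, score in retrieve_evidence("New senatorial candidates were secretly added last week"):
    print(f"{score:.2f}  {statement}")
```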
Moreover, LLMs carry inherent biases shaped by their design and regional contexts. Western models, for instance, often prioritise individual freedom, while non-Western models emphasise state security and stability (Vecellio Segate, 2022; Buyl et al., 2024). These biases can influence political discourse, particularly during elections, where they may amplify misinformation or favour specific ideologies. The Expanded ASEAN Guide on AI Governance and Ethics showcases regional initiatives like Singapore's Moonshot Project and Vietnam's PhoGPT, which promote collaboration and culturally relevant AI tools. The Moonshot Project evaluates LLMs through benchmarking and automated red-teaming, ensuring safety and alignment with ASEAN contexts, while PhoGPT, tailored for Vietnamese language and culture, fosters innovation and addresses gaps in mainstream models (ASEAN, 2024).
4.3. NLP and LSTM networks
NLP is a field of AI that enables machines to understand, interpret, and generate human language. It encompasses text analysis, language generation, speech recognition, machine translation, text summarisation, and question answering. NLP techniques have been used to categorise TikTok posts and comments based on the presence and type of social, cultural, and political symbols, leveraging advanced models like OpenAI's GPT-4 for detailed analysis and interpretation of large datasets (Bhattacharya et al., 2024). NLP also plays a critical role in improving accessibility to political information, helping combat misinformation by making complex data, such as parliamentary speeches, more comprehensible (Alcoforado et al., 2024).
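The cited study's prompts and symbol taxonomy are not reproduced here; the sketch below simply shows how a general-purpose LLM API can be asked to assign one of a few symbol categories to a post, which is the basic mechanic behind this kind of large-scale categorisation. The category list, prompt, and model name are illustrative assumptions, not the study's actual setup.

```python
# Minimal sketch: prompting a general-purpose LLM to label a social media post with the
# type of symbol it invokes. Categories, prompt, and model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CATEGORIES = ["political symbol", "cultural symbol", "social symbol", "none"]

def categorise_post(post: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Classify the post into exactly one of: "
                        + ", ".join(CATEGORIES) + ". Reply with the label only."},
            {"role": "user", "content": post},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

print(categorise_post("Our grandparents built this nation's golden age. Time to bring it back."))
```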
Long Short-Term Memory (LSTM) networks, a type of AI adept at learning from sequential data, offer significant potential for identifying misinformation patterns. By retaining long-term dependencies through memory cells and gates, LSTMs can analyse social media for anomalies like rapid content spread or spikes in activity, common indicators of disinformation campaigns. However, fairness is vital to ensure these systems do not disproportionately target specific groups (Han et al., 2024).
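As a minimal sketch of this anomaly-detection idea, the PyTorch model below reads an hourly activity sequence for a piece of content (for example shares, replies, and the share of newly created accounts) and outputs a probability that the spread pattern resembles coordinated amplification. The feature set and layer sizes are illustrative assumptions; a real system would be trained on labelled campaign data and audited for the fairness concerns noted above.

```python
# Sketch of an LSTM-based detector for anomalous spread patterns. Features, sizes,
# and the synthetic input are illustrative assumptions, not a production design.
import torch
import torch.nn as nn

class AmplificationDetector(nn.Module):
    def __init__(self, n_features: int = 3, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, time_steps, n_features), e.g. hourly activity features.
        _, (h_n, _) = self.lstm(x)                 # final hidden state summarises the sequence
        return torch.sigmoid(self.head(h_n[-1]))   # probability of anomalous amplification

model = AmplificationDetector()
hourly_activity = torch.rand(1, 48, 3)             # two days of synthetic hourly features
print(model(hourly_activity).item())                # untrained output; training data would be required
```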
4.4. AI detection tools and automated fact-checking
To tackle AI-generated deepfakes, experts recommend a multi-layered approach that combines detection tools, public awareness, and legal measures. Companies like OpenAI and Microsoft have developed tools to identify synthetic media, while AI detection systems provide extra protection. Digital watermarks, which embed hidden data in AI-generated content, can be detected using advanced detection systems, like Microsoft's Video Authenticator, ensuring traceability without affecting the content's appearance (AI Team, 2024). The EU's AI Act, for example, includes requirements for providers of AI systems to mark their output as AI-generated content. Authenticity standards, supported by the Coalition for Content Provenance and Authenticity (C2PA), are also vital in distinguishing authentic from manipulated content (Li and Callegari, 2024).
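Production watermarking schemes and C2PA provenance manifests are cryptographically and statistically far more robust than anything shown here, but the basic idea of hidden, machine-readable marking can be illustrated with a toy least-significant-bit watermark on an image array: the mark is invisible to the eye yet recoverable by software. Everything below is a conceptual sketch only.

```python
# Toy illustration of an invisible watermark: a short bit string is written into the
# least significant bits of an image array and read back for verification. Real AI-content
# watermarks and C2PA manifests are far more robust; this only demonstrates the concept.
import numpy as np

def embed(image: np.ndarray, bits: str) -> np.ndarray:
    marked = image.copy().reshape(-1)
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | int(bit)   # overwrite only the lowest bit
    return marked.reshape(image.shape)

def extract(image: np.ndarray, length: int) -> str:
    return "".join(str(pixel & 1) for pixel in image.reshape(-1)[:length])

original = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
watermark = "1011001110001111"                      # e.g. an "AI-generated" flag plus an ID
stamped = embed(original, watermark)

assert extract(stamped, len(watermark)) == watermark
print("Watermark recovered; max pixel change:",
      int(np.max(np.abs(stamped.astype(int) - original.astype(int)))))
```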
Automated fact-checking is an emerging tool in the fight against disinformation, but fully automated solutions are still being developed. One challenge in creating these systems is detecting complex truth claims, which may require more flexible categories than the rigid true/false dichotomy that current systems typically use (Kavtaradze, 2024). Recently, Meta, influenced by the X platform, ended its third-party fact-checking programme, allowing user corrections instead (Isaac and Schleifer, 2025). While seen by some as a win for free speech, this shift risks fuelling misinformation. In the Philippines, with the 2025 elections nearing, it could erode trust in internet voting and amplify disinformation targeting overseas voters, underscoring the need for robust local safeguards (Pangalangan, 2025).
4.5. Transparency and accountability
To maximise the positive impact of AI, it is crucial to establish systems of transparency and accountability. These systems help ensure AI tools are applied in constructive ways, such as supporting fact-checking efforts and enhancing media literacy, while preventing their misuse in spreading misinformation (Endert, 2024). Promoting digital literacy empowers individuals to critically assess online information, helping to curb the spread of false content. Public education and modernised libraries that offer reliable information sources are, for instance, crucial in developing countries. Effective information literacy relies on fostering critical thinking, ensuring access to high-quality information, and enhancing the ability to evaluate source reliability (Haque et al., 2024). In the Philippines, AI adoption is growing rapidly, particularly among knowledge workers who view it as essential for business competitiveness (Microsoft and LinkedIn, 2024). However, frequent use of AI tools negatively affects critical thinking, especially among younger users who rely heavily on AI (Gerlich, 2025). This dependence increases vulnerability to misinformation, particularly in a country facing digital literacy challenges. Addressing it requires improved training to foster critical engagement with AI and reduce cognitive dependence, preventing its political misuse.
Moreover, effective transparency requires comprehensive auditing frameworks. Governance audits ensure adherence to ethical practices, model audits evaluate performance and identify biases, and application audits track real-world usage to prevent the spread of disinformation. This multi-layered approach is vital for safeguarding the integrity of AI systems in the battle against misinformation (Mökander et al., 2024).
A subnational example is also instructive, with California serving as a noteworthy case. The state is particularly proactive in combating misinformation, especially in the context of AI-generated content. Through legislation like AB 2839, SB 942, and AB 2013, California has introduced clear mandates aimed at enhancing transparency and accountability in digital media. These laws not only regulate the manipulation of political content but also require AI developers to provide tools that help users detect synthetic media and mandate transparency in AI training data (Pinto, 2024). The Philippines can learn from California's efforts in AB 2839 to balance free speech with anti-disinformation initiatives (Rabiu, 2024). However, successful implementation would require addressing enforcement challenges, strengthening partnerships with tech companies, and investing in digital infrastructure. Public education on the risks of misinformation is crucial for building support for such regulations.
4.6. Building resilience: enhancing digital security through proactive design
The technological singularity refers to a point at which AI exceeds human intelligence, potentially amplifying risks such as AI-driven cyberattacks and disinformation (Radanliev et al., 2022). While AI strengthens digital security, it also introduces vulnerabilities like data poisoning and AI-driven phishing (Vassilev et al., 2024). As AI is used to develop increasingly sophisticated malware that adapts to evade detection (Gaber et al., 2024), the risk of manipulation rises, particularly in the spread of fake news. Deepfake technology, powered by GAI, enables social engineering attacks, such as impersonating executives in phishing schemes. These activities have become so widespread that the EU has initiated legal action against Meta for failing to adequately prevent malicious actors, including a Russian influence campaign, from exploiting its platform (McMahon, 2024). A resilience-by-design approach (creating systems that can quickly recover from disruptions and ensuring their continued function), coupled with defence-in-depth strategies (implementing multiple security measures at various levels), can help mitigate these risks (Sai, 2024). However, gaps often exist between intentions and implementation due to resource and integration challenges. Addressing these requires better resource allocation, clear policies, and regular assessments (Radanliev, 2024).
The success of discussed technologies in addressing misinformation depends on their ethical application and strong oversight. The next section will explore practical recommendations for the responsible use of AI, particularly in electoral contexts.
5. Section 4: regulatory frameworks for safe and ethical AI usage
This section synthesises key insights to propose strategies for addressing AI-driven misinformation. It highlights the need for comprehensive regulatory frameworks, enhanced digital literacy, and collaboration among governments, tech companies, and civil society to tackle misinformation effectively.
5.1. The evolving regulatory landscape for AI
The regulatory landscape for AI is evolving, necessitating clear guidelines to ensure AI systems are safe, reliable, and accountable. Regulations must balance technical safety with broader societal concerns, including the risks of disinformation and AI's impact on governance and security. In the Indo-Pacific, Big Tech's influence exacerbates vulnerabilities due to limited local resources and expertise, hindering innovation and eroding sovereignty. Therefore, equitable regulation is essential to safeguard societal interests (Bąk, 2024).
5.2. International cooperation and diverging approaches
International cooperation is critical to managing AI risks, as definitions of AI safety vary widely across countries. Regulations and approaches from regions such as the US, EU, Singapore, and China often serve as models in the Indo-Pacific (Dayrit et al., 2024; Dizon, 2024; Mohanty and Sahu, 2024). Southeast Asia adopts a flexible, business-friendly approach, guided by voluntary principles like the ASEAN Guide on AI Governance and Ethics (Haie et al., 2024). Singapore provides a key example of responsible AI use, with governance frameworks and contingency planning prioritising AI failure responses, offering valuable lessons for Southeast Asia (Soon and Quek, 2024). China's emphasis on national security and information control can shape disinformation dynamics, while stricter regulations may reduce transparency, inadvertently fostering unchecked misinformation (Guest, 2024). The EU's AI Act, while intended to protect consumers, risks stifling innovation if not carefully crafted, similar to concerns raised about the EU's broader regulatory environment that may hinder tech growth (Bradford, 2024; Graf, 2024). Key areas such as liability, privacy, intellectual property, and cybersecurity remain underdeveloped, leaving gaps that could hinder technological advancement (Novelli et al., 2024). The EU's General Data Protection Regulation (GDPR) provides a model for transparency and accountability in data processing. The US focuses more on fostering innovation than on providing regulatory clarity (Guest, 2024). A proposed UN Office for the Coordination of AI could centralise efforts to foster global collaboration and responsible AI development (Fournier-Tombs and Siddiqui, 2024).
5.3. AI in electoral contexts: necessity for comprehensive regulations
In electoral contexts, comprehensive AI regulations are essential to ensure transparency, accountability, and fairness. These regulations must address bias, establish international standards for consistency, and incorporate ongoing monitoring to preserve electoral integrity (Juneja, 2024). AI's global impact on elections requires nuanced regulations that balance its benefits with the need to protect electoral integrity (Hasan, 2024). AI tools that provide accurate, responsible information are crucial in elections.
5.4. Ensuring ethical AI use in fact-checking: the need for human oversight and transparency
AI can assist in fact-checking by analysing language patterns to identify misleading content, but its effectiveness depends on the quality of training data and algorithmic design. Biases in data or flaws in algorithms can compromise accuracy, highlighting the need for human oversight in fact-checking (Toner-Rodgers, 2024). To mitigate AI-related risks, transparency, human oversight, and certification standards are crucial. Ultimately, human involvement ensures that AI tools are used ethically and effectively; this includes maintaining oversight and human decision-making, and preparing for AI failures through staff training and contingency planning (Cortés et al., 2023).
5.5. Designing effective anti-disinformation regulations
Additionally, regulations aimed at combating disinformation must be designed carefully to avoid misuse, particularly in politically sensitive contexts like elections. Poorly crafted laws could inadvertently suppress opposition or manipulate the democratic process (Mahapatra et al., 2024). Anti-disinformation measures must, therefore, be clear, transparent, and subject to independent oversight to safeguard credibility and prevent abuse.
By adopting these recommendations, stakeholders can build resilient information ecosystems that safeguard electoral integrity and uphold public trust in democratic processes. As AI technology continues to evolve, these strategies must adapt to protect democracy in the digital age, beyond the 2025 Philippine elections.
6. Conclusions and recommendations
The challenges posed by AI-driven disinformation, particularly during elections, underscore the urgent need for a balanced approach to leveraging AI technologies. While the principles of transparency, human oversight, and robust regulatory frameworks are widely acknowledged, their practical implementation remains a critical issue. The experiences of countries like Taiwan, India, and the Philippines offer valuable insights into addressing these challenges, particularly as the Philippine midterm elections in May 2025 approach.
To safeguard democratic processes and counter the risks associated with AI-driven misinformation, a multifaceted strategy is essential:
1. Enhancing Digital Literacy
○ Widespread educational initiatives on AI and digital literacy should be prioritised, targeting younger populations who are more susceptible to misinformation. These programmes must foster critical thinking and awareness of AI-generated content.
○ Key Insight: Taiwan's grassroots digital literacy campaigns, which integrate public participation and rapid fact-checking, provide an effective model for empowering citizens to critically assess online information.
2. Developing Comprehensive Regulatory Frameworks
○ Governments must collaborate with international bodies and technology companies to establish clear and enforceable AI regulations that address safety, bias, and disinformation. Such frameworks should balance innovation with societal protections.
○ Key Insight: India's integration of AI governance into existing laws, alongside public reporting mechanisms like the DAU, highlights the importance of regulatory adaptability and inclusiveness in combating misinformation.
3. Promoting Cross-Sector Collaboration
○ Partnerships among governments, civil society, and the private sector should be strengthened to create scalable, transparent, and accountable solutions. These collaborations must prioritise resource sharing and establish standards for AI systems.
○ Key Insight: The Philippines' multi-stakeholder approach, including partnerships with international organisations and tech companies, demonstrates the value of collective action in combating disinformation effectively.
4. Strengthening Human Oversight in AI Applications
○ Human decision-making must remain central to AI systems, particularly in fact-checking and disinformation detection. Training programmes for AI developers and regulators should focus on recognising and mitigating algorithmic biases.
○ Key Insight: The Philippines faces challenges in addressing disinformation, including limited digital literacy and resource constraints. Efforts by the government, such as the proposed Deepfake Accountability and Transparency Act, combined with initiatives by civil society to enhance media literacy, highlight the critical role of human oversight in implementing AI-driven countermeasures.
5. Implementing Election-Specific Countermeasures
○ Measures such as media watermarking and authenticity standards for AI-generated content are critical during electoral processes to ensure transparency and maintain public trust.
○ Key Insight: Taiwan's real-time fact-checking systems, alongside its legal amendments targeting deepfakes, underscore the need for proactive election-specific measures to protect electoral integrity.
6. Ensuring Ethical AI Use in Politically Sensitive Contexts
○ Anti-disinformation laws must be designed with clear, transparent mechanisms for independent oversight. These frameworks should strike a balance between preventing misuse and protecting democratic freedoms.
○ Key Insight: Lessons from China's focus on controlling information illustrate the risks of overregulation, underscoring the necessity of balanced frameworks that promote both transparency and accountability.
7. Looking ahead
The evolving nature of AI-driven disinformation demands continuous refinement of strategies. Developing multilingual AI tools, particularly for underrepresented languages, will be crucial in addressing diverse contexts. Moreover, adaptive regulatory frameworks that evolve alongside technological advancements are essential to ensure resilience against emerging threats.
The Philippines’ unique context, combined with lessons from Taiwan and India, highlights the need for urgent action. By fostering collaboration, prioritising ethical AI practices, and empowering citizens through digital literacy, stakeholders can navigate the complexities of AI and misinformation. These measures will help uphold the integrity of electoral processes and sustain public trust in democratic institutions in the digital age.
Abbreviations
- AB: Assembly Bill
- AI: Artificial Intelligence
- EU: European Union
- GAI: Generative Artificial Intelligence
- LLM: Large Language Model
- LSTM: Long Short-Term Memory
- NLP: Natural Language Processing
- RAG: Retrieval-Augmented Generation
- ReAct: Reasoning and Acting
- US: United States
- UK: United Kingdom
Data availability statement
No datasets were generated or analysed during the preparation of this commentary, making data sharing inapplicable.
Author contribution
Dr. Tetiana Schipper conceptualised the study, conducted the analysis, and authored the entire commentary. She is solely responsible for the work’s content and conclusions.
Funding statement
This work did not receive funding from any public, commercial, or non-profit entities.
Competing interests
The author declares none.