By Wei-Ping Li, Ph.D.

The first half of 2025 appeared more eventful than previous years, notably because multiple countries underwent pivotal elections in 2024 that led to significant political shifts. Moreover, driven by advancements in technologies such as AI, the spread of disinformation and misinformation has intensified alongside notable geopolitical events and conflicts. In this analysis, we explore the narratives and trends surrounding false information in the Sinosphere over the past six months, with a special focus on Taiwan, and highlight the challenges faced by the fact-checking community.

From January to June 2025, alongside Russia’s invasion of Ukraine and the ongoing Gaza War, another regional conflict erupted between India and Pakistan. In the United States, the second-term Trump administration introduced dramatic changes to American foreign and domestic policies that sent shockwaves throughout the world, including significant cuts in federal funding, a restructuring of the U.S. federal government, strict enforcement of deportation policies, and the imposition of high tariffs on a wide range of countries, including U.S. allies. These changes also impacted relationships between Taiwan, the United States, and other nations. 

Meanwhile, in addition to the ongoing threat from China, which has stepped up large-scale military drills around the island, Taiwan has faced domestic challenges stemming from the new power dynamics following the election of Lai Ching-te as president and a new legislature in which the Kuomintang (KMT, also known as the Nationalist Party of China) holds more seats than the ruling party.

The chaotic political landscape intertwines with the debate surrounding Taiwanese identity and how to coexist with individuals from China, amid escalating hostility from the Chinese government. Beyond political and social upheavals, AI technology has significantly expedited the production and dissemination of information, both positive and negative.

Prevalent narratives 

Amid the increasingly chaotic information environment, we identify key themes and narratives of false information that have impacted the Sinosphere media landscape and shaped both political and social discourse. In terms of categories of false narratives and techniques, the following stood out:

  1. False claims misrepresenting or fabricating policies to trigger anxieties during controversial events or crises

For example, when Taiwanese President Lai Ching-te called China a “foreign hostile force” and vowed to strengthen Taiwan’s military, disinformation proliferated that inaccurately described military laws and misstated the scope of the military draft to stoke anxiety among the Taiwanese public.

During the controversy over the Taiwanese government’s deportation of Chinese spouses of Taiwanese citizens, a false claim also circulated that Australia had revoked the permanent residency of divorced Chinese spouses, cited to support the Taiwanese government’s decision to deport them.

  2. Inaccurate characterizations of political parties’ legislative proposals

These types of claims were most frequently seen during budget reviews and legislative sessions, especially this year, when KMT lawmakers proposed significant budget cuts affecting a wide range of areas, including national defense. The complexity of budget matters made them easy to misrepresent, creating opportunities for malicious actors to exploit the knowledge gap and engage in wordplay that misled audiences. One example was a false claim about how the budget cuts would affect the passport application process in cities in central Taiwan.

  3. Fabricated events that scared or appealed to the Taiwanese

Fabricated events frequently emerged in false claims aimed at instigating anxieties within Taiwanese society, particularly through disinformation asserting that the Chinese military appeared in Taiwan’s territory or that the Taiwanese military was inadequate. Recently, an influx of AI-generated videos has been posted on YouTube, depicting fabricated international events showcasing countries endorsing Taiwan’s entry into global organizations, alongside narratives of Taiwan receiving accolades from foreigners and even Chinese individuals and visitors.

  4. Incorporating messages alongside significant events to promote ideology or propaganda

This type of narrative exploited international events to generate false information, embedding messages that malicious actors sought to disseminate. For instance, during the Los Angeles wildfires, numerous YouTube channels shared AI-generated videos featuring Elon Musk. In these clips, a fake Musk lauded China’s technological advancement and compared the responses of China and the United States to natural disasters, including the wildfires.

In another case, discussions surrounding Diversity, Equity, and Inclusion (DEI) policies in the United States gained attention in the Chinese-language world and became a target of false information. A prevalent falsehood within the Chinese-speaking community claimed that DEI considerations drove the hiring of L.A. Fire Department Chief Kristin Crowley.

A Facebook post shared an AI-generated video in which Musk compared California wildfire relief with rescue actions in China. In this video, Musk praised China's use of advanced technology to help detect earthquakes, take preemptive actions, and execute rescue plans.
  5. False claims about COVID-19 and vaccines

False claims about new outbreaks of mutant viruses or bacteria originating from China that could cause pneumonia have been a recurring theme in false information spread in the Chinese language. Another notable type of this false information relates to COVID-19 vaccines. With anti-vaccine activist Robert F. Kennedy Jr. appointed as the U.S. health secretary in Trump’s administration, disinformation about the COVID-19 vaccine being lethal surged again. Examples included claims such as “Bill Gates, Anthony Fauci, and those who promoted mRNA will face the death penalty” and “the U.S. FDA admitted that mRNA vaccines can lead to cancer.”

Trends

Among the false information spread during the first half of 2025, we also identified several notable trends:

  1. The proliferation of AI-generated videos

As experts have warned, AI technology has become increasingly prevalent in creating false information, particularly in the form of visuals. AI has been used not only in creating scripts, narrating stories with voices, or presenting in the form of AI anchors, but also in impersonating real people, such as Elon Musk, or even creating fabricated scenes. Furthermore, due to AI’s ability to rapidly produce content, producers can create multiple similar pieces of content with slight variations in a short time and distribute these videos across various YouTube channels and social media platforms.

The subject matter of these AI-generated videos goes beyond mere narration or commentary on current events. They are not just created for information influence campaigns or propaganda. Many also deliver “health information” targeting senior audiences. However, fact-checkers have found that a significant number of these “health tips” are inaccurate. Despite this, these video channels have attracted a large subscriber base and a significant number of views on YouTube, generating revenue for the creators behind these misleading videos.

Screenshots of AI-generated videos on YouTube channels providing health tips for seniors. Some of the videos have been found to contain inaccurate information.
  2. Increasingly diversified and multi-directional dissemination routes

With more people joining the online influencer industry, false information has proliferated, and its dissemination has become quicker and more widespread. Moreover, while micro-influencers are emerging as new nodes in networks of false information, some established influencers, whether politicians, traditional talk-show guests, or online macro-influencers with millions of followers, are exerting their influence in the media ecosystem and amplifying false information.

For example, several familiar faces on Taiwan’s talk shows periodically criticized Taiwan’s military as weak while also criticizing President Lai’s new policy aimed at strengthening it. Around the same time, some TikTok micro-influencers echoed these statements. More interestingly, the rhetoric of these micro-influencers sounded remarkably similar, suggesting that they may have been following the same script.

  3. The manipulation of emotions and identities

Propaganda and disinformation gain strength when they effectively address identity issues and evoke emotions within audiences. Over the past six months, the spread of problematic information has shown that malicious actors, whether they are motivated by profit or information influence campaigns, have identified the preferences of both Taiwanese and Chinese speakers living outside of China. Additionally, they have keenly recognized the emotions and concerns of various groups regarding different issues.

For instance, some narratives have exploited the Taiwanese people’s fears related to national security, military service, pandemics, and economic changes. Others have taken advantage of their hopes for support and recognition from the international community. Furthermore, some messages have addressed the sensitive identity issue of “who counts as Taiwanese” within society.

Challenges for Taiwanese society and fact-checkers

The analysis of narratives and trends highlights an increasingly difficult battle for those who value authentic and quality information. Here, we aim to outline the challenges and suggest potential solutions to address the situation:

  1. Disinformation and propaganda in the form of AI-generated videos are here to stay. They will certainly proliferate and become harder to distinguish from real footage. Society needs to raise awareness of AI-generated false information and promote AI literacy. While few reliable AI-detection tools are available to the public, people should at least scrutinize information shared online and know where to find trusted resources to consult.
  2. The public should also be aware of the types of emotions and issues that false information exploits, as well as how actors produce false information by taking advantage of them. This knowledge could help the public stay alert to false information and identify suspicious pieces.
  3. The ecosystem of production and dissemination of false information has continued to expand, with more people joining in as content production tools have become increasingly accessible. As the production and dissemination of false information have become more diverse and complex, the detection and deterrence of false information should be addressed through a systematic approach, identifying and tackling different layers of producers and disseminators with diverse motivations.
  4. Topics that require specialized knowledge and expertise have increasingly become targets of disinformation. Fact-checkers face greater challenges when addressing complex issues such as government budgets, national defense, and international geopolitics. To effectively combat the intricacy, rapid spread, and sheer volume of false information, fact-checking communities need to foster closer collaboration with experts across various fields.

Wei-Ping Li is a research fellow at the Taiwan FactCheck Center.