
Introducing the New Age of AI-Manufactured Reality

  • Writer: Tine Scheffelmeier
  • 5 days ago
  • 4 min read
Photo by Robynne O on Unsplash

Written by Darwish Thajudeen

 

The information ecosystem faces an unprecedented crisis. What began as occasional rumors and poorly edited posts has evolved into an industrial system of AI-generated synthetic content: voices indistinguishable from real speakers, deepfakes so convincing that even experts struggle to detect them, and fabricated articles that mimic legitimate journalism. Because this topic is highly relevant to our community and too broad for a single newsletter, we will explore many of its aspects in future editions; creating awareness is our first line of defense against this change. This newsletter introduces the world of AI-generated misinformation and deepfakes and sets the stage for deeper examination in the editions to come.


  1. The Transformation 

A critical shift has occurred in how false information spreads. Ten years ago, misinformation relied primarily on human effort—manually editing photos, crudely splicing videos, or writing obviously flawed articles. Today, machines generate sophisticated fake content at scale, requiring specialized technical knowledge to identify (Morris, 2024). 

Breakthrough technologies in the field of AI have substantially lowered barriers to creating hyper-realistic audio-visual forgeries. The accessibility revolution has democratized deepfake production—what once required specialized expertise now requires only a smartphone and internet access (Morris, 2024). 

The shift is quantifiable. AI-driven fake news sites grew tenfold in a single year, flooding the information ecosystem with low-cost, algorithmically generated propaganda (Virginia Tech News, 2024). AI voice synthesis now produces voices indistinguishable from authentic speakers (TechXplore, 2025), and research demonstrates that ChatGPT can generate completely fabricated scientific articles with convincing citations and embedded errors (Májovský et al., 2023).


  2. The Three Pillars of Crisis


2.1 Scale 

Current threat parameters are accelerating exponentially. Approximately 500,000 deepfake videos were shared on social media in 2023, with projections reaching 8 million by 2025 (DeepStrike, 2025). AI-generated fake news sites now number over 1,200, with deepfake content growing 5× in the past two years (DeepStrike, 2025). 

Beyond volume, the financial impact underscores the urgency. Fraud losses from generative AI are expected to rise from USD 12.3 billion in 2023 to USD 40 billion by 2027, a compound annual growth rate of roughly 32% (KeepNet Labs, 2025).
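As a quick sanity check on the growth figure above, the standard compound-annual-growth-rate formula can be applied to the cited dollar endpoints. Note that the implied rate depends on whether the projection window is counted as three or four years; this is only an illustrative sketch, not part of the cited report:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate: (end / start) ** (1 / years) - 1."""
    return (end / start) ** (1 / years) - 1

# Fraud-loss endpoints cited in the text (USD billions).
start, end = 12.3, 40.0

print(f"3-year window: {cagr(start, end, 3):.1%}")  # ~48.2%
print(f"4-year window: {cagr(start, end, 4):.1%}")  # ~34.3%
```

The four-year window comes closest to the roughly 32% rate reported, which is why the baseline year matters when quoting the statistic.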


2.2 Speed 

Malicious actors have discovered that AI-driven disinformation thrives on velocity. Fact-checking response times improved from 45 minutes to approximately 15 minutes, yet this remains problematic because false news achieves massive distribution within minutes (DeepStrike, 2025). Adversaries deliberately capitalize on breaking news moments to inject falsehoods before facts are verified.


2.3 Believability 

Perhaps most troublingly, human perception can no longer reliably distinguish synthetic from authentic content. Research shows humans correctly identify high-quality deepfake videos only approximately 24.5% of the time (Thies et al., 2019). A Nature study examining deepfake detection found that between 27% and over 50% of survey respondents were unable to correctly identify video authenticity, with adults particularly vulnerable (Nature, 2023).


  3. Real-World Consequences: The 2024 Electoral Evidence


The 2024 global election cycle provided clear evidence that AI-driven deepfakes are operational tools rather than theoretical risks. Over 130 deepfakes were identified in elections worldwide since September 2023 (NPR, 2024). In the United States, fabricated robocalls featured AI-generated voice impersonations urging voters to abstain from voting, while deepfake audio clips circulated showing political figures speaking incoherently (NPR, 2024). 

State-sponsored deployment confirmed the threat's escalation. Russia, China, and Iran all demonstrated growing ability to create and disseminate AI-generated media during the 2024 U.S. presidential campaign (NPR, 2024). Analysis revealed AI was employed in more than 80% of countries with observable electoral processes (CIGI Online, 2025). 

Romania's 2024 presidential election results were annulled after evidence showed AI-powered interference using manipulated videos—a watershed moment demonstrating that deepfakes now threaten democratic outcomes rather than merely influencing discourse (CIGI Online, 2025).


  4. The Trust Collapse


Public confidence in media and institutions is declining measurably. The 2025 Edelman Trust Barometer found 70% of respondents worry that journalists purposely mislead people (Fotoware, 2025). The Reuters Institute Digital News Report 2025 found only 40% of respondents globally maintain trust in news media, and 58% worry about content authenticity (Reuters Institute, 2025; Fotoware, 2025). 

More profoundly, deepfakes train human perception to doubt everything. When people encounter convincing fabrications, credibility assessments drop significantly across all audio-visual formats—even authentic content subsequently appears less trustworthy (SAGE Publishing, 2025). This creates generalized skepticism undermining information ecosystems themselves.


  5. The Regulatory Awakening


Governments have begun responding with transparency requirements for AI-generated content. The United States enacted the TAKE IT DOWN Act in May 2025, the first federal law directly restricting harmful deepfakes (Regulaforensics, 2025). The European Union AI Act requires all companies to explicitly label AI-generated content by August 2, 2026 (Regulaforensics, 2025). 

China implemented the most comprehensive regime: the Measures for Labeling of AI-Generated Synthetic Content, effective September 1, 2025, requiring both explicit visible labels and implicit embedded metadata (China Law Translate, 2025; Inside Privacy, 2025).


  6. Why This Matters


We stand at an inflection point. AI has industrialized misinformation while simultaneously eroding the social infrastructure that counters it: media literacy, institutional trust, and research capacity. But the same technology can also be our weapon against these bad actors. 

Institutions, social media platforms, government organizations, and many others are adopting AI to detect and counter this harm. It is our duty as a community to understand these efforts and support them. We see this newsletter as a first step toward understanding the dangers and building the awareness needed to protect our society. To dive deeper and find out more, follow us on our social media pages and support our cause by sharing this newsletter. 


With heartfelt gratitude,

MI4People Team 


 
 
 
