
Automated Detection of Falsified Video Content

By Press Room | September 23, 2025

Combating the Deluge of Deepfakes: UOC Researchers Develop AI-Powered Tool to Detect Fake News

In the digital age, the proliferation of fake news, fueled by sophisticated photo and video editing tools and the rise of artificial intelligence, poses a significant threat to the integrity of information. Deepfakes, manipulated audiovisual content that appears remarkably real, have become a particularly insidious weapon in the arsenal of misinformation. Researchers at the Universitat Oberta de Catalunya (UOC) are tackling this challenge head-on with the DISSIMILAR project, an international initiative aimed at developing cutting-edge technology to automatically distinguish between authentic and doctored multimedia content. This project, a collaboration between the UOC, the Warsaw University of Technology, and Okayama University, seeks to empower both content creators and social media users with the tools to combat the spread of fake news.

DISSIMILAR operates on two fronts. Firstly, it provides content creators with watermarking tools to embed imperceptible markers within their original creations. These watermarks serve as digital fingerprints, allowing for easy detection of any subsequent modifications. Secondly, the project equips social media users with advanced signal processing and machine learning algorithms to identify manipulated digital content. This two-pronged approach aims to create a more transparent and trustworthy online environment. Crucially, the project emphasizes a user-centric approach, integrating the cultural dimension and end-user perspectives throughout the development and usability testing phases.
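
To make the two-pronged idea concrete, here is a minimal Python sketch of how a pre-evaluation report combining both signals might be structured. The `PreEvaluation` class, its field names and its 0.5 threshold are illustrative assumptions for this article, not part of the DISSIMILAR software or its API.

```python
# A hypothetical pre-evaluation report combining the two signals described above:
# a creator-side watermark check and a user-side forensic score. The class name,
# fields and threshold are illustrative assumptions, not the project's API.
from dataclasses import dataclass

@dataclass
class PreEvaluation:
    watermark_present: bool   # was a creator watermark found in the file?
    watermark_intact: bool    # does the embedded fingerprint still verify?
    forensic_score: float     # 0.0 (looks authentic) .. 1.0 (looks manipulated)

    def verdict(self) -> str:
        """Turn both signals into a hint for the user, not a final ruling."""
        if self.watermark_present and not self.watermark_intact:
            return "content was modified after publication"
        if self.forensic_score > 0.5:
            return "possible manipulation detected"
        return "no evidence of manipulation found"

# Example: the kind of summary a user-facing tool could show for one video frame.
report = PreEvaluation(watermark_present=True, watermark_intact=False, forensic_score=0.8)
print(report.verdict())   # -> "content was modified after publication"
```

The point of such a structure is that the tool reports evidence and leaves the final judgement to the user, in line with the project's user-centric stance.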

Existing fake news detection methods fall into two categories: automated tools based on machine learning, which are still largely at the prototype stage, and human verification of the kind used by platforms such as Facebook and Twitter. According to Professor David Megías, lead researcher of the KISON group at the UOC, the latter approach is susceptible to bias and potential censorship. DISSIMILAR proposes a more objective alternative: the technology performs a pre-evaluation of the content, and users then make their own informed decisions about its credibility.

The project is not searching for a single magic bullet; it pursues a comprehensive strategy that combines several tools: digital watermarking, digital content forensics based on signal processing, and machine learning. Digital watermarking, a form of data hiding, embeds imperceptible data within the original file so that the file can be verified automatically. These watermarks act as authentication markers, tracing the origin of the content and flagging any subsequent alteration.
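
The toy example below illustrates the general principle of watermark-based verification using a naive least-significant-bit scheme on a grayscale image held as a NumPy array. It is a hedged sketch of the idea only; it is not the watermarking algorithm developed by the project, which must also remain imperceptible and survive compression and editing of real photos and video.

```python
# A toy least-significant-bit (LSB) watermark, assuming an 8-bit grayscale image.
# It only illustrates the principle of embedding and re-verifying imperceptible
# data; it is not DISSIMILAR's scheme.
import numpy as np

def embed_watermark(image: np.ndarray, mark: np.ndarray) -> np.ndarray:
    """Hide the binary mark in the least significant bit of the first pixels."""
    flat = image.flatten()                      # flatten() returns a copy
    bits = mark.flatten() & np.uint8(1)
    flat[: bits.size] = (flat[: bits.size] & np.uint8(0xFE)) | bits
    return flat.reshape(image.shape)

def watermark_intact(image: np.ndarray, mark: np.ndarray) -> bool:
    """Check whether the embedded mark can still be read back unchanged."""
    bits = mark.flatten() & np.uint8(1)
    recovered = image.flatten()[: bits.size] & np.uint8(1)
    return bool(np.array_equal(recovered, bits))

# Usage: embed, verify, simulate an edit, verify again.
rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=128, dtype=np.uint8)   # 128-bit fingerprint

marked = embed_watermark(original, mark)
print(watermark_intact(marked, mark))   # True: no modification detected
marked[0, :16] ^= 1                     # flip low-order bits, as an edit would
print(watermark_intact(marked, mark))   # False: alteration is flagged
```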

Digital content forensics techniques complement the watermarks by analyzing the intrinsic distortions introduced during the creation or modification of audiovisual files. These distortions, such as sensor noise or optical artifacts, can be detected using machine learning models. By combining watermarking, forensics analysis, and machine learning, DISSIMILAR aims to achieve more accurate and robust detection compared to singular solutions.
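
As a rough illustration of the forensic idea, the sketch below extracts a high-frequency noise residual from an image and summarises it per block; spliced or synthetic regions often show noise statistics inconsistent with the rest of the frame. Every function name and statistic here is an illustrative assumption rather than DISSIMILAR's actual pipeline, and a full system would feed such residual features to a trained machine learning model instead of printing them.

```python
# A toy sketch of noise-residual forensics: isolate the high-frequency residual
# (what remains after removing the smooth image content) and measure how its
# energy varies across blocks. Illustrative assumptions only.
import numpy as np
from scipy.ndimage import median_filter

def noise_residual(image: np.ndarray) -> np.ndarray:
    """High-pass residual: the image minus a median-smoothed version of itself."""
    smooth = median_filter(image.astype(np.float64), size=3)
    return image.astype(np.float64) - smooth

def residual_features(image: np.ndarray) -> np.ndarray:
    """Per-block residual energy; pasted regions often break its uniformity."""
    res = noise_residual(image)
    blocks = res.reshape(8, res.shape[0] // 8, 8, res.shape[1] // 8)
    return blocks.std(axis=(1, 3)).flatten()   # one noise estimate per grid cell

# Usage: compare a pristine noisy image against one with a smooth pasted patch.
rng = np.random.default_rng(1)
pristine = rng.normal(128, 10, size=(64, 64))
spliced = pristine.copy()
spliced[8:24, 8:24] = 128.0                    # pasted region with no sensor noise

print(residual_features(pristine).std())       # relatively uniform noise levels
print(residual_features(spliced).std())        # larger spread: inconsistent noise
```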

To ensure its effectiveness and cultural relevance, DISSIMILAR incorporates a holistic approach, studying user perceptions and cultural contexts surrounding fake news. User studies will be conducted in Spain, Poland, and Japan, providing valuable cross-cultural insights into how fake news is perceived and disseminated. This research will inform the design and implementation of the tools, ensuring their usability and effectiveness across diverse user groups.

DISSIMILAR represents a significant step towards mitigating the harmful effects of fake news. By combining advanced technological solutions with a user-centric design philosophy, the project aims to empower individuals to navigate the digital landscape with greater discernment, fostering a more informed and resilient online community. The project’s international scope, involving researchers from Europe and Asia, further strengthens its potential to address the global challenge of misinformation effectively. The development of user-friendly tools that can automatically detect manipulations in multimedia content has the potential to revolutionize how we consume and interact with information online, ultimately contributing to a more trustworthy online environment.
