Few-Shot Fake News Detection through Synergistic Adversarial and Contrastive Self-Supervised Learning with a Focus on Truth Dissemination Consistency

By Press Room, January 28, 2025

Revolutionizing Fake News Detection in the Data-Scarce Era: Introducing DetectYSF

The proliferation of fake news poses a significant threat to societal trust and informed decision-making. Traditional fake news detection methods often struggle in data-scarce environments, where labeled data is limited. This article delves into the groundbreaking DetectYSF model, a novel approach that leverages the power of pre-trained language models (PLMs) and innovative learning strategies to achieve superior fake news detection accuracy, even with minimal labeled data.

DetectYSF was rigorously evaluated on three established real-world datasets: PolitiFact and GossipCop (both part of the FakeNewsNet benchmark) and FANG. These datasets pair verified news articles with social media engagement data from Twitter, mirroring real-world news dissemination scenarios. The model's performance was assessed under a few-shot learning setting with varying amounts of labeled data (16 to 128 shots), confirming its effectiveness in resource-constrained situations.
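The k-shot setup described above (k labeled examples per class) can be sketched as follows. The sampling helper below is illustrative only, not part of the published evaluation code:

```python
import random

def sample_k_shot(dataset, k, seed=0):
    """Return k labeled examples per class from dataset: [(features, label), ...]."""
    rng = random.Random(seed)
    by_label = {}
    for item in dataset:
        by_label.setdefault(item[1], []).append(item)
    subset = []
    for items in by_label.values():
        subset.extend(rng.sample(items, k))
    return subset

# Toy corpus: 100 "real" (label 0) and 100 "fake" (label 1) articles.
corpus = [([float(i)], i % 2) for i in range(200)]
shot_16 = sample_k_shot(corpus, 16)   # a 16-shot training split
```

At 16 shots the model sees only 32 labeled articles in total, which is what makes the low-resource results notable.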

To benchmark DetectYSF’s performance, it was compared against a spectrum of state-of-the-art (SOTA) baselines, categorized into "Train-from-Scratch" and "PLM-based" methods. The "Train-from-Scratch" methods employed conventional machine learning techniques like graph convolutional networks (GCNs) and graph attention networks (GATs), while the "PLM-based" methods leveraged the inherent knowledge of pre-trained language models like BERT and RoBERTa. DetectYSF consistently outperformed all baselines across all datasets and shot settings, demonstrating its superior accuracy and establishing its status as a leading solution for few-shot fake news detection.

The remarkable performance of DetectYSF stems from its innovative integration of three core strategies: sentence representation contrastive learning, sample-level adversarial learning, and veracity feature fusion based on neighborhood dissemination sub-graphs. Contrastive learning enhances model robustness by maximizing similarity between representations of similar sentences while minimizing similarity between dissimilar ones. Adversarial learning further strengthens the model by training it against adversarially generated examples, making it more resilient to deceptive inputs. Finally, veracity feature fusion leverages social context by incorporating veracity features from neighboring news nodes, aligning with the "news veracity dissemination consistency" theory, which posits that news articles shared by the same users are likely to possess similar veracity characteristics.
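As a concrete illustration of the contrastive objective, here is a minimal InfoNCE-style sketch using cosine similarity. The temperature value and toy vectors are assumptions for illustration, not the authors' settings:

```python
import math

def cosine_sim(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    # InfoNCE-style loss: pull the positive pair together,
    # push the anchor away from all negatives.
    pos = math.exp(cosine_sim(anchor, positive) / temperature)
    neg = sum(math.exp(cosine_sim(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

# Similar sentence representations should incur a lower loss than dissimilar ones.
a = [1.0, 0.0, 0.5]
p = [0.9, 0.1, 0.4]      # near-duplicate of the anchor
n = [[-1.0, 0.2, -0.5]]  # dissimilar negative
loss_good = contrastive_loss(a, p, n)
loss_bad = contrastive_loss(a, n[0], [p])  # swapped roles: "positive" is dissimilar
```

Minimizing this loss is what drives similar sentences toward similar representations and dissimilar ones apart.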

Ablation studies, involving the systematic removal of each component, confirmed the crucial role of each strategy. Removing contrastive learning led to substantial performance drops, particularly in low-shot scenarios, highlighting its importance in maximizing limited labeled data. Similarly, removing adversarial learning decreased accuracy, emphasizing its role in handling noisy and misleading information. The most significant performance decline, however, resulted from removing veracity feature fusion, underscoring the value of incorporating social context for enhanced veracity predictions.
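The paper's adversarial components are model-specific, but the underlying idea — training against inputs perturbed to increase the loss — can be sketched with a one-step FGSM-style perturbation on a toy logistic classifier. Everything below is a generic illustration, not DetectYSF's actual engines:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(x, y, w, b):
    # Binary cross-entropy of a logistic classifier on a single sample.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))

def fgsm_perturb(x, y, w, b, eps=0.1):
    # One-step adversarial example: move each input coordinate in the
    # direction that increases the loss. For a logistic model the input
    # gradient of the loss is (p - y) * w.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0
x_adv = fgsm_perturb(x, y, w, b)   # harder version of the same sample
```

Training on such perturbed samples alongside clean ones is what makes a detector more resilient to deceptive inputs.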

Further investigations delved into the specific design choices within each component. In contrastive learning, cosine similarity emerged as the superior distance metric, consistently outperforming L1 and L2 distances in capturing semantic relationships between sentences. Within adversarial learning, the combination of Noise-MLP Engine and NegSeq-LMEncoder Engine proved most effective, with the NegSeq-LMEncoder demonstrating a greater contribution to robustness and generalization. Furthermore, the "feature matching" objective significantly enhanced the adversarial learning process, as evidenced by the decreased performance when it was removed.
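One intuition for why cosine similarity suits sentence embeddings: it compares direction only, so two representations that differ merely in magnitude are treated as identical, whereas L2 distance penalizes the scale gap. A small illustration (the vectors are made up for demonstration):

```python
import math

def cosine_sim(u, v):
    # Direction-only similarity: invariant to vector magnitude.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def l2_dist(u, v):
    # Euclidean distance: sensitive to vector magnitude.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

u = [1.0, 2.0, 3.0]
v = [2.0, 4.0, 6.0]        # same direction, twice the magnitude
cos_uv = cosine_sim(u, v)  # 1.0: identical under cosine
l2_uv = l2_dist(u, v)      # nonzero: L2 sees a large gap
```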

Lastly, the veracity feature fusion component benefited significantly from incorporating trustworthiness-driven adaptive weighted feature fusion. This approach, based on the frequency of news reposting, outperformed simpler averaging methods, reaffirming the importance of considering the strength of social connections. Experiments also revealed the optimal refinement level (μ = 0.5) for integrating neighbor node information, balancing the original news features with social context information.
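A minimal sketch of such weighted fusion, assuming repost counts act as neighbor weights and μ blends an item's own features with the neighbor aggregate (the paper's exact weighting scheme may differ):

```python
def fuse_veracity(own, neighbors, repost_counts, mu=0.5):
    # Weight each neighbor's veracity feature vector by its share of reposts
    # (a proxy for connection strength), then blend the aggregate with the
    # item's own features at refinement level mu.
    total = float(sum(repost_counts))
    agg = [0.0] * len(own)
    for feats, count in zip(neighbors, repost_counts):
        for i, f in enumerate(feats):
            agg[i] += (count / total) * f
    return [mu * o + (1.0 - mu) * a for o, a in zip(own, agg)]

# An item with neutral features, surrounded by two neighbors; the heavily
# reposted neighbor dominates the social-context signal.
fused = fuse_veracity(own=[0.0, 0.0],
                      neighbors=[[1.0, 0.0], [0.0, 1.0]],
                      repost_counts=[3, 1])
```

With μ = 0.5 the result splits evenly between the item's own features and the repost-weighted neighborhood signal.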

In conclusion, DetectYSF stands as a significant advancement in fake news detection, particularly within low-resource environments. By integrating PLMs with innovative learning strategies and leveraging social context, DetectYSF achieves superior accuracy, demonstrating its potential to combat the spread of misinformation and bolster informed decision-making in the digital age. Its robustness and adaptability across various datasets and few-shot settings position it as a state-of-the-art solution, paving the way for more effective fake news detection in the face of ever-evolving online misinformation campaigns. Further research could explore the application of DetectYSF to other domains and investigate methods for incorporating additional contextual information, such as user profiles and historical posting patterns, for even more refined veracity predictions.
