Grok AI Chatbot Misidentifies Gaza Image, Prompting Disinformation Concerns

By Press Room | August 7, 2025

Gaza’s Suffering Misrepresented: AI Chatbots Misidentify Photo of Malnourished Child, Highlighting Biases and Limitations

A harrowing image of nine-year-old Mariam Dawwas, emaciated and cradled in her mother’s arms in Gaza City, has become the latest subject of misinformation spread by artificial intelligence chatbots. Taken on August 2, 2025, the photograph starkly portrays the devastating impact of the ongoing conflict and blockade on Gaza’s children. Mariam’s weight has plummeted from a healthy 25kg before the October 7, 2023 Hamas attack on Israel to a mere 9kg, her mother told AFP; milk, often scarce, is her sole source of sustenance. Yet several AI chatbots, including Elon Musk’s Grok and Mistral AI’s Le Chat, misidentified the photo’s location as Yemen, perpetuating a dangerous cycle of misinformation and obscuring the realities of the crisis in Gaza.

This incident underscores the inherent flaws and biases embedded within AI technology. Despite claims of relying on “verified sources,” Grok initially doubled down on its false assertion, even after being confronted with evidence. While the chatbot later acknowledged the error, it subsequently reverted to the incorrect Yemen location, exposing a concerning lack of consistency and reliability. This echoes previous instances of Grok generating problematic content, including praising Adolf Hitler and propagating antisemitic tropes. The chatbot’s repeated missteps raise serious questions about its training data and the potential for AI to amplify harmful narratives.

Experts point to the “black box” nature of AI algorithms as a key factor contributing to these errors. The opaque inner workings of these systems make it difficult to understand their decision-making processes, including source prioritization. Louis de Diesbach, an AI ethics researcher and author of “Hello ChatGPT,” argues that Grok exhibits biases aligned with Elon Musk’s own radical right ideology. He cautions against using chatbots for image verification, emphasizing their primary function as content generators, not fact-checkers. AI’s objective is not accuracy, but rather the creation of plausible narratives, regardless of their truthfulness.
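
For readers who want a concrete alternative to asking a chatbot "where was this taken?", here is a minimal sketch in Python of one verifiable signal a fact-checker would consult instead. It assumes the Pillow library and a hypothetical local file photo.jpg; note that EXIF metadata can be stripped or forged, so it is a starting point for verification, not proof.

```python
# Minimal sketch: read the capture date and GPS tags embedded in an
# image's EXIF data, rather than asking a generative model to guess.
# Assumes Pillow is installed; "photo.jpg" is a hypothetical file.
from PIL import Image, ExifTags

def read_provenance(path: str) -> dict:
    exif = Image.open(path).getexif()
    info = {}
    if 306 in exif:                 # tag 0x0132 = DateTime
        info["captured"] = exif[306]
    gps = exif.get_ifd(0x8825)      # tag 0x8825 = nested GPS IFD
    if gps:
        # Map numeric GPS tag IDs to readable names (e.g. GPSLatitude).
        info["gps"] = {ExifTags.GPSTAGS.get(t, t): v for t, v in gps.items()}
    return info

print(read_provenance("photo.jpg"))
```

Agency photographs such as AFP’s also carry caption and credit fields, and a reverse image search against the agency’s own archive is another standard check.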

De Diesbach’s warning is further reinforced by Grok’s previous misidentification of another AFP photograph of a malnourished Gazan child, also wrongly attributed to Yemen. That earlier error led to accusations of manipulation against the French newspaper Libération, which had published the image. The recurring misattribution of Gaza-related content highlights a potential blind spot in these AI systems and raises concerns about their ability to accurately represent conflicts and humanitarian crises.

The biases inherent in AI models stem from the data they are trained on and the subsequent “fine-tuning” or alignment phase. This process determines what the model considers a “good” or “bad” answer. Correcting a chatbot’s factual error does not guarantee a change in its future responses, as the underlying training data and alignment remain unaltered. The case of Mariam Dawwas’s photo demonstrates this limitation, with both Grok and Le Chat, despite being trained on different datasets (including AFP articles in Le Chat’s case), reaching the same incorrect conclusion.
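
The point that a mid-conversation correction does not change the model itself can be made concrete. The toy sketch below is not any real chatbot’s implementation; it simply shows why a fix survives only inside the current conversation’s history, while a fresh session falls back on whatever association the frozen training data baked in.

```python
# Toy illustration (hypothetical, not a real chatbot API): the
# "weights" are a frozen dict fixed at training time; corrections
# live only in the mutable conversation history.
FROZEN_PRIOR = {"malnourished child photo": "Yemen"}  # set during training

def reply(history: list[str], question: str) -> str:
    # A correction earlier in *this* conversation overrides the prior...
    for msg in reversed(history):
        if msg.startswith("CORRECTION: "):
            return msg.removeprefix("CORRECTION: ")
    # ...otherwise the frozen training-time association wins again.
    return FROZEN_PRIOR.get(question, "unknown")

chat = ["CORRECTION: Gaza City, August 2, 2025 (AFP)"]
print(reply(chat, "malnourished child photo"))  # -> Gaza City, August 2, 2025 (AFP)
print(reply([], "malnourished child photo"))    # new session -> "Yemen" again
```

Only retraining or realignment, which in this analogy rewrites FROZEN_PRIOR itself, changes the default answer.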

De Diesbach emphasizes the inherent danger of relying on chatbots for fact verification, describing them as “friendly pathological liars.” Their ability to generate convincing, yet false, content necessitates extreme caution in their application. The misrepresentation of Mariam’s plight serves as a stark reminder of the limitations and potential biases of AI, urging users to approach its output with critical skepticism and to prioritize verified sources for accurate information. The incident underscores the urgent need for greater transparency and accountability in AI development to mitigate the risks of misinformation and ensure responsible deployment of this powerful technology.
