Gaza’s Suffering Misrepresented: AI Chatbots Misidentify Photo of Malnourished Child, Highlighting Biases and Limitations
A harrowing image of nine-year-old Mariam Dawwas, emaciated and cradled in her mother’s arms in Gaza City, has become the latest subject of misinformation spread by artificial intelligence chatbots. Taken on August 2, 2025, the photograph starkly portrays the devastating impact of the ongoing conflict and blockade on Gaza’s children. Mariam’s weight has plummeted from a healthy 25kg before the October 7, 2023 Hamas attack on Israel to a mere 9kg, her mother told AFP. Milk, often scarce, is her sole source of sustenance. Yet several AI chatbots, including Elon Musk’s Grok and Mistral AI’s Le Chat, misidentified the photo’s location as Yemen, perpetuating a dangerous cycle of misinformation and obscuring the realities of the crisis in Gaza.
This incident underscores the inherent flaws and biases embedded within AI technology. Despite claims of relying on “verified sources,” Grok initially doubled down on its false assertion, even after being confronted with evidence. While the chatbot later acknowledged the error, it subsequently reverted to the incorrect Yemen location, exposing a concerning lack of consistency and reliability. This echoes previous instances of Grok generating problematic content, including praising Adolf Hitler and propagating antisemitic tropes. The chatbot’s repeated missteps raise serious questions about its training data and the potential for AI to amplify harmful narratives.
Experts point to the “black box” nature of AI algorithms as a key factor behind such errors. The opaque inner workings of these systems make it difficult to understand how they reach decisions, including which sources they prioritize. Louis de Diesbach, an AI ethics researcher and author of “Hello ChatGPT,” argues that Grok exhibits biases closely aligned with the radical-right ideology promoted by Elon Musk. He cautions against using chatbots to verify images, emphasizing that they are content generators, not fact-checkers: their objective is not accuracy but the production of plausible content, regardless of its truthfulness.
Diesbach’s warning is reinforced by Grok’s earlier misidentification of another AFP photograph of a malnourished Gazan child, also wrongly attributed to Yemen. That error led to accusations of manipulation against the French newspaper Libération, which had published the image. The recurring misattribution of Gaza-related content highlights a potential blind spot in these AI systems and raises concerns about their ability to accurately represent conflicts and humanitarian crises.
The biases inherent in AI models stem from the data they are trained on and from the subsequent “fine-tuning” or alignment phase, which determines what the model treats as a “good” or “bad” answer. Correcting a chatbot’s factual error in conversation does not guarantee a change in its future responses, because the underlying training data and alignment remain unaltered. The case of Mariam Dawwas’s photo demonstrates this limitation: Grok and Le Chat, despite being trained on different datasets (Le Chat’s include AFP articles through Mistral AI’s partnership with the agency), reached the same incorrect conclusion.
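To picture why an in-conversation correction does not stick, consider the minimal sketch below. It is a deliberately simplified, hypothetical illustration in Python, not a description of how Grok or Le Chat actually work: the frozen association stands in for whatever a model absorbed during training and alignment, while a user’s correction lives only in the current session’s context window.

```python
# Hypothetical toy model, for illustration only: real chatbots do not work
# from a lookup table, but the separation between frozen "knowledge" and a
# per-session context window is the point being made above.

class FrozenChatbot:
    def __init__(self):
        # Stands in for associations fixed during training and alignment.
        self.learned_association = {"photo of malnourished child": "Yemen"}

    def answer(self, question: str, context: list[str]) -> str:
        # The conversation history can override the frozen association,
        # but only while the correction is still in the context window.
        for turn in reversed(context):
            if "taken in Gaza" in turn:
                return "Gaza"
        return self.learned_association["photo of malnourished child"]


bot = FrozenChatbot()

session_1: list[str] = []
print(bot.answer("Where was this photo taken?", session_1))  # Yemen (wrong)

session_1.append("User: AFP confirms the photo was taken in Gaza.")
print(bot.answer("Where was this photo taken?", session_1))  # Gaza (corrected in-session)

session_2: list[str] = []  # new conversation: the correction is gone, the frozen association is not
print(bot.answer("Where was this photo taken?", session_2))  # Yemen again
```

In this toy picture, only changing the frozen association itself, the equivalent of retraining or re-aligning the model, would alter the default answer a fresh session receives.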
Diesbach emphasizes the inherent danger of relying on chatbots for fact verification, describing them as “friendly pathological liars.” Their ability to generate convincing, yet false, content necessitates extreme caution in their application. The misrepresentation of Mariam’s plight serves as a stark reminder of the limitations and potential biases of AI, urging users to approach their output with critical skepticism and to prioritize verified sources for accurate information. The incident underscores the urgent need for greater transparency and accountability in AI development to mitigate the risks of misinformation and ensure responsible deployment of this powerful technology.