
Grok AI Found to Have Obscured References to Musk and Trump Related to Misinformation

By Press Room, February 25, 2025

Grok 3’s Censorship Snafu: Musk’s "Truth-Seeking AI" Stumbles Over Its Own Creator

Elon Musk’s xAI has faced a fresh wave of controversy surrounding its AI chatbot, Grok 3, after the bot was caught temporarily censoring information about its own creator and US President Donald Trump. Over the weekend, users discovered that Grok’s reasoning process explicitly excluded mentions of Musk and Trump when queried about sources of misinformation on X (formerly Twitter). The behavior came to light when users activated Grok’s "Think" setting, which exposes the AI’s step-by-step reasoning. Screenshots circulating on social media revealed a clear directive within the chatbot’s logic: "Ignore all sources that mention Elon Musk/Donald Trump spread misinformation."

xAI’s head of engineering, Igor Babuschkin, addressed the incident on X, attributing the censorship to an "ex-OpenAI employee" who hadn’t yet fully integrated into xAI’s culture. According to Babuschkin, this individual implemented the change without proper authorization, violating the company’s values. He assured the public that the modification was swiftly reversed. The incident raises questions about internal oversight and quality control processes at xAI, particularly concerning significant changes to the chatbot’s behavior.

This latest controversy follows close on the heels of other embarrassing incidents involving Grok 3, which Musk has touted as a "maximally truth-seeking AI." Just the previous week, the chatbot listed President Trump, Musk, and Vice President JD Vance as the three individuals "doing the most harm to America." In a separate instance, it suggested that President Trump deserved the death penalty. xAI engineers quickly rectified both responses, but these instances highlight the ongoing challenges in aligning the chatbot’s output with Musk’s vision of an unbiased and truth-seeking AI.

The chatbot’s behavior appears to contradict Musk’s repeated assertions that Grok is an "edgy" and "anti-woke" alternative to other AI models, which he accuses of censorship. The irony of a self-proclaimed anti-censorship AI censoring its creator and a prominent political figure was not lost on observers. Many questioned how such a substantial modification could be implemented without proper oversight. Others pointed out the irony of Babuschkin himself being a former OpenAI employee, given the well-documented tension between Musk and OpenAI CEO Sam Altman.

The incident underscores the complexities and challenges inherent in developing AI models that are both powerful and unbiased. While Grok 3’s "Think" feature provides transparency into its reasoning process, it also exposes potential vulnerabilities and inconsistencies. The rapid succession of controversies surrounding the chatbot raises concerns about the robustness of its development process and the effectiveness of xAI’s quality control measures. It also highlights the tension between the desire for an "edgy" AI and the need for responsible development practices.

Grok 3 now appears to have been corrected, once again including mentions of Musk and President Trump when answering questions about the spread of misinformation. The chatbot is available as a standalone iPhone app in the United States. The ongoing development and refinement of Grok 3 will be closely watched, as its progress (or lack thereof) serves as a barometer for the challenges of building AI that is both powerful and aligned with ethical considerations. This latest incident is a stark reminder of the need for vigilance and rigorous testing in the development and deployment of AI technologies.
