Italian Authorities Launch Investigation into Chinese AI Firm DeepSeek Over "Hallucination" Concerns
ROME – Italian regulators have initiated a formal investigation into Chinese artificial intelligence startup DeepSeek, alleging the company failed to adequately warn users about the potential for its AI system to generate false or misleading information, often referred to as "hallucinations." The probe, announced by the Italian Competition Authority (AGCM) on June 16, 2025, marks the latest regulatory challenge for DeepSeek in Italy, following a previous data privacy-related order in February 2025 that temporarily blocked access to its chatbot within the country.
The AGCM’s investigation centers on consumer protection, specifically DeepSeek’s alleged failure to give users clear, readily understandable warnings that its AI platform may produce inaccurate or fabricated information. Regulators stress the importance of transparency and user awareness when people interact with AI systems, particularly because such systems can generate content that is indistinguishable from fact. The probe reflects growing scrutiny of AI technologies and of developers’ responsibility to inform users about their products’ limitations and risks.
The consumer protection probe follows the February 2025 order from Italy’s data protection authority, which blocked access to DeepSeek’s chatbot in the country over concerns about its privacy policies and data processing practices. That order underscored the need for compliance with data protection rules, particularly for AI systems that process large volumes of user data. Taken together, the two actions signal intensifying scrutiny of AI by Italian authorities on both consumer protection and data privacy fronts.
DeepSeek’s alleged failure to address these concerns has prompted the new investigation, raising questions about the company’s transparency and commitment to responsible AI development. The AGCM has not disclosed the potential penalties DeepSeek could face or a timeline for the investigation. As of the latest reports, DeepSeek has not publicly responded to the allegations or to media requests for comment, leaving many questions about the case unanswered.
The DeepSeek probe feeds into a broader global debate over AI regulation and the need for clear guidelines on responsible development and deployment. As AI systems grow more capable and more embedded in daily life, concerns about misuse, misinformation, and manipulation are intensifying, leaving regulators to balance fostering innovation against protecting consumers.
The outcome of the AGCM’s investigation could have significant implications for the AI industry, potentially setting a precedent for regulatory action in Italy and other jurisdictions. Regulators elsewhere, many of them developing their own frameworks for overseeing AI, are likely to watch the case closely. For companies building and deploying these technologies, it reinforces the need to prioritize user safety, transparency, and compliance with data protection rules.