DeepSeek is investigated in Italy: the new regulatory crossroads for chatbots


ree

On June 16, 2025, Italy’s antitrust authority (AGCM) formally opened an investigation into DeepSeek, a prominent Chinese AI chatbot developer, for allegedly failing to warn users clearly enough about the risk of “hallucinations”—responses that are inaccurate or fabricated. The move comes amid rising regulatory scrutiny of generative AI models across Europe and signals a tightening approach to consumer protection, transparency, and accountability for emerging AI platforms.


Background: DeepSeek Under the Microscope

DeepSeek has emerged as one of the most notable open-source AI chatbot challengers in 2024–2025, gaining popularity for its advanced natural language processing and multilingual capabilities. However, the company’s expansion into Europe has been met with regulatory headwinds.

  • January 2025: Italy’s privacy watchdog (Garante) ordered DeepSeek’s chatbot blocked nationally over non-compliance with data protection and transparency standards, particularly regarding how user data is processed and the clarity of its privacy policies.

  • Other EU actions: France, Ireland, the Netherlands, Belgium, and Luxembourg have also taken actions or expressed concerns about DeepSeek’s privacy practices and potential risks to consumers.


Details of the AGCM Investigation

The current probe, announced in June 2025, centers on whether DeepSeek:

  • Provides adequate, understandable warnings to users about the possibility of hallucinations—errors where the chatbot produces plausible-sounding but incorrect or misleading information.

  • Fulfills obligations under Italian and EU consumer protection law, which increasingly treats misleading AI-generated content as equivalent to deceptive commercial practices.

  • Implements transparency controls: Regulators are examining whether DeepSeek has made visible changes to how it communicates AI risks since the Garante’s earlier block order.

The AGCM is specifically concerned that without prominent, easy-to-understand disclaimers, users might treat AI-generated answers as authoritative, potentially leading to harmful decisions in sensitive contexts (such as health, finance, or legal advice).


The Broader European Regulatory Context

Italy’s action reflects a larger shift in how Europe is handling the risks posed by generative AI:

  • GDPR and Beyond: European data protection authorities are expanding the scope of GDPR enforcement to include not only data processing but also the content quality and safety of AI outputs.

  • The EU AI Act: In force since August 2024, with most obligations phasing in through 2025–2026, the AI Act imposes transparency duties on AI systems that interact with users—including chatbots—and stricter requirements on “high-risk” systems, such as providing clear information about their functioning, limitations, and risk of inaccuracies.

  • Coordinated oversight: The European Data Protection Board (EDPB) is facilitating a coordinated approach among member states for AI oversight, particularly for cross-border services like DeepSeek.


Why “Hallucinations” Are a Regulatory Flashpoint

“Hallucinations” are a well-documented limitation of large language models, including DeepSeek, ChatGPT, Claude, and Gemini. These systems can generate convincing but factually false information without any intent or awareness.

  • Risk to consumers: When users rely on chatbots for information or advice, even a small chance of hallucination can lead to negative consequences, especially in high-stakes situations.

  • Expectation management: Regulators increasingly expect AI companies to proactively inform users about these risks and make it clear that outputs should be verified, not blindly trusted.


Global Implications and Industry Impact

DeepSeek is not alone in facing regulatory scrutiny. In 2025:

  • South Korea suspended new downloads of DeepSeek over privacy concerns, including the handling and transfer of user data.

  • Taiwan and other governments have blocked the app on public devices.

  • The United States and other countries are observing these regulatory trends, with possible future investigations.

These developments are pushing all major chatbot providers to revisit how they present risk disclosures and user education, especially when operating in tightly regulated markets like the EU.


What’s Next?

  • Potential outcomes in Italy: AGCM may require DeepSeek to display more prominent and intelligible risk warnings, impose fines, or limit the chatbot’s features until compliance is achieved.

  • Setting a precedent: The investigation is likely to influence how other AI providers address transparency and risk communication, especially as the EU AI Act rolls out.

  • Industry adaptation: Leading chatbots—such as ChatGPT, Gemini, Claude, and Meta AI—are monitoring the case, as similar standards may soon apply to them.


_______



DATA STUDIOS
