Commission launches whistleblower tool for EU Artificial Intelligence Act (AI Act)
- Antonis Hadjicostas
- Nov 24
- 2 min read

The European Commission today launched a new whistleblower tool designed to support enforcement of the AI Act. In brief:
What is the new whistleblower tool?
The tool offers a secure and confidential channel for individuals to report suspected breaches of the AI Act. Reports go directly to the European AI Office.
Reports can be submitted in any official EU language, in any format, which aims to maximize accessibility for whistleblowers across the EU.
The system uses certified encryption mechanisms to guarantee confidentiality and data protection. Reporters remain anonymous, yet they can still receive secure follow-up communications: updates on the progress of their report and the opportunity to answer additional questions from the EU AI Office, all without compromising their anonymity.
Why this matters
The AI Act aims to foster innovation and adoption of artificial intelligence across the EU, while at the same time safeguarding health, safety, fundamental rights, public trust, and the rule of law.
Effective enforcement of the AI Act is key to ensuring compliance. The whistleblower tool empowers insiders such as employees, collaborators, shareholders, and other stakeholders with knowledge of AI systems to report non-compliance early, helping the EU AI Office detect and address risks before they escalate.
With this tool, the Commission leverages transparency and accountability as core mechanisms for AI governance, which is especially important in a regulatory environment where full oversight of complex AI systems is technically and practically challenging.
Practical points and current limitations
Currently, the tool ensures confidentiality and anonymity, but statutory protection against retaliation (e.g., from employers) under the general EU whistleblower rules, namely the Whistleblower Directive, will only apply to AI Act-related reports from 2 August 2026 onwards. Until then, protections remain based on Commission assurances.
Despite those caveats, the tool represents a significant step forward. As noted by independent observers, early detection of AI-related risks (e.g. breaches of safety, privacy, non-discrimination) can meaningfully contribute to “safe, transparent and trustworthy” AI deployment across the EU.
