Brussels, November 24, 2025 – The European Commission on Monday officially launched a dedicated whistleblower tool under the Artificial Intelligence (AI) Act, providing a secure and confidential channel for individuals to report suspected violations of the regulation directly to the EU AI Office.
The new platform allows anyone — whether employees of AI companies, contractors, researchers, or concerned citizens — to submit reports anonymously in any of the 24 official EU languages and in virtually any file format. The system uses certified end-to-end encryption to protect the confidentiality of reports and the personal data of those who submit them. Whistleblowers can track the status of their submission and communicate with investigators without ever revealing their identity, significantly reducing the risk of retaliation.
According to the Commission, early reporting of potential breaches is essential for effective enforcement and will help the AI Office detect and address violations before they cause widespread harm to health, safety, fundamental rights, or democratic processes.
The EU AI Act, which entered into force on 1 August 2024 and is the world’s first comprehensive horizontal AI law, pursues a risk-based approach. It bans unacceptable practices (effective since 2 February 2025), imposes transparency and documentation obligations on general-purpose AI models (effective since 2 August 2025), and phases in requirements for high-risk systems, which become fully applicable between August 2026 and August 2027. The newly created EU AI Office, housed within DG CONNECT, serves as the central supervisory authority for general-purpose AI across the Union and coordinates with national market surveillance bodies.
The whistleblower platform complements existing protections under the 2019 EU Whistleblowing Directive, which will explicitly cover AI Act violations from August 2026 onward. Even before that date, reports touching on overlapping areas such as data protection, product safety, or anti-discrimination law may already benefit from those safeguards. National reporting channels remain available and, where appropriate, can forward cases to the EU level.
Civil society organizations have welcomed the launch, describing it as a critical safeguard in an industry where internal dissent has often been suppressed. Industry groups acknowledge the tool’s importance but continue to call for simplification of administrative requirements, especially for smaller companies and narrowly scoped AI applications. Recent proposals in the Commission’s Digital Omnibus Package aim to reduce certain compliance burdens, though privacy advocates warn that some adjustments could weaken core protections.
The launch comes at a time of rapidly increasing AI deployment across Europe, with public and private investment reaching €20 billion in 2025 alone. High-profile incidents involving biased recruitment algorithms, manipulative deepfakes, and unauthorized emotion-recognition systems have underscored the need for robust enforcement mechanisms.
Looking ahead, the AI Office plans to publish regular statistics on the use of the platform and to integrate it further with national reporting systems. Member States were required to designate their competent authorities by August 2025, and most have now done so. From 2026, whistleblowers will also benefit from strengthened legal protections, including access to free interim relief and psychological support in cases of retaliation.
For many observers, the new tool represents more than just another compliance instrument. In an ecosystem often characterized by opacity and concentrated power, it offers a direct line for insiders to alert regulators to dangerous practices before they escalate. As Europe seeks to establish itself as the global benchmark for trustworthy AI, the success of this whistleblower platform will be watched closely — both by those who see it as essential accountability infrastructure and by critics who fear excessive regulatory pressure on innovation.
With the next wave of AI Act obligations approaching rapidly — including mandatory fundamental-rights impact assessments for high-risk public-sector deployments and full conformity requirements for Annex I systems — the platform arrives at a decisive moment. Whether it becomes a rarely used formality or a cornerstone of proactive oversight will depend largely on public awareness, trust in its anonymity guarantees, and the EU AI Office’s ability to act decisively on credible reports.
