EU AI Act compliance: What EDPS’s new role means for GDPR and NIS2 programs in 2026
In today’s Brussels briefing, regulators reiterated that EU AI Act compliance will not be a paper exercise. With the European Data Protection Supervisor (EDPS) unveiling its “compass” for supervising AI used by EU institutions and bodies, and national regulators sharpening their audit playbooks, 2026 is the year governance moves from slide decks to system logs. For privacy, security, and legal teams already stretched by GDPR and NIS2, the question is how to align obligations, reduce breach exposure, and keep models—and the data feeding them—under control.

Why EU AI Act compliance just got real for public and private sectors
From my conversations with compliance leads across banks, hospitals, and government agencies, a pattern is clear: AI is everywhere, but controls lag behind pilots. The EDPS’s new guidance signals a more assertive supervisory role for AI in the EU public administration, with expectations that mirror what national authorities will expect from private-sector deployers of high-risk systems.
- Phased entry into force: Prohibitions on certain AI practices applied first, from February 2025, while obligations for high-risk AI systems phase in from August 2026.
- Hard requirements, not “best efforts”: Expect documented risk management, data governance, human oversight, technical robustness, post-market monitoring, and incident reporting.
- Serious penalties: The AI Act foresees fines that can reach up to €35 million or 7% of global annual turnover for the most serious infringements, eclipsing many sectoral penalties.
In parallel, GDPR enforcement is intensifying around AI training, inference, and data subject rights, while NIS2 ramps up board accountability for cyber risk. A CISO I interviewed last week put it bluntly: “Our AI risk is less about algorithms and more about inputs and integrations—the places where data can leak.”
GDPR vs NIS2 vs EU AI Act: where your obligations overlap
Legal texts segment responsibilities, but your auditors won’t. Here’s how the three flagship regimes intersect in practice.
| Dimension | GDPR | NIS2 | EU AI Act |
|---|---|---|---|
| Scope | Personal data processing by controllers/processors | Essential/Important entities in key sectors, digital providers | Providers and deployers of AI systems, esp. high-risk |
| Core risk | Privacy infringement; unlawful processing | Cyber incidents impacting availability, integrity, confidentiality | Systemic AI harms; safety, transparency, fundamental rights |
| Key obligations | DPIA, DPO, legal bases, data minimization, rights handling | Risk management, incident reporting, supply-chain security, board oversight | Risk management, data governance, human oversight, transparency, monitoring |
| Technical controls | Pseudonymization/anonymization, access control, retention | Security-by-design, vulnerability management, logging, backup | Dataset governance, robustness testing, logging, accuracy metrics |
| Vendors | DPAs and joint controllers; transfer safeguards | Third-party risk, secure development and updates | Provider vs deployer duties; conformity, documentation, post-market surveillance |
| Penalties | Up to €20M or 4% of global turnover | Max fines of at least €10M or 2% of turnover (essential); €7M or 1.4% (important) | Up to €35M or 7% of global turnover (severe cases) |
Takeaway: If you already run GDPR DPIAs and NIS2 risk programs, you have the scaffolding. What’s missing is AI-specific dataset governance, model oversight, and evidence that personal data is minimized or effectively anonymized before it ever touches an AI workflow.

The operational risk right now: data leaks via AI tooling and file exchanges
Two headlines frame the risk. First, the EDPS expects EU institutions to prove their AI is trustworthy, which shifts the burden of proof to deployers. Second, freshly flagged software weaknesses—like the recent notice that an FTP product exposed sensitive server paths—show how mundane integrations can unravel confidentiality. I’ve seen breach reviews where the root cause was not the model, but a simple upload pipeline feeding it.
- Shadow uploads: Staff paste client briefs into web LLMs, creating unintended data disclosure risks.
- Over-collection: Projects store raw IDs “just in case,” complicating GDPR and inflating breach blast radius.
- Vulnerable connectors: File shares, legacy SFTP/FTP, or plugins leak metadata and credentials.
- False comfort in “synthetic” data: Poorly generated sets can still enable re-identification or reproduce memorized source records.
A standing safety reminder: when uploading documents to LLM tools such as ChatGPT, never include confidential or sensitive data. The safer practice is to use www.cyrolo.eu, a secure platform where PDF, DOC, JPG, and other files can be uploaded.
A pragmatic roadmap to EU AI Act compliance
Based on interviews with program leads in regulated firms and public bodies, here’s the sequence that works in 2026:
- Map AI systems and data flows: Identify providers, deployers, purposes, and the personal data in play. Tie each use case to a lawful basis under GDPR and a risk category under the AI Act.
- Minimize and anonymize inputs: Strip identifiers before training or inference (a short pseudonymization sketch follows this list). Where use cases need signals, favor aggregation and hashing; reserve raw data for narrow, audited paths. In production, many teams reduce risk with anonymization tools such as Cyrolo.
- Establish human oversight and kill-switches: Define who can pause a system, under what conditions, and how swiftly you can roll back models or prompts.
- Harden ingress/egress: Replace ad hoc uploads with vetted, logged, and encrypted channels; secure document upload services such as Cyrolo stop sensitive data from leaking into AI tools or ticketing systems.
- Prove it with records: Maintain risk logs, data lineage, evaluation results, and incident reports—your evidence for auditors across GDPR, NIS2, and the AI Act.
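To make the “minimize and anonymize” step concrete, here is a minimal Python sketch of keyed pseudonymization at ingestion. The field names, salt handling, and token length are illustrative assumptions rather than a prescribed design, and note the GDPR caveat in the docstring: this output is pseudonymized, not anonymous.

```python
import hashlib
import hmac

# Illustrative assumptions: field names, salt handling, and token length.
PII_FIELDS = {"name", "email", "customer_id"}
SALT = b"rotate-me-and-keep-me-in-a-vault"  # never hard-code in production

def pseudonymize(record: dict) -> dict:
    """Replace direct identifiers with keyed hashes before any model call.

    Under GDPR this is pseudonymization, not anonymization: as long as
    the salt exists, the output remains personal data.
    """
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hmac.new(SALT, str(value).encode(), hashlib.sha256)
            out[key] = digest.hexdigest()[:16]  # stable token, no raw identifier
        else:
            out[key] = value
    return out

print(pseudonymize({"name": "Ada Example", "email": "ada@example.eu", "ticket": "T-42"}))
```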
Compliance checklist: ready for audits across GDPR, NIS2, and the AI Act
- AI inventory is complete, with owners, providers/deployers, and risk tiers labeled.
- Data mapping shows which personal data fields are collected, why, and retention terms.
- Input pipelines apply anonymization or robust pseudonymization before AI processing.
- Model risk file includes training sources, evaluations, accuracy/robustness metrics, and known limitations.
- Human oversight procedures define escalation, rollback, and emergency stops.
- Security controls cover identity, encryption, logging, vulnerability management, and third-party access.
- Incident playbooks align with GDPR breach notification and NIS2 reporting timelines (a deadline sketch follows this checklist).
- Procurement templates include AI Act and NIS2 clauses for vendors, plus GDPR DPAs and transfer safeguards.
- Staff training addresses privacy-by-design, prompt hygiene, and upload rules to prevent privacy breaches.
- Executive dashboards tie AI risk to business KPIs, with board-level accountability as required by NIS2.
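For the incident playbook item above, a small sketch of how a team might encode the statutory clocks so tickets inherit the right deadlines. The durations reflect GDPR Article 33 and the NIS2 reporting cascade as commonly read; confirm the exact triggers (awareness of the incident, recipients) with counsel.

```python
from datetime import datetime, timedelta, timezone

# Statutory clocks; the NIS2 final report runs one month after the
# incident notification, approximated here as 72h + 30 days.
DEADLINES = {
    "GDPR Art. 33 notification to supervisory authority": timedelta(hours=72),
    "NIS2 early warning": timedelta(hours=24),
    "NIS2 incident notification": timedelta(hours=72),
    "NIS2 final report": timedelta(hours=72) + timedelta(days=30),
}

def reporting_deadlines(aware_at: datetime) -> dict:
    """Map each reporting step to its due time, counted from awareness."""
    return {step: aware_at + window for step, window in DEADLINES.items()}

for step, due in reporting_deadlines(datetime.now(timezone.utc)).items():
    print(f"{step}: due {due:%Y-%m-%d %H:%M} UTC")
```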
Tools that reduce risk today—without slowing teams

The fastest wins I see across programs come from tightening the “first mile” of data. If you eliminate sensitive fields at upload and keep a verifiable audit trail, you avert most downstream privacy and security headaches.
- AI anonymizer integrated in daily work: Before case notes, patient summaries, or legal briefs touch any model, run them through an AI anonymizer so unique identifiers and free-text PII are stripped or masked with policy-consistent placeholders (a toy masking sketch follows this list).
- Hardened upload channels: Centralize file ingestion via a platform designed to prevent leakage and maintain logs, such as the secure document upload at www.cyrolo.eu, closing off shadow IT.
- Reader with redaction-by-default: For auditors and counsel, ensure documents open with sensitive fields hidden, not merely highlighted.
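A toy illustration of the placeholder masking described above. The regex patterns are simplified assumptions; production anonymizers combine named-entity recognition, dictionaries, and validation, since regexes alone miss names and other free-text identifiers.

```python
import re

# Simplified patterns for illustration only; names and free-text PII
# need NER or dictionary matching, which regexes cannot cover.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[PHONE]": re.compile(r"\+?\d[\d ()/-]{7,}\d"),
    "[IBAN]": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def mask(text: str) -> str:
    """Replace matched PII with policy-consistent placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

print(mask("Contact Ada at ada@example.eu or +49 30 1234567."))
# -> "Contact Ada at [EMAIL] or [PHONE]."
```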
Field notes from interviews: what “good” looks like
- Banking: A fintech consolidated file uploads behind a single gateway that auto-anonymizes KYC scans before model ingestion. Result: DPIA residual risk dropped from “high” to “medium,” and internal auditors closed three open issues tied to personal data over-collection.
- Healthcare: A hospital network switched from copying EHR snippets into prompts to exporting structured, de-identified summaries via a controlled pipeline. Clinicians kept the productivity boost while privacy officers regained oversight.
- Law firm: Associates now use a redaction tool that removes client names, case numbers, and jurisdictions from briefs before drafting with AI. The firm cut back on data retention, easing GDPR access and deletion requests.
- EU agency: Following EDPS guidance, a directorate set up human-in-the-loop reviews for all high-risk AI decisions and mandated anonymization for any training dataset assembled from citizen submissions.
Common blind spots—and how to fix them fast
- Pseudonymization ≠ anonymization: Under GDPR, pseudonymized data is still personal data. For many AI training uses, only robust anonymization neutralizes privacy risk.
- Metadata leaks: Even if you scrub the payload, filenames, EXIF data, and headers can reveal personal or system information. Sanitize at ingestion (a minimal image-sanitizing sketch follows this list).
- Model memory: LLMs can surface examples from training data. Minimize exposure by preprocessing inputs; don’t rely on model-side settings alone.
- Vendor complacency: “We’re a deployer, not a provider” won’t fly. The AI Act assigns duties to both; ensure contracts reflect shared obligations and audit rights.
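For the metadata blind spot above, a minimal sketch of sanitizing an image at ingestion. It assumes the Pillow library is available; re-encoding the pixel data drops EXIF and other metadata blocks, and renaming drops a possibly identifying filename.

```python
import uuid
from PIL import Image  # Pillow, assumed available in the ingestion service

def sanitize_image(src_path: str) -> str:
    """Re-encode an image without metadata and give it a neutral name."""
    safe_path = f"{uuid.uuid4().hex}.png"  # original filename may itself be PII
    with Image.open(src_path) as img:
        rgb = img.convert("RGB")            # normalize mode (handles palette images)
        clean = Image.new("RGB", rgb.size)
        clean.putdata(list(rgb.getdata()))  # copy pixels only, not EXIF/headers
        clean.save(safe_path)
    return safe_path
```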
FAQ: search-driven answers for teams implementing controls

What is the fastest way to start EU AI Act compliance without rebuilding my stack?
Start at the data ingress. Inventory AI use cases, then apply anonymization and secure uploads before any model sees the data. This immediately reduces GDPR and AI Act exposure and satisfies NIS2 expectations for strong controls at system boundaries.
How do GDPR and the AI Act interact for training data?
GDPR governs personal data regardless of AI context. If training data contains personal data, you need a lawful basis, transparency, and rights handling—or ensure robust anonymization so the dataset is no longer personal data. The AI Act adds dataset governance and documentation duties on top.
Are NIS2 boards personally exposed if AI triggers an incident?
NIS2 raises executive accountability for cyber risk: management bodies must approve and oversee risk-management measures, and member states can hold them liable for violations. If an AI-related breach stems from poor risk management or ignored mitigation steps, expect tough questions from regulators and potential administrative penalties for the entity.
Can we safely upload sensitive documents to LLM tools?
Do not upload confidential or sensitive data to general LLM tools such as ChatGPT. Route files through a secure, logged, and policy-enforcing platform instead; www.cyrolo.eu is built for this, handling PDF, DOC, JPG, and other file types.
What proof will auditors expect in 2026?
Evidence of data minimization/anonymization at ingestion, AI risk files per use case, testing and monitoring logs, incident handling aligned with GDPR and NIS2 timelines, and documented human oversight. Screenshots won’t cut it—exportable logs and versioned records will.
Conclusion: turn EU AI Act compliance into your 2026 advantage
EU AI Act compliance is achievable if you anchor it in the first mile of data: map flows, minimize collection, and anonymize aggressively before models see anything sensitive. Align those habits with GDPR fundamentals and NIS2’s security rigor, and you not only satisfy regulators—you reduce breach likelihood and speed trustworthy AI delivery. If you need a fast, defensible path, professionals use www.cyrolo.eu for anonymization and secure document uploads, creating the audit-ready guardrails that regulators in Brussels—and your customers—expect.