March 2026 – Two parallel developments suggest that Europe is drawing clearer boundaries around both AI ownership and data protection. One comes from the Council and the other from a German court, and both signal a more cautious approach toward redefining legal fundamentals in the name of innovation or administrative simplification.
1. Digital omnibus leak: Member States cut the core of the proposed GDPR reform
A leaked Council compromise draft removes entirely the proposed redefinition of “personal data” under the GDPR, and that alone underscores how controversial the Digital Omnibus has become.
What happened?
In November 2025, the European Commission launched the Digital Omnibus, with one of its central ambitions being to recalibrate how pseudonymised data is treated under the GDPR. The Commission proposed introducing what is, in essence, a principle of relativity. In practical terms: if an entity cannot reasonably identify the person behind certain information, that information would not qualify as personal data for that entity, even if another actor in the processing chain could identify them.
The justification? Aligning the law with European Court of Justice jurisprudence confirming that pseudonymised data may constitute personal data, depending on the circumstances and the means reasonably likely to be used for identification.
The leak
In February 2026, a leaked compromise draft from the Council tells a different story.
The text removes the proposed amendment in full and deletes the mechanism that would have allowed the Commission to adopt implementing acts defining when pseudonymised data ceases to be personal data.
Instead, the compromise acknowledges the ongoing work of the European Data Protection Board on updating its guidance on pseudonymisation.
The watchdogs
The Council’s reaction comes only weeks after the joint opinion issued by the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS). While broadly welcoming the Omnibus’ simplification objectives, they were sharply critical of redefining personal data. Their key objections were:
- the definition should say what personal data is, not define it by what it is not, since a negative framing risks increasing legal uncertainty;
- the Commission should not be empowered to determine, by implementing acts, what no longer qualifies as personal data after pseudonymisation, as that directly affects the scope of EU data protection law;
- the EDPB is already finalising an updated pseudonymisation guide, and that process should be allowed to run its course.
Why it matters
The stakes are not merely technical.
The current GDPR definition is objective: if a person can be identified by any means reasonably likely to be used, the data is personal. The Commission’s proposal would have shifted toward a relative, entity-based approach, where qualification depends on the perspective and capabilities of each actor.
For those implementing the AI Act, that distinction is not academic. AI risk assessments, training data governance, and documentation duties all hinge on whether data falls within or outside the scope of GDPR.
By deleting the proposed paragraph, the Council signals reluctance to recalibrate core privacy principles. Yet the trilogue process between the European Commission, the Council and the European Parliament has only just begun.
In the meantime, businesses must design compliance frameworks without knowing whether the regulatory perimeter will remain broad or become narrower.
2. No copyright for AI-generated logos: Munich court draws the line
The case
A local court in Munich (Amtsgericht München) has refused copyright protection for three logos created using a generative AI tool.
The facts were straightforward: the plaintiff generated the logos using concise, generic prompts via a paid premium subscription and later discovered that the defendant was using those logos on his website.
The court held that the time and money spent (including subscription fees) were legally irrelevant, as was the sheer length of the prompts (in one case, 1,700 characters).
The legal test
The court acknowledged that copyright protection for AI-generated output is conceptually possible, but only when prompts are sufficiently detailed and specific that the final result reflects the prompter's personal intellectual creation, with the AI acting merely as a tool to realise a precise, pre-existing idea.
In this case, the prompts, partially quoted in the decision, did not meet that threshold.
The court also noted, with some scepticism, that the parties may have been acquainted and that the case could have been staged as a test case to obtain a legal opinion on AI copyrightability. That observation, in itself, reflects the current uncertainty: businesses are looking for definitive answers on how AI-generated content fits within existing IP frameworks.
Why it matters
This ruling aligns with a broader international trend, in which US Copyright Office guidance and the UK's ongoing legislative review converge on the same principle: prompt engineering alone does not create authorship. For businesses relying on AI-generated brand assets, the message is clear: generic prompts produce output that may not be legally protected, regardless of what was paid for the tool.
As institutions debate definitions and courts test boundaries, companies are already implementing the AI Act. Businesses are waiting for clarity on where the lines will ultimately be drawn, yet governance frameworks, vendor assessments, and risk classifications cannot be put on hold.
In the meantime, careful structuring and well-documented decisions remain the safest course.