The EU AI Act has generated thousands of articles about model safety, prohibited systems, and high-risk classification. Almost none of them address Article 50—the provision that applies most directly to the agencies, freelancers, and brands producing AI content every day. Enforcement begins August 2, 2026. If you deploy AI-generated images, copy, audio, or video in a professional capacity, this article is about you.
Disclaimer
This article is for informational purposes only and does not constitute legal advice. Numonic is not a law firm and does not provide legal counsel. Laws and regulations regarding AI-generated content vary by jurisdiction and are subject to change. You should conduct your own research and due diligence, and consult with qualified legal counsel in your jurisdiction before making compliance decisions.
Article 50 of the EU AI Act imposes transparency obligations on “deployers”—anyone who puts an AI system to use in a professional context. Unlike the high-risk system provisions that dominate most coverage, Article 50 is not conditional on risk classification. It applies to AI-generated text presented as human-written, synthetic imagery, deepfakes, and AI-manipulated audio or video. These are the everyday outputs of modern content creation workflows.
This article is a translation exercise. We have read the legal text, the recitals, and the guidance published by the European AI Office, and we have converted the obligations into concrete steps that content creators, agencies, and in-house marketing teams can actually execute before the enforcement clock starts.
Who Counts as a “Deployer”?
Article 3(4) of the EU AI Act defines a deployer as “any natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.” This definition is deliberately broad, and the “personal non-professional activity” carve-out is narrower than most creators assume.
If you run a creative agency and generate images with Midjourney for a client campaign, you are a deployer. If you are a freelance designer using Stable Diffusion for paid client work, you are a deployer. If you are a brand's in-house marketing team using Adobe Firefly to produce social content, you are a deployer. If you are a solo creator using AI tools commercially—selling prints, licensing images, producing content for sponsors—you are a deployer.
The exemption covers genuinely personal use: generating art purely for yourself, with no professional or commercial dimension. The moment money or professional obligation enters the picture, the deployer definition applies.
Importantly, being a deployer does not require that you built or trained the AI system. Deployers use systems built by “providers” (the model developers and tool companies). Article 50 sits at the deployer level precisely because regulators recognized that the people using AI in daily commercial workflows are not the model developers—they are the agencies and creators producing the content that consumers actually see.
What Qualifies as an “AI System”?
Before any obligation attaches, there must be an AI system in the picture. Article 3(1) defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The critical word is infers. An AI system does not follow fixed, human-defined rules to produce output. It draws its own conclusions from input—a prompt, an image, a data set—and generates something new. Midjourney, Stable Diffusion, DALL·E, Adobe Firefly, Runway, and Sora all clearly meet this definition: they accept a prompt and infer images, video, or audio that did not exist before.
Software that merely stores, organises, converts, or distributes files—without itself performing inference—is not an AI system under Article 3. A traditional file manager, a cloud storage service, or a basic digital asset management (DAM) tool that catalogs files by filename and folder does not qualify.
Using AI Output vs. Deploying an AI System
A question that the Act does not fully resolve—and that the Code of Practice on AI-Generated Content Transparency (still being finalised, expected June 2026) will need to clarify—is what happens downstream.
Consider a VFX studio that generates a set of AI backgrounds in Stable Diffusion and sells the rendered frames to a film production. The VFX studio is a deployer: it used the AI system. But is the film production also a deployer? It never touched the AI tool—it bought finished image files. Under the current text, the obligations in Article 50 attach to the entity that “uses an AI system under its authority,” which suggests the film production is not itself a deployer of the image generator. However, it inherits a practical obligation: if the AI provenance metadata has been stripped from the files it received, the transparency chain is broken and neither party can demonstrate compliance to a regulator.
The same logic applies to a video game company that commissions AI-generated concept art or environmental textures. The studio generating the assets is the deployer; the game publisher is the downstream recipient. But if the game publisher knowingly publishes AI-generated assets and strips or ignores the metadata, it may face scrutiny under separate consumer-protection and advertising-transparency rules, even if it falls outside the strict Article 50 deployer definition.
The practical takeaway: if you create AI content, you are the deployer and you carry the Article 50 obligations. If you receive AI content from a supplier, you are not a deployer of the AI system, but you have a commercial interest in ensuring the provenance metadata survives the handoff—because your own regulatory exposure (advertising standards, sector regulations, client contracts) may still require transparency about AI involvement.
The Artistic Work Exception
Article 50(4) provides a carve-out for content that forms part of an “evidently artistic, creative, satirical, fictional or analogous work or programme.” This is not an exemption from disclosure—it is a lighter-touch obligation. Where the exception applies, you must still disclose that AI-generated or manipulated content was used, but the disclosure must be done “in an appropriate manner that does not hamper the display or enjoyment of the work.”
In practice, this means a film, video game, or art installation that uses AI-generated assets does not need to plaster a watermark across every frame. A credit-roll disclosure, an IPTC metadata field, or a small on-screen icon shown for at least five seconds (as the draft Code of Practice proposes for deepfakes in artistic works) may suffice. The key word is “evidently”—the artistic nature must be obvious to the audience. A brand using AI-generated product photos on an e-commerce site would struggle to claim the artistic exception; a gallery exhibition of AI-generated art would not.
What does not qualify commercially: marketing visuals generated by AI for a brand campaign. Product renders created with AI for an e-commerce site. Social media content produced with AI tools for commercial distribution. Stock images generated by AI and licensed to buyers. In all of these cases, the commercial context removes the artistic shield. The phrase “evidently artistic” is intentionally subjective, and interpretation will vary across EU member states' National Competent Authorities. Until case law clarifies the boundaries, the safe assumption for any commercial creative team is that the exemption does not apply to your work.
Where Do AI-Native DAM Tools Fit?
Digital asset management platforms occupy an interesting position in the Article 50 framework. Whether a DAM counts as an AI system depends on what the platform actually does.
A DAM that only stores, organises, tags (using fixed rules), and distributes files is not an AI system. It does not infer outputs from inputs—it follows deterministic logic. Traditional DAM platforms like basic folder-based systems, simple tagging databases, or file-sharing services fall outside Article 3 entirely.
An AI-native DAM is different. Platforms like Numonic use machine-learning models to auto-generate titles, descriptions, and tags from asset content—that is inference, and those specific features qualify as AI system components. Under the Act's framework, Numonic is a provider of those AI features (it develops and offers them) and its users are deployers when they invoke those features on their assets.
However, Numonic's core storage, organisation, search, and export functions that operate on user-supplied metadata and deterministic rules are not AI systems. The Act applies to the AI components, not to the entire platform.
What makes AI-native DAMs particularly relevant to Article 50 is not their own AI features but their role in the metadata supply chain. When a deployer generates an image in Midjourney, Article 50(2) requires that the output carry machine-readable provenance markers. But those markers are only useful if every tool in the post-creation workflow preserves them. A DAM that strips EXIF, XMP, or C2PA data on import or export breaks the transparency chain—even though the DAM itself has no Article 50 obligation as a non-AI tool.
This is why Numonic's privacy-aware export system (preserving IPTC 2025.1 fields and C2PA Content Credentials through configurable presets) exists: not because Numonic is legally required to preserve that metadata under Article 50, but because its users—the deployers—need the metadata to survive so they can meet their own transparency obligations. A DAM that silently strips provenance data makes compliance harder for everyone downstream.
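To make the survival problem concrete, here is a minimal sketch of the kind of check a deployer can run against an export pipeline. It assumes the exiftool command-line utility is installed and on the PATH; the tag list is illustrative, not exhaustive, so substitute the fields your own workflow relies on.

```python
import json
import subprocess
from pathlib import Path

# Provenance-relevant markers a deployer would expect to survive an export.
# Illustrative only: C2PA manifests surface under JUMBF groups in exiftool's
# output, and the XMP fields depend on how the asset was marked at ingestion.
PROVENANCE_TAGS = ["DigitalSourceType", "CreatorTool", "JUMBF"]

def read_tag_keys(path: Path) -> set[str]:
    """Return the metadata keys exiftool can see for a file."""
    out = subprocess.run(
        ["exiftool", "-j", "-G1", str(path)],
        capture_output=True, text=True, check=True,
    )
    return set(json.loads(out.stdout)[0].keys())

def lost_on_export(original: Path, exported: Path) -> list[str]:
    """List provenance markers present in the original but missing after export."""
    before, after = read_tag_keys(original), read_tag_keys(exported)
    return [
        tag for tag in PROVENANCE_TAGS
        if any(tag in k for k in before) and not any(tag in k for k in after)
    ]

if __name__ == "__main__":
    lost = lost_on_export(Path("master.jpg"), Path("export.jpg"))
    if lost:
        print("Export pipeline strips provenance markers:", ", ".join(lost))
    else:
        print("All tracked provenance markers survived the export.")
```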
The Four Article 50 Obligations
Article 50 contains four distinct obligations, each targeting a different content scenario. Understanding which obligation applies to which type of output is the first step in building a compliant workflow.
Article 50(2): Machine-Readable Marking of AI-Generated Content
Article 50(2) requires that AI systems generating synthetic audio, image, video, or text content mark that content in a machine-readable format so that it is “detectable as artificially generated or manipulated.” For deployers, this means ensuring that assets leaving your workflow carry detectable AI provenance signals—not just as documentation in your records, but embedded in or cryptographically bound to the file itself.
The regulation does not mandate a single technical standard, but the European AI Office has signaled that two approaches are technically compliant: C2PA (Coalition for Content Provenance and Authenticity) manifests and machine-readable watermarks. C2PA is the more robust option because it creates a cryptographically verifiable chain of provenance that survives most export and distribution transformations. Watermarks offer a lightweight alternative but are more easily stripped.
The practical implication for deployers: your delivery workflow must produce files with embedded machine-readable provenance. If your current stack exports assets and strips metadata (which many standard compression tools and social upload APIs do by default), you will be in violation of Article 50(2) the moment you distribute that content in the EU after the enforcement date.
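The metadata layer of that marking can be scripted at ingestion. Below is a minimal sketch, again assuming exiftool is installed; it writes the IPTC DigitalSourceType value that the IPTC controlled vocabulary defines for fully AI-generated media. Recording the generator in CreatorTool is our own convention, not a regulatory requirement.

```python
import subprocess

# IPTC controlled-vocabulary value for content produced by a generative model.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def mark_as_ai_generated(path: str, tool_name: str) -> None:
    """Embed a machine-readable AI-provenance marker in the file's XMP."""
    subprocess.run(
        [
            "exiftool",
            f"-XMP-iptcExt:DigitalSourceType={TRAINED_ALGORITHMIC_MEDIA}",
            # Our convention (assumption): record which tool generated the asset
            # so humans and downstream deployers can see it.
            f"-XMP-xmp:CreatorTool={tool_name}",
            "-overwrite_original",
            path,
        ],
        check=True,
    )

mark_as_ai_generated("campaign_hero.png", "Midjourney v6")
```

Embedded XMP is only the first layer; the next subsection explains why a single layer is not enough.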
Multi-Layered Marking: What “Robust” Means
The EU AI Office has signaled through guidance documents that compliance requires more than a single metadata field. The word “robust” in the legislation implies marking that survives common transformations. In practice, this points to a multi-layered approach:
- Embedded metadata (IPTC 2025.1 or XMP fields) that travels inside the file. This is the minimum viable layer.
- Cryptographic provenance (C2PA Content Credentials) that binds a manifest of assertions to the file's content hash. This provides tamper-evidence.
- Invisible watermarking or fingerprinting that survives re-encoding, screenshots, and format conversion. This is the fallback when metadata is stripped.
No single layer is sufficient on its own. Metadata is easily stripped. C2PA manifests break when the file is re-encoded. Watermarks can be degraded by heavy post-processing. The three layers together create what the EU AI Office calls a “defense in depth” approach to Article 50(2) compliance.
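A pre-publication check can test all three layers at once. The sketch below is illustrative: it assumes exiftool and the open-source c2patool CLI are installed, treats a non-zero c2patool exit as "no manifest present" (an assumption worth verifying against your installed version), and stubs out watermark detection, which is vendor-specific.

```python
import subprocess

def has_metadata_layer(path: str) -> bool:
    """Layer 1: embedded XMP/IPTC marker."""
    out = subprocess.run(
        ["exiftool", "-XMP-iptcExt:DigitalSourceType", "-s3", path],
        capture_output=True, text=True,
    )
    return "trainedAlgorithmicMedia" in out.stdout

def has_c2pa_layer(path: str) -> bool:
    """Layer 2: cryptographic provenance. Assumes c2patool exits non-zero
    when a file carries no C2PA manifest."""
    result = subprocess.run(["c2patool", path], capture_output=True)
    return result.returncode == 0

def has_watermark_layer(path: str) -> bool:
    """Layer 3: invisible watermark. Stub: detection is vendor-specific,
    so wire in your provider's detector here."""
    return False

def report(path: str) -> None:
    layers = {
        "embedded metadata (IPTC/XMP)": has_metadata_layer(path),
        "C2PA manifest": has_c2pa_layer(path),
        "invisible watermark": has_watermark_layer(path),
    }
    for name, present in layers.items():
        print(f"{'present' if present else 'MISSING':7}  {name}")

report("deliverable.jpg")
```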
Article 50(3): Disclosure to Natural Persons That Content Is AI-Generated
Deployers of AI systems that generate or manipulate image, audio or video content constituting a deep fake shall disclose that the content has been artificially generated or manipulated in a way that is clear and distinguishable to the persons exposed to it.
— EU AI Act, Article 50(3)

Article 50(3) requires human-readable disclosure—a notice that the people consuming the content can actually read and understand. This is distinct from the machine-readable marking in 50(2). Both are required; they serve different audiences.
The disclosure must be “clear and distinguishable.” Regulators have indicated that this means the disclosure cannot be buried in terms of service, hidden in image file properties, or presented in font sizes that make it effectively invisible. For social media content, a disclosure in the caption qualifies. For display advertising, a label on the creative unit is the expected approach. For editorial content, an author note or in-article disclosure at the point of publication is required.
There is an important exception in Article 50(3) for legitimate artistic or satirical expression, provided that disclosure is still made “in an appropriate and unambiguous manner.” This exception does not eliminate the disclosure requirement—it simply acknowledges that the format of disclosure can flex for artistic contexts. Satire still requires disclosure; it simply does not require the same prominence as a commercial campaign.
Article 50(4): Public Disclosure for Deepfakes and Synthetic Media
Article 50(4) addresses synthetic media involving real people, real places, or real events that could be mistaken for authentic content. The Article defines a “deep fake” broadly as AI-generated or manipulated image, audio, or video content “that resembles existing persons, objects, places, or other entities or events and would falsely appear to a person to be authentic or truthful.”
For deployers, this creates the strictest disclosure obligation: content falling under this definition must carry a prominent public disclosure that it is not authentic. This is not an internal record-keeping requirement—it is a requirement that the disclosure be visible to anyone who encounters the content.
The practical scope of Article 50(4) is significant. It covers AI-generated spokesperson videos where the person speaks words they never said, synthetic imagery of real brand locations used without disclosure, AI-generated audio of real voices, and manipulated photographs of real events. For agencies producing brand content, user-generated content campaigns, and influencer marketing, this obligation requires a systematic content review step before any AI-generated material goes live.
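That review step can be reduced to a coarse triage rule. Here is a minimal sketch; the review flags are assumptions about what a content review form would capture, and the output is a triage aid, not a legal determination.

```python
from dataclasses import dataclass

@dataclass
class ReviewItem:
    ai_generated: bool             # produced or manipulated by an AI system
    depicts_real_subject: bool     # real person, place, brand location, or event
    could_pass_as_authentic: bool  # a viewer might take it for a genuine capture

def needs_prominent_label(item: ReviewItem) -> bool:
    """True when the asset matches the deepfake-style definition and
    therefore needs the prominent 'not authentic' public disclosure."""
    return (
        item.ai_generated
        and item.depicts_real_subject
        and item.could_pass_as_authentic
    )

# An AI spokesperson video of a real person saying words they never said:
spokesperson_clip = ReviewItem(True, True, True)
print(needs_prominent_label(spokesperson_clip))  # True: label required
```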
Article 50(5): Information to Downstream Deployers in the Value Chain
Article 50(5) is the least-discussed but operationally significant obligation: when an AI system is used to generate content that will be passed to another deployer in the production chain, the upstream deployer must provide sufficient information for the downstream deployer to meet their own Article 50 obligations.
This creates an information duty throughout the creative supply chain. If an agency generates AI assets and delivers them to a client who will then distribute them, the agency must provide the client with the metadata, provenance records, and disclosure classifications that the client needs to remain compliant. Passing an AI-generated asset to a downstream party with no provenance documentation is itself a violation of Article 50(5), even if the downstream party ultimately fails to disclose correctly.
For agencies, this means AI content deliverables must include a provenance package alongside the creative assets themselves. The days of delivering a folder of PNGs with no metadata are over for content produced in or distributed to EU audiences.
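No standard deliverable format exists yet, so the sketch below assumes a simple JSON schema of our own design. It captures the fields a downstream deployer would need: the generating tool, the generation date, the content classification, and the disclosure the client must carry forward.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AssetProvenance:
    filename: str
    ai_tool: str              # e.g. "Adobe Firefly"
    generated_on: str         # ISO date the asset was generated
    classification: str       # e.g. "ai-generated", "ai-assisted", "deepfake"
    required_disclosure: str  # human-readable label the client must display

def build_provenance_package(assets: list[AssetProvenance], agency: str) -> str:
    """Serialize a provenance package to ship alongside the creative assets."""
    return json.dumps(
        {
            "issued_by": agency,
            "issued_on": date.today().isoformat(),
            "basis": "EU AI Act, Article 50(5)",
            "assets": [asdict(a) for a in assets],
        },
        indent=2,
    )

print(build_provenance_package(
    [AssetProvenance("hero_01.png", "Adobe Firefly", "2026-02-10",
                     "ai-generated", "Created with AI")],
    agency="Example Agency GmbH",
))
```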
Penalties: What Non-Compliance Actually Costs
Article 99 of the EU AI Act establishes the penalty structure for violations. For Article 50 transparency obligation failures, the maximum fine is €15 million or 3% of global annual turnover, whichever is higher. For violations of prohibited practices (Article 5), the ceiling is €35 million or 7% of global annual turnover.
For context: the first GDPR fine issued by the Irish Data Protection Commission against Meta was €17 million. Fines escalated dramatically as enforcement matured—Meta received a €1.2 billion fine from the Irish DPC in 2023, and Amazon was fined €746 million by Luxembourg's regulator in 2021. The pattern with GDPR was consistent: early enforcement targeted organizations that were clearly non-compliant on obvious requirements. Article 50 failures—AI-generated content with no disclosure, no machine-readable provenance, no downstream documentation—represent exactly the kind of clear violation that generates early enforcement action.
For a mid-size agency with €10 million in annual revenue, a 3% fine is €300,000. For a large agency group with €500 million in global turnover, it is €15 million. These numbers are not theoretical. The European AI Office has enforcement powers, and EU member state supervisory authorities have been directed to begin enforcement activities from August 2, 2026.
Beyond direct fines, the contractual exposure is equally significant. Enterprise clients have begun inserting EU AI Act compliance warranties into Master Services Agreements. A demonstrable Article 50 violation discovered by an enterprise client triggers not just regulatory exposure but contract breach claims that can exceed the regulatory penalty.
Beyond Europe: California SB 942
If you work with American clients or distribute content in the United States, you face a parallel obligation. California's SB 942, which took effect January 1, 2026, requires providers of generative AI systems to embed “latent disclosures” in AI-generated content that are “extraordinarily difficult to remove.”
The two laws share the same deadline year but differ in important ways. SB 942 is provider-centric—it places the primary obligation on the company that builds the AI system, not the person who uses it. The EU AI Act creates obligations for both. SB 942 penalties are $5,000 per violation per day, enforced by the California Attorney General. The EU penalties are percentage-of-turnover, enforced by 27 different National Competent Authorities.
EU AI Act vs. California SB 942
| | EU AI Act (Article 50) | California SB 942 |
|---|---|---|
| Effective date | August 2, 2026 | January 1, 2026 |
| Obligation model | Shared (provider + deployer) | Provider-centric |
| Maximum penalty | €15M or 3% global turnover | $5,000/violation/day |
| Enforcement | 27 National Competent Authorities | California Attorney General |
| Artistic exemption | Yes (narrow, "evidently artistic") | No explicit exemption |
For creative teams, the practical effect is the same: your AI-generated content needs embedded metadata declaring its origin. If you build a compliance workflow that satisfies the EU AI Act, you will almost certainly satisfy SB 942 as well—the European requirements are the stricter of the two.
A Five-Step Compliance Checklist
Article 50 compliance is not a single project. It is an ongoing operational posture. But the foundation can be established in five structured steps, each of which produces a defensible artifact if a regulator or client asks for evidence of compliance.
The five steps: (1) audit your AI tool stack, (2) embed machine-readable provenance metadata, (3) create channel-specific disclosure templates, (4) train your team, and (5) document everything. Two of these steps warrant a closer look.
Step 1 in Depth: The Tool Stack Audit
The tool stack audit is the most revealing exercise because it forces organizations to confront the gap between the tools they sanction and the tools employees actually use. A typical creative team uses three to seven AI tools. Sanctioned enterprise tools (Adobe Firefly, Getty Generative AI) typically produce better provenance metadata. Consumer tools (personal Midjourney subscriptions, free Stable Diffusion instances) typically produce none.
During your audit, assign each tool to one of three categories (a machine-checkable inventory sketch follows the list):
- Green (C2PA support): Adobe Firefly, OpenAI DALL·E 3, and Microsoft Designer produce C2PA manifests natively. Output enters the workflow with a provenance record that can be preserved and extended.
- Yellow (partial metadata): Some tools embed generation parameters in PNG text chunks or EXIF fields that can be translated into IPTC format. Output requires metadata enrichment at ingestion.
- Red (no provenance): Midjourney, Stable Diffusion, and Flux produce no provenance metadata as of February 2026. Output from these tools requires manual documentation at ingestion and is your highest compliance risk. Consider restricting to approved enterprise versions or implementing ingestion workflows that capture context from the user at upload time.
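The audit result can live in code as a simple inventory so it stays machine-checkable. The tier assignments below mirror the categories above; the inventory itself is an assumption to replace with your own findings.

```python
from enum import Enum

class ProvenanceTier(Enum):
    GREEN = "native C2PA manifests"
    YELLOW = "partial metadata; enrich at ingestion"
    RED = "no provenance; document manually at ingestion"

# Assignments mirror the categories above; replace with your own audit.
TOOL_AUDIT = {
    "Adobe Firefly": ProvenanceTier.GREEN,
    "OpenAI DALL-E 3": ProvenanceTier.GREEN,
    "Microsoft Designer": ProvenanceTier.GREEN,
    "Midjourney": ProvenanceTier.RED,
    "Stable Diffusion": ProvenanceTier.RED,
    "Flux": ProvenanceTier.RED,
}

def audit_report(tools: dict[str, ProvenanceTier]) -> None:
    """Print the stack grouped by tier, highest-risk last."""
    for tier in ProvenanceTier:
        members = [name for name, t in tools.items() if t is tier]
        if members:
            print(f"{tier.name}: {', '.join(members)} ({tier.value})")

audit_report(TOOL_AUDIT)
```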
Step 3 in Depth: Disclosure Templates by Channel
Disclosure language needs to be channel-specific because “clear and distinguishable” means different things in different contexts. A caption disclosure appropriate for Instagram does not work in a display ad. Here are baseline templates, with a small generator sketch after the list:
- Social media caption: “[AI-generated image]” or “Created with AI assistance” at the beginning or end of the caption. The IAB recommends the disclosure appear before truncation on any platform that truncates captions.
- Display advertising: An “AI” label in the corner of the creative unit, using minimum 8pt font. More prominent placement is required for synthetic content depicting real people.
- Editorial content: An author note or end-of-article disclosure: “Images in this article were generated using [tool name].”
- Video content: A text overlay or lower-third disclosure visible for at least 3 seconds at the start of the content and, for longer pieces, at regular intervals.
- Client deliverables: A provenance cover sheet accompanying the asset package, documenting the AI tools used, the generation context, and the required disclosures for downstream distribution.
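Centralizing the templates keeps wording consistent across channels. The channel keys and strings below are assumptions to adapt to your own style guide and legal review.

```python
# Baseline disclosure strings per channel, mirroring the templates above.
DISCLOSURE_TEMPLATES = {
    "social": "[AI-generated image]",
    "display_ad": "AI",  # corner label on the creative unit, minimum 8pt
    "editorial": "Images in this article were generated using {tool}.",
    "video": "AI-generated content",  # overlay visible at least 3 seconds
}

def disclosure_for(channel: str, tool: str = "") -> str:
    """Return the disclosure string for a channel, or raise if unmapped."""
    template = DISCLOSURE_TEMPLATES.get(channel)
    if template is None:
        raise ValueError(f"No disclosure template for channel: {channel}")
    return template.format(tool=tool)

print(disclosure_for("editorial", tool="Adobe Firefly"))
# Images in this article were generated using Adobe Firefly.
```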
The Enforcement Timeline
The EU AI Act does not come into force all at once. Different provisions activate on different dates, and understanding the timeline matters for prioritizing compliance investment.
Already in Force (February 2025)
Article 5, which prohibits certain AI practices outright (manipulation of persons, social scoring, real-time biometric identification in public spaces), has been in force since February 2, 2025. Penalties for Article 5 violations—up to €35 million or 7% of global turnover—are already active.
If your organization uses AI for any purpose that could be characterized as manipulative or involves biometric processing, those prohibitions are not future obligations. They are current law.
August 2, 2026: Article 50 Goes Live
This is the critical date for content creators. Article 50 transparency obligations—machine-readable marking, human-readable disclosure, deepfake public labeling, and downstream information provision—all become enforceable on August 2, 2026.
Six months sounds like significant lead time. In practice, the infrastructure changes required for Article 50 compliance take longer to implement than most organizations expect. Embedding C2PA provenance in a multi-tool creative workflow requires integrations that cannot be purchased off the shelf and activated overnight. Disclosure templates require legal review. Team training requires scheduling time with people who are already fully allocated. Organizations that start in June 2026 will not be ready by August.
August 2, 2027: General Purpose AI Model Obligations
The obligations on providers of general-purpose AI models (the companies building the foundation models) reach full effect in August 2027, with code of practice requirements beginning earlier. For deployers, the significance of this date is indirect: as model providers implement their own transparency obligations, the metadata and provenance infrastructure embedded in the models themselves will improve, making deployer compliance easier. But deployers cannot wait for upstream compliance to materialize. Article 50 obligations sit at the deployer level regardless of what the model provider does.
Ongoing: Member State Enforcement Activity
Each EU member state designates a National Competent Authority (NCA) responsible for AI Act enforcement. As of early 2026, key designations include:
- Ireland has formally designated its NCA (the AI Advisory Board under the Department of Enterprise, Trade, and Employment)—significant given that Ireland is home to the European headquarters of many international agencies and platforms.
- France has assigned enforcement to CNIL, leveraging its existing data-protection infrastructure.
- Germany has distributed enforcement across its federal structure, consistent with its approach to GDPR.
- Other member states are still designating their authorities.
The pattern from GDPR enforcement suggests that early cases will target obvious, demonstrable violations rather than edge-case interpretations. AI-generated content with zero disclosure and zero machine-readable metadata is the clearest possible violation. It is also the current state of most agency content production.
What to Do This Week
Article 50 is not a compliance project for later. The infrastructure decisions you make in the next few months determine whether your organization crosses the August 2026 deadline with confidence or scrambles to retrofit compliance into workflows that were never designed for it.
The work is manageable if it starts now. The tool stack audit takes a day. Drafting disclosure templates takes a week. Selecting and implementing a metadata-aware DAM takes a month. Training your team takes a day. The only expensive option is waiting.
The complete AI Content Compliance guide covers the full regulatory landscape, including California SB 942, IPTC 2025.1 metadata standards, and C2PA technical implementation. The governance policy template gives you a ready-to-customize framework that covers all four Article 50 obligations with specific disclosure templates, audit log formats, and training outlines.
The agencies that will own the enterprise creative market after August 2026 are the ones that turned Article 50 from a compliance burden into a trust signal. That window is open now.
Key Takeaways
- Article 50 of the EU AI Act applies to deployers—anyone using AI professionally, including agencies, freelancers, and in-house teams. The personal non-professional exemption is narrow.
- The four obligations are: machine-readable provenance marking (50(2)), human-readable disclosure to audiences (50(3)), public labeling for deepfakes and synthetic media (50(4)), and provenance documentation for downstream deployers (50(5)). Compliant marking requires multiple layers: embedded metadata, cryptographic provenance, and invisible watermarking.
- Penalties reach €15 million or 3% of global annual turnover for Article 50 violations. Early GDPR enforcement history suggests clear violations are prioritized.
- Enforcement begins August 2, 2026. The infrastructure required for compliance—metadata embedding, disclosure workflows, team training—takes months to implement.
- California SB 942 creates parallel obligations for US distribution. Building for EU AI Act compliance satisfies both jurisdictions.
- The five-step checklist (audit, embed, template, train, document) produces a defensible compliance record. Start with the tool stack audit this week.
Build Your Article 50 Compliance Stack
Numonic automates C2PA provenance, IPTC 2025.1 field injection, and privacy-aware export so your team meets Article 50 requirements without adding manual steps to every creative workflow.
See How It Works