Pick up almost any AI image tool and the quality is breathtaking—photorealistic renders in seconds, concept art on demand, unlimited variations at near-zero marginal cost. But pick up the EU AI Act enforcement calendar or a client's new AI addendum and a different picture emerges. The tools that win on creative output vary wildly on compliance readiness. Some embed metadata automatically. Most don’t. Only one fully supports C2PA Content Credentials out of the box. The rest leave the compliance work to you.
Disclaimer
This article is for informational purposes only and does not constitute legal advice. Numonic is not a law firm and does not provide legal counsel. Laws and regulations regarding AI-generated content vary by jurisdiction and are subject to change. You should conduct your own research and due diligence, and consult with qualified legal counsel in your jurisdiction before making compliance decisions.
This article is a structured audit of the five most-used AI image generation tools—Midjourney, DALL‑E / ChatGPT, Stable Diffusion (local via ComfyUI), Adobe Firefly, and Google Imagen—evaluated against the five criteria regulators and enterprise procurement teams now care about most. We rate each tool, explain the gaps, and tell you what to do about them.
If you want the short version before the audit: only one tool scores a clean five out of five. The rest require compensating controls at the DAM layer—and most agencies don’t have those controls in place yet. By August 2, 2026, they will need to.
What Compliance-Ready Looks Like
Before auditing tools, we need a consistent scoring framework. Compliance readiness for an AI image generation tool means it satisfies five criteria that directly map to what regulators and enterprise clients will demand under the EU AI Act Article 50, California SB 942, and the emerging wave of AI transparency clauses in Master Services Agreements.
Criterion 1: Embeds IPTC 2025.1 Metadata Automatically
The International Press Telecommunications Council released four new XMP fields in November 2025 specifically for AI-generated content: Iptc4xmpExt:DigitalSourceType (to flag AI-generated images), plus:ModelReleaseStatus, Iptc4xmpExt:ArtworkOrObjectDetail, and fields for training data transparency. A compliant tool populates at least the DigitalSourceType field—set to trainedAlgorithmicMedia—automatically on every output without user intervention.
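To make "automatic" concrete at the file level, here is a minimal Python sketch that builds an XMP packet declaring an image as fully AI-generated. The `Iptc4xmpExt` namespace URI and the `trainedAlgorithmicMedia` vocabulary term are the real IPTC identifiers; everything else is illustrative. In production you would embed this into the image file itself with a metadata tool such as ExifTool rather than hand-assemble a sidecar.

```python
# Minimal XMP sketch for flagging an AI-generated image.
# The namespace and vocabulary URI below are the published IPTC values;
# the sidecar approach itself is a simplification for illustration.

DIGITAL_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def build_xmp_sidecar(source_type: str = DIGITAL_SOURCE_TYPE) -> str:
    """Return an XMP packet declaring the image as AI-generated."""
    return f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:Iptc4xmpExt="http://iptc.org/std/Iptc4xmpExt/2008-02-29/"
    Iptc4xmpExt:DigitalSourceType="{source_type}"/>
 </rdf:RDF>
</x:xmpmeta>"""

if __name__ == "__main__":
    xmp = build_xmp_sidecar()
    print("trainedAlgorithmicMedia" in xmp)  # confirm the flag is present
```

A compliant tool does the equivalent of this on every output with no user action; the audit below scores each tool against that bar.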
Criterion 2: Supports C2PA Content Credentials
The Coalition for Content Provenance and Authenticity specification creates a cryptographically signed manifest that travels with the image file. The manifest records who created the content, with which tool and model version, at what time, and what prior content it was derived from. A compliant tool creates a valid C2PA manifest on export—not as an optional setting—and that manifest survives file transfers intact.
Criterion 3: Preserves Metadata Through Export and Download
Many tools produce metadata but strip it during the download or export step, or lose it when an image passes through a lossy compression pipeline. A compliant tool ensures that whatever metadata it embeds at generation time survives the journey from the tool to the user’s hard drive, and that standard export formats (PNG, JPEG, WEBP) preserve the embedded data without special configuration.
Criterion 4: Provides Generation Parameter Logging
Regulators and insurance underwriters increasingly want to know not just that an image is AI-generated, but how. Which model? Which version? Which seed? Which inference parameters? A compliant tool logs these details in a retrievable format—either embedded in the output file or accessible via an API or history interface—so the generation context can be reconstructed months or years after the fact.
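As a sketch of the minimum viable record, assuming no particular vendor's schema, the criterion boils down to capturing something like this for every output, with a content hash that ties the record to the exact file:

```python
import json
import hashlib
import datetime

def make_generation_record(model, model_version, seed, params, image_bytes):
    """Build a retrievable generation-context record for one output image.

    The field set mirrors Criterion 4: model, version, seed, inference
    parameters, and a timestamp. The SHA-256 hash links the record to the
    exact file even if the filename changes downstream."""
    return {
        "model": model,
        "model_version": model_version,
        "seed": seed,
        "inference_params": params,  # e.g. steps, guidance scale
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generated_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = make_generation_record(
    model="example-model", model_version="1.0",
    seed=424242, params={"steps": 30, "cfg": 7.0},
    image_bytes=b"\x89PNG...",  # stand-in for real file bytes
)
print(json.dumps(record, indent=2))
```

Whether this record lives in the file, in the vendor's API history, or in your own database matters less than whether it exists and can be retrieved when asked for.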
Criterion 5: Offers Audit Trail or API Access to Generation History
The fifth criterion is persistence. Even tools that log generation parameters at creation time often purge that history after 30 or 90 days. A compliant tool either provides a persistent audit trail in the user’s account, or exposes an API that allows a downstream system (such as a DAM) to retrieve and store generation records indefinitely.
Tool-by-Tool Audit
With the scoring criteria defined, here is how the five most widely used AI image generation tools perform. Each tool is scored on a 0–1 scale per criterion, for a maximum score of 5.
AI Image Tool Compliance Matrix
| Tool | IPTC 2025.1 | C2PA | Param Logging | Export Preserves | Audit Trail | Score |
|---|---|---|---|---|---|---|
| Midjourney | No | No | Discord only | No | No | 1/5 |
| DALL‑E / ChatGPT | Partial | Yes (Feb 2024) | API only | Partial | API | 3/5 |
| Stable Diffusion | No | No | PNG tEXt chunks | Yes (local) | Manual | 2/5 |
| Adobe Firefly | Yes | Full | Yes | Yes | Yes | 5/5 |
| Google Imagen | No | Planned | SynthID only | No | No | 2/5 |
Midjourney: Score 1/5
Midjourney is the creative industry’s most widely adopted AI image tool, and its compliance posture is the most concerning of the five tools audited. Images exported from Midjourney contain no embedded IPTC metadata. There is no C2PA manifest. The Discord interface stores job IDs and prompts, but that information is accessible only through the Discord thread itself—not via an API, not embedded in the file, and subject to Discord’s message retention policies.
When a Midjourney user downloads an image to their desktop, they receive a PNG or JPEG with zero provenance metadata. No tool, model version, prompt, seed, or generation date is embedded. For an agency delivering that image to an enterprise client after August 2026, the image carries no AI disclosure of any kind. California SB 942 requires that AI-generated content include a disclosure in the file’s metadata or in the content itself; a Midjourney PNG downloaded to disk satisfies neither option.
Midjourney has announced plans for a web interface with improved organization features, but as of this writing, no public roadmap exists for IPTC metadata injection or C2PA support. Agencies using Midjourney at scale must implement compensating controls at the DAM layer: every Midjourney asset that enters the workflow must be manually tagged with AI provenance metadata before it can be considered compliance-ready.
DALL‑E / ChatGPT: Score 3/5
OpenAI made a meaningful compliance step in February 2024 when it began adding C2PA Content Credentials to images generated via DALL‑E 3 in ChatGPT. The C2PA manifest records the generation date, the tool name (DALL‑E 3), and the fact that the image was AI-generated. This is a genuine advance over Midjourney’s zero-metadata baseline.
The limitations are real, however. First, the C2PA manifest is only added to images generated through ChatGPT and the DALL‑E API when using specific export paths—it is not universally applied to all output formats. Second, IPTC 2025.1 field coverage is incomplete: the DigitalSourceType field may be populated, but the full suite of IPTC AI disclosure fields is not automatically embedded. Third, the audit trail is API-mediated: generation history is available through the OpenAI API but is not embedded in the file itself, meaning that the connection between an image file and its generation record depends on the downstream system maintaining that link.
For agencies using DALL‑E via the API, this is workable: a well-designed integration can capture generation metadata at ingestion time and embed it alongside the C2PA manifest. For agencies whose designers use ChatGPT directly without a downstream DAM capture step, the compliance picture is less clear.
Stable Diffusion (Local / ComfyUI): Score 2/5
Local Stable Diffusion, particularly when running through ComfyUI, has the best raw generation parameter logging of any tool in this audit. ComfyUI embeds the full workflow JSON into PNG tEXt chunks at generation time—every node, every model reference, every seed, every parameter. If you open a ComfyUI-generated PNG in a hex editor or metadata viewer, the entire workflow is there.
The compliance problem is that this data is not in regulatory formats. A regulator reading the EU AI Act cannot extract compliance meaning from a 40 KB JSON blob in a PNG tEXt chunk. There is no IPTC 2025.1 DigitalSourceType field. There is no C2PA manifest. The workflow JSON is rich but proprietary, and it is invisible to any system that doesn’t know to look for it.
Furthermore, the ComfyUI workflow data survives only as long as the image is handled by systems that preserve PNG metadata. Run the file through a social media upload, a Slack attachment, or most image optimization pipelines, and the tEXt chunks are gone. The extensibility of the Stable Diffusion ecosystem means that IPTC and C2PA support can be added via plugins—nodes exist that write IPTC fields on output—but this requires deliberate configuration and is not the default behavior.
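The tEXt-chunk mechanics are simple enough to demonstrate with the Python standard library alone. The sketch below fabricates a minimal PNG-like byte stream (signature plus chunks, no actual image data) carrying a workflow chunk, then walks the chunks and reads it back; real ComfyUI outputs store the graph under keywords such as `workflow` and `prompt`, and a provenance-aware ingest step would parse them the same way.

```python
import struct
import json
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def read_text_chunks(png_bytes):
    """Yield (keyword, text) pairs from a PNG byte stream's tEXt chunks.

    Each PNG chunk is: 4-byte length, 4-byte type, data, 4-byte CRC."""
    assert png_bytes.startswith(PNG_SIG), "not a PNG"
    pos = len(PNG_SIG)
    while pos < len(png_bytes):
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            keyword, _, text = data.partition(b"\x00")
            yield keyword.decode("latin-1"), text.decode("latin-1")
        pos += 12 + length  # advance past length + type + data + CRC
        if ctype == b"IEND":
            break

def make_text_chunk(keyword, text):
    """Build a tEXt chunk (used here only to fabricate the test stream)."""
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    crc = struct.pack(">I", zlib.crc32(b"tEXt" + data))
    return struct.pack(">I", len(data)) + b"tEXt" + data + crc

# Fabricate a container with an embedded workflow, then read it back.
workflow = json.dumps({"nodes": [{"type": "KSampler", "seed": 7}]})
fake_png = (PNG_SIG + make_text_chunk("workflow", workflow)
            + struct.pack(">I", 0) + b"IEND"
            + struct.pack(">I", zlib.crc32(b"IEND")))
found = dict(read_text_chunks(fake_png))
print(json.loads(found["workflow"])["nodes"][0]["seed"])  # → 7
```

Note what this demonstrates: the seed is recoverable, but only by a system that knows to parse this proprietary structure, which is exactly the compliance gap described above.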
For agencies using ComfyUI, the path to compliance runs through custom workflow nodes and a provenance-aware DAM that can translate ComfyUI tEXt chunk data into IPTC and C2PA formats at ingest. See our ComfyUI compliance guide for a detailed implementation roadmap.
Adobe Firefly: Score 5/5
Adobe Firefly is the only tool in this audit that satisfies all five compliance criteria out of the box. As a founding member of the C2PA consortium, Adobe ships Content Credentials on every Firefly-generated image by default. The C2PA manifest is cryptographically signed, records the Firefly model version, the generation date, and whether the image was generated or edited by AI, and it survives round-trips through Creative Cloud applications because Adobe has built C2PA preservation into its export pipeline.
Firefly also populates the relevant IPTC 2025.1 fields automatically, and generation history is retained in Creative Cloud with persistent record access. For agencies already running Creative Cloud workflows, Firefly is the compliance-clean path to AI image generation. The tradeoff is that Firefly’s creative output quality—particularly for photorealistic imagery and complex prompt interpretation—lags behind Midjourney and DALL‑E in many categories, though the gap has narrowed significantly with Firefly Image 3.
Google Imagen / Gemini: Score 2/5
Google’s approach to AI content provenance is centered on SynthID—an imperceptible watermark embedded in pixel values rather than file metadata. SynthID is technically impressive: it survives compression, cropping, and many forms of image editing. But it is not a compliance mechanism under current regulatory frameworks. The EU AI Act and SB 942 require machine-readable metadata fields that can be read by standard metadata tools, not watermarks that require Google’s proprietary detection algorithm to verify.
Google has announced C2PA support as a planned feature for Imagen and Gemini-generated images, but as of this writing it is not available across the product suite. IPTC 2025.1 fields are not populated. Generation parameter logging is available via the Vertex AI and Gemini API but is not embedded in output files. For enterprise users running Imagen via API, the same approach as DALL‑E applies: capture generation metadata at API response time and embed it via a downstream enrichment step.
The DAM Layer Gap
The DAM is where metadata goes to die. Tools that embed provenance at generation hand the file to a DAM that strips it on ingest, an approval workflow that re-exports without re-signing, and a delivery pipeline that compresses metadata away. By the time the asset reaches the client, the compliance record is gone.
Even for the tools with the strongest compliance posture, there is a structural vulnerability that no generation tool can solve on its own: the metadata lifecycle after generation. An image with a valid C2PA manifest that flows through a non-compliant DAM, a message thread, a cloud storage folder, or a social media scheduler is an image whose compliance record can be silently destroyed at any of those touchpoints.
This is not a theoretical risk. Standard image optimization pipelines in web publishing strip all metadata by default to reduce file size. Adobe Creative Cloud’s “Save for Web” export preset, historically configured for privacy—to remove EXIF GPS data from photos before sharing online—also removes IPTC fields and C2PA manifests. Slack and most team messaging apps recompress images on upload. Many email clients strip metadata from images when attaching them. WhatsApp Business, widely used for client approvals, compresses images aggressively on send.
What this means in practice: an agency using Adobe Firefly, the highest-scoring tool in this audit, can still deliver a compliance-stripped asset to a client if the asset passes through a standard Creative Cloud export or a messaging-based approval workflow. The generation tool did everything right. The delivery pipeline undid it.
The tools that score 1–3 in this audit require the DAM layer to do even more: not just preserve existing metadata but actively create it. A provenance-aware DAM must extract whatever generation-context data the source tool provides—Discord job IDs, API response payloads, ComfyUI tEXt chunks, OpenAI generation records—and translate that raw data into IPTC 2025.1 fields and a valid C2PA manifest during the ingest step. This is the only scalable path to compliance for agencies running Midjourney, DALL‑E without a DAM integration, or Stable Diffusion without custom nodes.
For a full treatment of the metadata lifecycle problem, see our article on IPTC 2025.1 and C2PA metadata standards for AI image provenance.
Closing the Gap: Three Strategies
The compliance gap in your AI tool stack is real, but it is not unsolvable. The agencies that will navigate the post-regulation landscape successfully are not necessarily the ones using the most compliant tools—they are the ones that have built compensating controls capable of handling the tools their creative teams actually want to use.
Strategy 1: Use a Provenance-Aware DAM
The most scalable solution is to stop relying on generation tools for compliance metadata and centralize that responsibility in your DAM. A provenance-aware DAM captures metadata at the moment of asset ingest—before any downstream processing can strip it—and performs three operations automatically:
- Extraction: Pull whatever generation-context data the source tool provides. For Midjourney assets, this might be the Discord job ID and prompt extracted from the filename or accompanying message. For ComfyUI assets, it is the tEXt chunk workflow JSON. For DALL‑E assets ingested via API, it is the generation response payload.
- Translation: Convert that raw data into standard IPTC 2025.1 fields and a C2PA manifest. The DigitalSourceType field gets set to trainedAlgorithmicMedia. The tool name, model version, and generation date populate the C2PA ingredient manifest. The result is a compliance-ready asset regardless of the source tool’s native metadata output.
- Preservation: All export presets are configured to preserve IPTC fields and re-sign C2PA manifests after any metadata injection. The DAM becomes the compliance enforcement point, not the generation tool.
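The extract-and-translate step can be sketched as a single normalization function. The payload shapes and field names below are illustrative assumptions, not any vendor’s actual schema; the point is that every source tool’s raw context ends up in one consistent compliance record with the DigitalSourceType flag set.

```python
# Sketch of the extract→translate step at DAM ingest. Payload shapes
# (Discord-derived fields, workflow JSON, API response keys) are
# illustrative assumptions, not real vendor schemas.

TRAINED = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

def translate(source_tool: str, raw: dict) -> dict:
    """Normalize one tool's raw generation context into a compliance record."""
    base = {"Iptc4xmpExt:DigitalSourceType": TRAINED, "source_tool": source_tool}
    if source_tool == "midjourney":   # filename / Discord-message fields
        base.update(prompt=raw.get("prompt"), job_id=raw.get("job_id"))
    elif source_tool == "comfyui":    # tEXt-chunk workflow JSON
        nodes = raw.get("workflow", {}).get("nodes", [])
        seeds = [n["seed"] for n in nodes if "seed" in n]
        base.update(seed=seeds[0] if seeds else None)
    elif source_tool == "dalle":      # API response payload
        base.update(model=raw.get("model"), created=raw.get("created"))
    return base

rec = translate("comfyui", {"workflow": {"nodes": [{"type": "KSampler", "seed": 99}]}})
print(rec["seed"])  # → 99
```

In a real DAM this record would then be written into the asset’s IPTC fields and C2PA manifest during the preservation step, so every export carries it forward.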
Strategy 2: Implement Metadata Enrichment Workflows
For agencies not yet ready to invest in a provenance-aware DAM, a metadata enrichment workflow is the interim solution. This means building a step into the creative approval process where every AI-generated asset is explicitly tagged before it enters the delivery pipeline. The tagging step should use a standardized intake form—tool, model, date, prompt summary, approver name—and a tool that writes those values into IPTC fields in the asset file.
This approach is labor-intensive at scale: at 100 assets per week, manual enrichment is manageable; at 1,000, it is a full-time job; at 10,000, it breaks down entirely. But for agencies in the early stages of building compliance infrastructure, manual enrichment with a clear audit trail is defensible in a regulatory review, even if it is not sustainable long-term.
Strategy 3: Build Audit Trails Outside the Tools
The third strategy addresses the audit trail gap directly: if your generation tool does not provide a persistent, accessible generation history, build one externally. This means logging every AI generation event to a database that you control—capturing the tool, model, prompt, seed, output file hash, and timestamp at the moment of generation—and linking each log entry to the corresponding asset in your DAM via a unique identifier.
For Midjourney users, this requires building a Discord bot or webhook integration that captures job IDs and prompts from the generation channel as they are created. For DALL‑E users, it requires logging API response payloads before the image reaches the designer’s tool. For ComfyUI users, it requires exporting the workflow JSON alongside every output batch and archiving it alongside the images.
The goal is that when an auditor asks “show me the generation record for this asset,” you can produce it in seconds, not days.
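A minimal version of such an external trail can be sketched with SQLite, keyed by the output file’s SHA-256 so the auditor’s question starts from the asset itself. The table layout is an illustrative assumption.

```python
import sqlite3
import hashlib
import time

# Minimal external audit trail: one row per generation event, keyed by
# the output file's content hash. Schema is illustrative, not prescriptive.

def open_trail(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS generations (
        sha256 TEXT PRIMARY KEY, tool TEXT, model TEXT,
        prompt TEXT, seed INTEGER, created REAL)""")
    return db

def log_generation(db, image_bytes, tool, model, prompt, seed):
    """Record one generation event at the moment it happens."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    db.execute("INSERT OR REPLACE INTO generations VALUES (?,?,?,?,?,?)",
               (digest, tool, model, prompt, seed, time.time()))
    db.commit()
    return digest

def lookup(db, image_bytes):
    """Answer 'show me the generation record for this asset' from the file alone."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return db.execute(
        "SELECT tool, model, prompt, seed FROM generations WHERE sha256=?",
        (digest,)).fetchone()

db = open_trail()
log_generation(db, b"fake-image-bytes", "midjourney", "v6", "city at dusk", 1234)
print(lookup(db, b"fake-image-bytes"))  # → ('midjourney', 'v6', 'city at dusk', 1234)
```

Hashing at log time is what makes the trail durable: even if the file is renamed or moved between systems, the unmodified bytes still resolve to their record.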
A Buyer’s Checklist for Compliance-Ready Tools
If you are evaluating new AI image generation tools for your agency stack, or renegotiating your existing tool contracts, these are the five questions to ask every vendor before you commit.
- Does your tool automatically embed IPTC 2025.1 metadata on every output, without user intervention? “We plan to” and “it is available as a setting” are not acceptable answers if the EU AI Act enforcement date is on your horizon.
- Do you support C2PA Content Credentials, and does the manifest survive your standard download and export workflows? Ask specifically whether the manifest survives a PNG export, a JPEG export, and a Creative Cloud round-trip. Each format has different failure modes.
- What generation parameters do you log, and how long are they retained? The answer should include model name, model version, seed, inference parameters, and generation date. Retention should be indefinite or, at minimum, long enough to cover your client contract terms plus a regulatory look-back period.
- Do you provide API access to generation history, and does the API response include enough data to reconstruct a compliance record? This is the critical integration question for agencies using a downstream DAM. Without API access to generation history, the DAM cannot automate compliance metadata injection.
- What is your contractual commitment to maintaining C2PA and IPTC support as standards evolve? Both standards are actively developed. C2PA released version 2.0 in 2024. IPTC 2025.1 is the latest in an ongoing series. A vendor that implements today’s version but has no contractual obligation to maintain compliance with future versions is a future liability.
AI Tool Compliance Audit Checklist
A printable matrix for auditing your current AI tool stack against IPTC 2025.1, C2PA, and audit trail requirements. Includes vendor question templates for procurement reviews.
The Gap Is Real — and Temporary
The compliance gap in the current AI image tool landscape is not a permanent condition. The C2PA specification is gaining adoption quickly: Adobe, Microsoft, and Google have all made public commitments, and the Content Credentials initiative has major platform support. IPTC 2025.1 is already shipping in some tools. The question is not whether the tools will catch up, but when, and what you do between now and then.
For agencies with August 2026 compliance deadlines, “when the tools improve” is not a sufficient answer. The gap between what your generation tools produce today and what regulators will require in six months is a gap that must be closed at the DAM layer, in your approval workflows, and in your vendor contracts—not by waiting for Midjourney to ship a metadata update.
The agencies that build DAM-layer compensating controls now will be the ones that can say yes to enterprise procurement checklists, satisfy insurance underwriters, and deliver assets that carry their compliance record with them through the entire distribution lifecycle. The agencies that wait for their tools to do the work will be scrambling when the enforcement notices arrive.
Key Takeaways
- Of the five most-used AI image tools, only Adobe Firefly satisfies all five compliance criteria out of the box. Midjourney satisfies none.
- DALL‑E adds C2PA Content Credentials but has incomplete IPTC coverage and an API-mediated (not file-embedded) audit trail. Score: 3/5.
- ComfyUI / Stable Diffusion has excellent generation parameter logging in PNG tEXt chunks but no IPTC or C2PA support by default. Score: 2/5.
- Google’s SynthID watermarking is technically impressive but does not satisfy regulatory metadata requirements. C2PA is planned but not yet available. Score: 2/5.
- Even compliant tools lose their metadata when assets pass through non-compliant DAMs, messaging apps, or export pipelines. The DAM layer is where compliance is ultimately won or lost.
- Three strategies close the gap: provenance-aware DAM, metadata enrichment workflows, and external audit trail logging. The most scalable is the first.
Close the Compliance Gap at the DAM Layer
Numonic captures generation metadata from any AI tool—Midjourney, DALL‑E, ComfyUI, Firefly—and translates it into IPTC 2025.1 fields and C2PA manifests automatically. No manual tagging. No metadata loss on export.
See How It Works