You have generated an image using ComfyUI with a custom workflow, a specific LoRA, and a prompt you spent an hour refining. Now you want to share it. The image file contains everything — your complete workflow graph, every parameter, the model names, the seed. Anyone who opens it in a metadata viewer can see your entire creative process. So you strip the metadata before sharing.
But the EU AI Act requires that AI-generated content published in the EU carry machine-readable disclosure metadata from August 2, 2026. California's SB 942 requires similar disclosures. Stripping metadata for privacy removes the compliance trail that regulations mandate. You cannot have full privacy and full provenance simultaneously. This is not a feature gap — it is a fundamental tension that the architecture must resolve.
The Forces at Work
The paradox emerges from four legitimate but conflicting concerns:
- Creative privacy: Artists protect their techniques. A custom LoRA, a refined prompt, a novel workflow pattern — these represent competitive advantage or personal creative identity. Revealing them in shared images is the equivalent of publishing proprietary source code with every compiled binary. The desire to strip metadata is rational and legitimate.
- Regulatory compliance: The EU AI Act Article 50 and California SB 942 require that AI-generated content carry machine-readable markers indicating its synthetic origin. The specific metadata fields (IPTC 2025.1 AI fields, C2PA manifests) encode provenance information that must survive the export process. Stripping all metadata violates these requirements.
- Attribution and credit: When sharing work in a portfolio or delivering to clients, creators often want attribution metadata preserved — their name, their studio, the creative tool used. This is a subset of the full metadata that serves a different purpose than either privacy or compliance.
- Operational auditability: Internal teams need full provenance records for quality control, consistency tracking, and institutional knowledge. An archive of the studio's complete generation history — including all parameters — is an operational asset. But this archive should never leak externally.
The Problem
The four concerns create a 2x2 matrix that no single export setting satisfies:
The Privacy-Provenance Matrix
| Export Context | Privacy Need | Compliance Need | Conflict |
|---|---|---|---|
| Social media sharing | Strip creative process | Preserve AI disclosure | Direct contradiction |
| Client delivery | Strip proprietary methods | Preserve attribution | Partial overlap |
| Portfolio display | Strip generation details | Preserve credit | Moderate tension |
| Internal archive | None (full access) | Full audit trail | No conflict |
The first row is the crux. When sharing AI-generated images on social media, the artist wants to strip all generation metadata (prompts, models, parameters) for creative privacy. But compliance regulations require that the image carry machine-readable AI disclosure. These two requirements directly contradict each other — unless the architecture distinguishes between types of metadata and handles them separately.
Most tools today offer a binary choice: share with all metadata or strip all metadata. This forces users to choose between privacy and compliance. Some artists choose privacy, stripping metadata and risking non-compliance; others choose compliance, leaving metadata intact and sacrificing creative privacy. Neither choice is correct, because the binary itself is the failure. The right question is which metadata should travel with each export context, and answering it requires treating metadata as a composable structure, not an all-or-nothing blob.
The Solution: Context-Aware Metadata Profiles
The resolution is to decompose metadata into categories that can be independently controlled per export context. This is the privacy-tiered export pattern applied specifically to the compliance tension:
Metadata Categories
Not all metadata serves the same purpose. Separating it into categories enables fine-grained control:
- AI disclosure fields: Machine-readable markers indicating synthetic origin — the IPTC 2025.1 `digitalSourceType` and `softwareAgent` properties, and C2PA Content Credentials. These are the fields that regulations require. They disclose that AI was used, not how.
- Generation parameters: Prompts, seeds, model names, workflow graphs, LoRA references, ControlNet configurations. These are the creative-process details that artists want to protect. They describe how the image was made, not just that it was AI-generated.
- Attribution fields: Creator name, studio, copyright notice, licensing terms. These serve credit and ownership purposes. They identify who made the work.
- Technical fields: Resolution, color space, bit depth, GPS coordinates (from mobile workflows). These are operational details that may or may not be relevant depending on context.
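The category split above can be sketched as a key-to-category map with a strict default. This is a minimal illustration, not a real field registry: the ComfyUI PNG text keys ("prompt", "workflow") and the IPTC property names are drawn from the text, while the remaining keys are representative stand-ins for the corresponding EXIF/XMP/IPTC fields.

```python
from enum import Enum

class Category(Enum):
    AI_DISCLOSURE = "ai_disclosure"
    GENERATION = "generation"
    ATTRIBUTION = "attribution"
    TECHNICAL = "technical"

# Illustrative mapping from flat metadata keys to categories.
FIELD_CATEGORIES = {
    "digitalSourceType": Category.AI_DISCLOSURE,
    "softwareAgent": Category.AI_DISCLOSURE,
    "prompt": Category.GENERATION,
    "workflow": Category.GENERATION,
    "seed": Category.GENERATION,
    "model": Category.GENERATION,
    "creator": Category.ATTRIBUTION,
    "copyright": Category.ATTRIBUTION,
    "gps": Category.TECHNICAL,
    "colorSpace": Category.TECHNICAL,
}

def categorize(metadata: dict) -> dict:
    """Group a flat metadata dict by category. Unknown keys default to
    GENERATION, so an unrecognized field is stripped rather than leaked."""
    grouped = {c: {} for c in Category}
    for key, value in metadata.items():
        grouped[FIELD_CATEGORIES.get(key, Category.GENERATION)][key] = value
    return grouped
```

The fail-closed default matters: a new metadata key introduced by a tool update should be treated as potentially proprietary until someone classifies it.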
Export Profiles as Policy
Each export context maps to a profile that specifies which metadata categories to include, strip, or transform:
Export Profiles by Context
| Category | Social Share | Client Delivery | Portfolio | Archive |
|---|---|---|---|---|
| AI disclosure | Preserve | Preserve | Preserve | Preserve |
| Generation params | Strip | Strip | Strip | Preserve |
| Attribution | Optional | Preserve | Preserve | Preserve |
| Technical | Strip GPS | Preserve | Strip GPS | Preserve |
The critical insight is that AI disclosure and generation parameters are independent. You can tell the world “this image was generated by AI” without revealing which model, which prompt, or which workflow produced it. The disclosure satisfies the regulatory requirement. The parameter stripping preserves creative privacy. Both concerns are honored simultaneously.
The Provenance Ledger
Regardless of what metadata leaves the system with any particular export, the original provenance record persists internally. The DAM maintains a complete audit trail: every generation parameter, every export event (what was stripped, when, by whom, for what context), and every compliance decision. This internal ledger satisfies the auditability requirement without leaking details externally.
If a regulatory authority requests proof that an image was AI-generated, the system can produce the full provenance chain from the internal ledger — even if the exported image carries only the minimal disclosure fields. The ledger provides accountability; the export profiles provide privacy.
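One way to make that ledger tamper-evident is to hash-chain each export event to its predecessor. This is a sketch under stated assumptions — an in-memory list standing in for durable storage, and an invented record shape — not a production audit system:

```python
import datetime
import hashlib
import json

class ProvenanceLedger:
    """Append-only export log; a real DAM would persist these records."""

    def __init__(self):
        self._events = []

    def record_export(self, asset_id: str, profile: str, stripped_fields) -> str:
        event = {
            "asset_id": asset_id,
            "profile": profile,
            "stripped": sorted(stripped_fields),
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }
        # Chain each event's hash to the previous one: rewriting history
        # invalidates every later hash.
        prev_hash = self._events[-1]["hash"] if self._events else ""
        payload = prev_hash + json.dumps(
            {k: event[k] for k in ("asset_id", "profile", "stripped")},
            sort_keys=True,
        )
        event["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self._events.append(event)
        return event["hash"]

    def audit_trail(self, asset_id: str) -> list:
        """Full export history for one asset, for regulator requests."""
        return [e for e in self._events if e["asset_id"] == asset_id]
```

The exported file carries only the minimal disclosure; the chain of what was stripped, when, and under which profile stays inside the system.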
Consequences
- Metadata becomes structured: The system must parse, categorize, and selectively strip metadata fields — not treat metadata as a monolithic blob. This requires understanding the semantics of each metadata format (EXIF, PNG text chunks, XMP, IPTC, C2PA manifests) well enough to separate disclosure fields from generation parameters. Understanding where metadata lives in each file format is the prerequisite.
- C2PA re-signing workflow: If an image carries a C2PA Content Credential (a cryptographic manifest), modifying any metadata invalidates the signature. Stripping generation parameters and preserving compliance fields requires re-signing the C2PA manifest after the stripping operation. This adds computational cost and requires access to signing keys.
- Export becomes an explicit operation: Rather than simply copying a file, export becomes a transformation pipeline — read the original, apply the profile-specific metadata rules, re-sign if necessary, and produce the output. This changes the user experience: export is a deliberate choice with visible consequences, not a transparent file copy.
- Regulatory future-proofing: As regulations evolve, export profiles can be updated to include new required fields. The architecture separates policy (which fields to include per context) from mechanism (how to read, strip, and write metadata). Policy changes do not require code changes — only profile configuration updates.
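Put together, the export pipeline reads metadata, applies the profile's rules, and re-signs where a C2PA manifest is present. The sketch below uses a flat dict in place of real EXIF/XMP/PNG-chunk parsing, and a stub in place of actual C2PA signing (which needs the studio's signing keys and a C2PA SDK); the rule format and key names are assumptions for illustration:

```python
def apply_rules(metadata: dict, rules: dict) -> dict:
    """Mechanism: keep only keys whose rule says 'preserve'."""
    return {k: v for k, v in metadata.items() if rules.get(k) == "preserve"}

def resign_stub(metadata: dict) -> dict:
    """Stand-in for C2PA re-signing. A real pipeline would rebuild the
    manifest over the surviving fields and sign it with the studio key."""
    out = dict(metadata)
    out["c2pa.manifest"] = "<re-signed manifest placeholder>"
    return out

def export(metadata: dict, rules: dict) -> dict:
    """Export as an explicit transformation: strip, then re-sign.
    Stripping any field invalidates an existing C2PA signature, so the
    presence of a manifest in the input forces the re-signing step."""
    surviving = apply_rules(metadata, rules)
    if "c2pa.manifest" in metadata:
        surviving = resign_stub(surviving)
    return surviving
```

The deliberate-operation consequence falls out naturally: `export` is a function with inputs, a policy, and an auditable output, not a file copy.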
Related Patterns
- Privacy-Tiered Export provides the framework for defining context-specific export profiles.
- Metadata Persistence explains why naive stripping attempts often fail — metadata lives in more places than most tools check.
- Cross-Tool Provenance shows how the provenance ledger extends across tool boundaries.
- EU AI Act Article 50 details the specific compliance requirements that drive the disclosure preservation side of the paradox.
