
The 90-Day AI Compliance Roadmap for Creative Agencies

August 2026 is closer than most agency leaders realize. EU AI Act Article 50 and California SB 942 enforcement begins in months, not years. This week-by-week roadmap gives your agency a concrete plan to achieve compliance before the deadline—without disrupting the creative work that pays the bills.

February 2026 · 11 min read · Numonic Team

The agencies that will win enterprise clients after August 2026 are the ones that started their compliance programs before they had to. That window is still open—but not for long. EU AI Act Article 50 transparency obligations and California SB 942 disclosure requirements both reach enforcement this year. If your agency produces AI-assisted creative work and you do not yet have a compliance program, this 90-day roadmap is your starting point.

Disclaimer

This article is for informational purposes only and does not constitute legal advice. Numonic is not a law firm and does not provide legal counsel. Laws and regulations regarding AI-generated content vary by jurisdiction and are subject to change. You should conduct your own research and due diligence, and consult with qualified legal counsel in your jurisdiction before making compliance decisions.

Ninety days is enough time to build a defensible compliance program if you structure the work correctly. It is not enough time to improvise. The agencies we see scramble in the final weeks before a regulatory deadline are those that treated compliance as a single project rather than a phased operational change. This roadmap is built around three phases that mirror how successful compliance programs actually work: foundation, implementation, and optimization.

Before beginning, a calibration note: this roadmap applies to agencies deploying AI tools to produce content for clients or distribution. If your agency is in the EU or serves EU clients, EU AI Act Article 50 applies directly. If your agency operates in California or distributes AI-generated content to California residents, SB 942 applies. Many agencies face both simultaneously. The roadmap below addresses both regulatory frameworks with a unified workflow, since the operational requirements substantially overlap.

Month 1: Foundation (Days 1–30)

The first month has one job: establish an accurate picture of where you are. Compliance programs built on assumptions about tool usage, content volumes, or team behavior consistently fail at the implementation stage because the assumptions turn out to be wrong. Month 1 replaces assumptions with data.

Weeks 1–2: Audit Current AI Usage Across All Teams

The AI tool audit is the most important single exercise in the entire compliance program, and it is almost always more revealing than agency leaders expect. The gap between sanctioned AI tools and the tools teams actually use is consistently wide: research across agencies repeatedly finds that employees use more AI tools than their employers know about—what compliance professionals call shadow AI.

Conduct your audit in two parts. First, send a structured survey to every team member who touches creative production: designers, copywriters, art directors, motion designers, video editors, strategists. Ask specifically: which AI tools do you use in your work, how often, and for what type of outputs? Make clear the survey is anonymous and the goal is to build a complete picture, not to audit individual behavior.

Second, review your tool and software spend. Check company card statements, expense reports, and software license records for the past six months. AI tool subscriptions are often expensed individually and do not appear in centralized IT inventories.

For each tool identified, document four things:

  • Output types: Does this tool produce images, video, audio, text, or combinations?
  • Metadata produced: Does the tool embed C2PA manifests, IPTC AI fields, or any provenance metadata in its output files?
  • Client exposure: Do outputs from this tool go to clients or into distributed content?
  • Compliance risk tier: Green (metadata-producing), Yellow (partial metadata), or Red (no metadata).
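The four fields above map naturally onto a structured inventory record. A minimal sketch in Python, with illustrative field names and an example tiering rule that treats "partial metadata" as embedding only one of the two standards; the tool entries shown are placeholders, not verified capabilities:

```python
from dataclasses import dataclass

# Illustrative schema for one row of the AI tool inventory.
# Field names and the tiering rule are examples, not a standard.
@dataclass
class ToolRecord:
    name: str
    output_types: list        # e.g. ["image", "video"]
    embeds_c2pa: bool         # tool writes C2PA manifests natively
    embeds_iptc_ai: bool      # tool writes IPTC AI metadata fields
    client_facing: bool       # outputs reach clients or distribution

    @property
    def risk_tier(self) -> str:
        """Green = full metadata, Yellow = partial, Red = none."""
        if self.embeds_c2pa and self.embeds_iptc_ai:
            return "Green"
        if self.embeds_c2pa or self.embeds_iptc_ai:
            return "Yellow"
        return "Red"

# Placeholder entries -- verify each tool's actual metadata
# behavior during your own audit before relying on a tier.
inventory = [
    ToolRecord("Example Tool A", ["image"], True, True, True),
    ToolRecord("Example Tool B", ["image"], False, False, True),
]
for tool in inventory:
    print(f"{tool.name}: {tool.risk_tier}")
```

A spreadsheet works just as well at small scale; the point is that every tool gets the same four fields and a documented tier.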

The tool compliance gap analysis published separately covers how the five most popular AI image tools—Midjourney, DALL-E, Stable Diffusion, Adobe Firefly, and Google Imagen—perform against compliance requirements. Use it as a reference when categorizing tools in your audit.

Week 3: Adopt a Governance Policy

With your audit complete, you have the data to draft a governance policy that reflects your agency's actual AI usage rather than an idealized version of it. A governance policy that bans tools your team uses every day creates compliance theater, not compliance. The goal is a policy that your team will actually follow.

Your policy should cover four areas:

  • Approved tools list: Which AI tools are sanctioned for client work, which require approval, and which are prohibited for specific output types.
  • Content classification framework: How outputs are categorized by disclosure requirements (informational, promotional, deepfake-adjacent).
  • Disclosure requirements by channel: Specific language and placement requirements for social, display, editorial, email, video, and client deliverables.
  • Documentation obligations: What records must be kept, for how long, and in what format.
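The channel-specific disclosure requirements lend themselves to a small shared lookup rather than ad-hoc copy. A sketch with placeholder wording; substitute the language your counsel approves for each market:

```python
# Example of channel-keyed disclosure templates pulled from the
# governance policy. The wording below is placeholder text, not
# vetted legal language.
DISCLOSURES = {
    "social":  "Created with AI assistance.",
    "display": "Ad imagery generated with AI.",
    "video":   "This video contains AI-generated content.",
    "email":   "Some visuals in this email were generated with AI.",
}

def disclosure_for(channel: str) -> str:
    """Return the approved disclosure line for a channel, or fail
    loudly so unapproved channels never ship without language."""
    try:
        return DISCLOSURES[channel]
    except KeyError:
        raise ValueError(
            f"No disclosure template for channel '{channel}'; "
            "add one to the governance policy before publishing."
        )
```

Failing loudly on an unknown channel is deliberate: a missing template should block publication, not silently produce an undisclosed asset.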

AI Compliance Audit Checklist

A structured checklist for auditing your agency's AI tool stack, content classification, disclosure workflows, and documentation against EU AI Act Article 50 and SB 942 requirements.


Week 4: Assign Roles

Compliance programs without named owners fail. In the fourth week, assign two roles that will carry the program through implementation and beyond:

Governance Lead: The person accountable for the compliance program overall. In agencies under 50 people, this is typically the managing director or a senior producer. In larger agencies, a dedicated operations or legal role. The Governance Lead owns the policy, manages the quarterly review cadence, and is the first point of contact for any client compliance questions.

Metadata Champion: The person responsible for the technical implementation of metadata workflows. This is typically a senior producer, a DAM administrator, or a technically inclined creative lead. The Metadata Champion owns the ingestion pipeline, ensures tools and workflows are configured correctly, and is the go-to resource when team members have questions about specific assets.

These are not full-time roles in most agencies. They are designated responsibilities with allocated time—typically two to four hours per week during the implementation phase, dropping to one to two hours per week during steady-state operation.

Month 2: Implementation (Days 31–60)

Month 2 is where the policy becomes operational. The foundation work of Month 1 gives you a complete picture of what needs to change. Month 2 is the work of actually changing it.

Weeks 5–6: Configure Approved Tools and Metadata Workflows

The metadata workflow is the technical core of your compliance program. It answers the question: how does an AI-generated asset acquire the provenance record it needs before it reaches a client or goes live in a campaign?

For tools in your Green tier (those producing C2PA manifests or IPTC AI fields natively), your workflow needs to ensure that metadata survives the journey from generation to delivery. Many standard compression tools, CDN pipelines, and social media upload APIs strip file metadata by default. Map the export path for each Green-tier tool and identify where metadata is at risk of being removed. Configure your export and delivery pipelines to preserve or re-embed metadata after any stripping step.
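One pragmatic way to verify this is a before/after check on a delivered file. The sketch below searches a JPEG for the XMP namespace URI that marks an embedded XMP packet (per Adobe's XMP specification, XMP in JPEG lives in an APP1 segment beginning with this URI). It is a smoke test, not a full C2PA validation, and the function names are our own:

```python
# Quick pass/fail check that an exported JPEG still carries an
# XMP packet after the delivery pipeline has processed it. XMP
# is stored in a JPEG APP1 segment that begins with this
# namespace URI, so a plain byte search is enough for a smoke
# test (it does not validate C2PA manifests).
XMP_MARKER = b"http://ns.adobe.com/xap/1.0/"

def has_xmp(path: str) -> bool:
    with open(path, "rb") as f:
        return XMP_MARKER in f.read()

def pipeline_strips_metadata(original: str, delivered: str) -> bool:
    """True if the original carried XMP but the delivered file
    does not, i.e. some step in the pipeline stripped it."""
    return has_xmp(original) and not has_xmp(delivered)
```

Run this against a sample asset at each stage of the export path (local export, CDN, platform upload) to find exactly which step strips metadata.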

For tools in your Yellow and Red tiers, you need an ingestion protocol: a defined process for capturing provenance context at the moment an AI-generated asset enters your managed environment. At minimum, this means recording the tool used, the generation date, the prompt or generation context (where practical), and the operator (which team member generated the asset). This information becomes the seed data for IPTC AI field embedding.
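At its simplest, the ingestion record is an append-only log entry capturing the minimum fields named above. A sketch, assuming a JSONL log file (the file name and field names are illustrative, not a standard):

```python
import datetime
import json

# Minimal ingestion record for an AI-generated asset entering the
# managed environment. Fields mirror the minimum named above:
# tool, generation date, prompt/context, and operator.
def record_ingestion(asset_path, tool, operator, prompt_context="",
                     log_path="ai_provenance_log.jsonl"):
    entry = {
        "asset": asset_path,
        "tool": tool,
        "operator": operator,
        "prompt_context": prompt_context,
        "generated_at": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
    }
    # Append one JSON object per line so the log stays greppable
    # and each write is independent of the last.
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry
```

Whether this lives in a script, a form, or a DAM field matters less than the habit: the record is created at the moment of generation, not reconstructed at delivery.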

A digital asset management system with AI provenance support automates this process at scale. Manual ingestion protocols work for low-volume shops but break down quickly as AI usage grows. If your team generates more than fifty AI assets per week across all projects, a manual ingestion protocol will not be sustainable.

Week 7: Update Client Contracts

Client contracts are the commercial layer of compliance. Enterprise clients are increasingly inserting AI compliance warranties into their Master Services Agreements, requiring agencies to represent that their AI-generated deliverables meet applicable regulatory requirements. Agencies that cannot make this representation face contract renegotiation at best and liability exposure at worst.

Work with your legal counsel to add three provisions to your standard client contract:

  • AI tool disclosure clause: A representation that AI-generated content will be identified as such in deliverables, with provenance documentation provided on request.
  • Compliance warranty: A warranty that AI-generated deliverables meet the disclosure requirements of applicable law in the client's distribution markets, and that the agency maintains internal records supporting this representation.
  • Downstream distribution obligations: A clause clarifying which party is responsible for disclosure when the client further distributes agency-produced AI content. This directly addresses the Article 50(5) information chain requirement.

The AI contract clauses guide covers specific language for each of these provisions, including negotiation guidance for situations where enterprise clients push back on the warranty scope.

Week 8: Train Your Team

Training is the step most agencies underinvest in, and the gap shows up during compliance audits. A policy that lives in a document but has not been communicated is not a compliance defense. Regulators look for evidence that the policy was understood and applied, not just written.

Conduct a structured training session covering four areas:

  • What triggers disclosure: Which types of AI outputs require disclosure under EU AI Act Article 50 and SB 942, and how to classify edge cases (AI-assisted versus AI-generated, substantially modified versus minimally modified).
  • How to apply disclosure templates: Walk through the channel-specific disclosure language from the governance policy with real examples from recent projects.
  • The metadata ingestion workflow: A practical walkthrough of how team members are expected to document AI-generated assets at the point of generation, with live demonstration of the tool or form they will use.
  • What to do with third-party AI assets: When a client, contractor, or stock provider delivers an asset with no provenance documentation, what is the correct internal response?

Record the training session and maintain a log of attendees. If a compliance inquiry occurs, the training log is evidence that the agency made reasonable efforts to ensure team-wide understanding.

Month 3: Optimization (Days 61–90)

Month 3 is where theory meets reality. Your policy is written, your workflows are configured, and your team is trained. Now you find out what actually works when the program runs against real projects under real deadline pressure.

Weeks 9–10: Run a Compliance Audit on Recent Projects

Select five to ten recent client projects that involved AI-generated content. For each project, apply your compliance checklist retrospectively: does the project file contain provenance documentation for all AI-generated assets? Were disclosures applied correctly in deliverables? Is there a record of which team members generated which assets and with which tools?

This retrospective audit does two things. First, it identifies specific gaps in your workflow—steps that were skipped under deadline pressure, tools that produced assets outside the ingestion protocol, or disclosure language that was applied inconsistently. Second, it gives you baseline data for the compliance scorecard described later in this article.

Document every gap you find. For each gap, identify whether it reflects a policy problem (the policy was unclear), a workflow problem (the process was too cumbersome), or a behavior problem (the policy was clear but not followed). This distinction drives different responses.
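The provenance-documentation check above can be partially scripted if your ingestion protocol writes entries to a log. A sketch, assuming a JSONL log with one `{"asset": ...}` record per line (the log format is our assumption, not a standard):

```python
import json
import os

# Retrospective audit sketch: flag AI-generated assets in a
# project that have no entry in the provenance log. Assumes a
# JSONL log with one {"asset": ...} object per line.
def audit_project(asset_paths, log_path):
    logged = set()
    if os.path.exists(log_path):
        with open(log_path) as log:
            for line in log:
                logged.add(json.loads(line)["asset"])
    # Anything not in the log is a documentation gap to remediate.
    return [p for p in asset_paths if p not in logged]
```

Each path returned is a gap to classify as a policy, workflow, or behavior problem using the framework above.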

Week 11: Refine Workflows Based on Team Feedback

After the retrospective audit, convene a working session with the team members who used the new workflows most heavily. Ask specifically: where did the process create friction? What steps took longer than expected? Where did you find yourself making judgment calls that the policy did not cover?

The compliance workflow guide identifies the five friction points where agency workflows most commonly break down under compliance requirements, with specific redesign patterns for each. Use these as a reference when refining your Month 3 workflows.

Common refinements at this stage include:

  • Simplifying the metadata ingestion form to reduce required fields at the point of generation (capture the minimum required for compliance, not the maximum that would be ideal).
  • Adding compliance checkpoints to existing project management workflows (Asana, Monday.com, Notion) rather than requiring team members to use a separate compliance tool.
  • Clarifying edge cases in the content classification framework that team members encountered on real projects but the policy did not address.
  • Adjusting disclosure template language based on client or platform feedback received during the implementation phase.

Week 12: Establish a Quarterly Review Cadence

The final week of the 90-day roadmap is about making compliance sustainable. Regulations evolve, tools change, team members turn over, and client requirements shift. A compliance program that was current in February 2026 may have gaps by November 2026 without a structured review process.

Schedule a quarterly compliance review meeting in your calendar now. Each quarterly review should cover:

  • Review of the approved tools list: have new tools entered widespread use? Do any previously approved tools now have known compliance issues?
  • Review of the compliance scorecard: are key metrics trending in the right direction?
  • Policy update check: have there been any regulatory guidance updates, member state enforcement actions, or industry standard updates that require policy changes?
  • Team refresh: do any team members need a refresher on disclosure requirements, or has the team grown with new members who need training?

What “Done” Looks Like: The Compliance Scorecard

At the end of 90 days, you should be able to answer yes to each of the following questions. Together, these form the compliance scorecard that demonstrates a defensible program to regulators, clients, and procurement teams.

Foundation Scorecard

  • We have a complete inventory of every AI tool in use across all teams, including unsanctioned tools identified through the survey process.
  • Every tool in the inventory is classified by compliance risk tier (Green, Yellow, Red) with documented justification.
  • We have a written governance policy covering approved tools, content classification, disclosure requirements, and documentation obligations.
  • Named roles are assigned for Governance Lead and Metadata Champion, with written scope of responsibility.

Implementation Scorecard

  • Every AI-generated asset that enters our managed environment has a provenance record capturing tool, date, operator, and generation context.
  • Metadata is preserved through our standard export and delivery pipeline, with no stripping occurring at distribution.
  • Client contracts include AI tool disclosure, compliance warranty, and downstream distribution clauses.
  • All team members who touch creative production have completed compliance training, with a training log on file.

Optimization Scorecard

  • A retrospective compliance audit of recent projects has been completed, with gaps documented and remediation actions assigned.
  • Workflow refinements based on team feedback have been implemented and documented.
  • A quarterly compliance review cadence is scheduled on the agency calendar with named participants.
  • We can produce a compliance package for any AI-generated deliverable produced in the last 90 days within 24 hours of a client or regulator request.

The Competitive Advantage of Starting Now

The agencies beginning this work in February 2026 will enter the August 2026 enforcement window with a mature compliance program and 90 days of operational experience. The agencies that start in July 2026 will enter the enforcement window still figuring out their metadata workflow.

The compliance advantage extends beyond regulatory risk. Enterprise procurement processes are already changing. RFPs from Fortune 500 marketing departments are beginning to include AI governance questionnaires. Agencies that can provide a completed compliance scorecard, governance policy, and provenance documentation sample win those evaluations. Agencies that cannot are increasingly disqualified before the creative review begins.

The complete AI Content Compliance guide covers the full regulatory landscape behind this roadmap, including detailed analysis of EU AI Act Article 50 obligations, California SB 942 requirements, IPTC 2025.1 metadata standards, and C2PA technical implementation. The AI compliance audit checklist provides the structured worksheet for running the retrospective audit described in Month 3 of this roadmap.

Ninety days is enough time to build something that works. The question is whether you start the clock now or wait until the deadline is close enough to smell.

Key Takeaways

  • EU AI Act Article 50 and California SB 942 enforcement begins in August 2026. Agencies have roughly 90 days to build a defensible compliance program if they start now.
  • Month 1 focuses on foundation: a complete AI tool audit, governance policy adoption, and role assignment. Shadow AI discovery is typically the most revealing step.
  • Month 2 is implementation: configuring metadata workflows, updating client contracts with AI compliance provisions, and training every team member who touches creative production.
  • Month 3 is optimization: running a retrospective compliance audit on recent projects, refining workflows based on team feedback, and establishing a quarterly review cadence.
  • The compliance scorecard provides a concrete test of whether the program is defensible. If you can produce a provenance package for any AI-generated deliverable within 24 hours of a request, you are ready.
  • Compliance is increasingly a commercial advantage, not just a regulatory requirement. Enterprise clients are evaluating AI governance capability before awarding contracts.

Start Your 90-Day Compliance Clock Today

Numonic automates the metadata workflows, provenance tracking, and compliance documentation at the center of this roadmap—so your team can stay focused on creative work while the compliance infrastructure runs in the background.

See How It Works