The EU AI Act gets the international headlines. But if your agency creates AI-generated content for US audiences—advertising, branded social, political campaigns, news media—two California statutes are about to matter just as much. SB 942 (the California AI Transparency Act) and AB 853 both take effect on August 2, 2026. Violations start at $5,000 per day. This article explains exactly what each law requires, how they differ from the EU framework, and what your agency must do before the deadline.
Disclaimer
This article is for informational purposes only and does not constitute legal advice. Numonic is not a law firm and does not provide legal counsel. Laws and regulations regarding AI-generated content vary by jurisdiction and are subject to change. You should conduct your own research and due diligence, and consult with qualified legal counsel in your jurisdiction before making compliance decisions.
California is the world's fifth-largest economy and the home of nearly 40 million consumers. Any agency or platform that generates revenue touching California residents is subject to these laws—regardless of where your business is headquartered. The reach is broader than most compliance teams currently assume.
This is Article 3 in the AI Content Compliance cluster. If you have not yet read the EU AI Act Article 50 explainer or the governance bottleneck analysis, those provide useful context for the compliance landscape as a whole. This article focuses specifically on the California framework and the practical differences that change your compliance approach for US content.
SB 942: What the California AI Transparency Act Requires
Senate Bill 942, signed by Governor Newsom in September 2024 and effective August 2, 2026, creates disclosure obligations for large AI system providers and their deployers. The law has two primary mechanisms: a detection tool requirement and a content disclosure requirement.
The Detection Tool Requirement
SB 942 is distinctive among AI regulations globally in what it demands of providers: any generative AI system that creates text, images, audio, or video content must make a publicly accessible detection tool available. This tool must allow users to determine whether content was produced by that AI system.
The implication for agencies is significant. If you use an AI tool that does not have a public detection mechanism, you're relying on a provider that is not in compliance with California law. That dependency becomes your liability at delivery: if a client or regulator cannot verify AI origin through an official detection tool, the burden of proof falls on the deployer—your agency.
The Content Disclosure Requirement
For content-generating AI systems with more than one million monthly users, SB 942 requires that AI-generated content carry a clear and conspicuous disclosure. The law specifies two acceptable disclosure mechanisms:
- Latent disclosure: A machine-readable signal embedded in the content itself—metadata, watermark, or cryptographic content credential—that persists through distribution.
- Manifest disclosure: A visible label, caption, or overlay stating that the content was created using AI tools, displayed in a manner that a reasonable person would notice.
For agencies, the latent disclosure requirement maps directly onto IPTC 2025.1 and C2PA standards. A properly tagged asset with the four IPTC AI disclosure fields and a valid C2PA manifest satisfies the machine-readable signal requirement. What SB 942 adds that the EU AI Act does not explicitly require at the same level is the detection tool availability—the infrastructure to verify the disclosure, not just embed it.
Who SB 942 Covers
SB 942 applies to developers of “large AI systems”—defined as generative AI systems deployed commercially to more than one million California users. This directly covers Midjourney, Adobe Firefly, OpenAI (DALL-E, ChatGPT), Stability AI, and similar platforms. It does not apply to open-source models run locally or to AI systems used exclusively for internal business operations with no public output.
However, agencies sit in an important middle position. When you use a large AI system to generate content for client delivery, you become a deployer under the law. The provider must furnish the detection infrastructure; the deployer must ensure the content carries the required disclosure before it reaches the public.
AB 853: AI Disclosure in Advertising and Political Content
Assembly Bill 853 operates in a more targeted domain than SB 942 but carries equally sharp teeth. Where SB 942 creates a broad framework for AI-generated content generally, AB 853 focuses on two high-stakes categories: political advertising and commercial synthetic media.
Political Campaign Content
AB 853 extends California's existing deepfake election law (AB 602 and AB 730, passed in 2019) to cover the current generation of generative AI tools. Any AI-generated or AI-materially-altered depiction of a candidate, elected official, or political figure used in campaign advertising requires an explicit, prominent disclosure. The disclosure must be audio-visual where the content is audio-visual, and must appear before, during, or immediately after the content.
For agencies that handle political advertising—even locally, for city council campaigns or ballot measures—this creates a zero-tolerance requirement. There is no materiality threshold. Any AI involvement in creating or modifying a political figure's likeness, voice, or statement triggers the disclosure obligation.
Commercial Synthetic Media
For commercial advertising, AB 853 requires disclosure when synthetic media is used to represent real, identifiable people. This specifically targets:
- AI-generated likenesses of real individuals used in product advertising without their consent
- Voice synthesis of real people's voices for commercial endorsements
- Digitally altered video that makes real people appear to say or do things they did not say or do
If your agency uses AI to create a synthetic spokesperson that resembles, or could be confused with, a real person, AB 853 requires explicit disclosure on the advertising creative. This is a significant expansion from the 2019 deepfake laws, which focused primarily on electoral contexts.
How SB 942 Differs from the EU AI Act
Compliance teams with experience building EU AI Act programs may assume SB 942 is simply the American equivalent. The frameworks share goals but differ substantially in mechanism, burden allocation, and enforcement posture.
EU AI Act vs SB 942 vs AB 853: Key Differences
| Dimension | EU AI Act Art. 50 | SB 942 | AB 853 |
|---|---|---|---|
| Scope | All AI-generated content distributed in EU | AI systems with 1M+ CA users | Political ads & synthetic commercial media |
| Who is covered | Providers and deployers equally | Providers (detection tools) + Deployers (disclosure) | Advertisers, agencies, campaigns |
| Penalties | €35M or 7% global turnover | $5,000/violation/day | Up to $1,000/violation + injunctive relief |
| Disclosure type | Latent metadata (IPTC/C2PA) | Latent + detection tool availability | Visible/audio-visual label required |
| Detection burden | On deployers to embed metadata | On providers to offer detection tools | N/A — visible disclosure mandated |
Where the Burden Falls
The most meaningful difference between SB 942 and the EU AI Act is where primary compliance responsibility sits. The EU framework places the heaviest burden on deployers: you must embed the required metadata, maintain audit trails, and ensure disclosed content reaches its intended audience without metadata stripping. The provider's obligation is secondary.
SB 942 inverts this in one key way: it places a direct obligation on the AI provider to make detection infrastructure publicly available. This means California is betting that verifiable disclosure requires more than just metadata in a file—it requires a live verification mechanism. For agencies, this shifts some risk to your tool vendors. If Midjourney cannot point to a public SB 942-compliant detection tool by August 2, 2026, using Midjourney for California-distributed content carries provider-side risk that may flow through to your client contracts.
The EU AI Act and SB 942 share a goal but differ in their theory of enforcement. The EU trusts metadata. California does not trust metadata alone — it requires a verification infrastructure that anyone can use to check the claim.
— Numonic compliance research, February 2026
Enforcement Posture
The EU AI Act is enforced through national market surveillance authorities with a graduated risk-based framework. High-risk AI systems face more scrutiny; general-purpose systems like image generators fall under lighter requirements in most scenarios. Enforcement is expected to be methodical and regulatory in character.
California's approach is more immediate. SB 942 violations can be pursued by the California Attorney General, district attorneys, and city attorneys. The per-day structure of the $5,000 penalty means that a single undisclosed campaign running for ninety days could generate $450,000 in liability before anyone files a complaint. There is no minimum harm threshold and no safe harbor for good-faith compliance attempts.
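The per-day structure compounds quickly. As a back-of-envelope sketch (the function and its violation-counting assumption are illustrative, not a legal calculation):

```python
def sb942_exposure(days_undisclosed: int, violations: int = 1,
                   penalty_per_day: int = 5_000) -> int:
    """Rough SB 942 liability estimate: $5,000 per violation per day.

    Assumes each undisclosed asset counts as a separate violation;
    how violations are actually counted is a question for counsel,
    not this sketch.
    """
    return penalty_per_day * days_undisclosed * violations

# A single undisclosed campaign asset running ninety days:
print(sb942_exposure(90))  # 450000
```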
Practical Compliance Steps for Agencies
Compliance with California's AI transparency laws is not a one-time audit. It requires changes to workflow, vendor selection, metadata infrastructure, and client communication. Here are four concrete actions to take before August 2, 2026.
Step 1: Map Which Content Reaches California Audiences
SB 942 and AB 853 apply to content distributed to California residents—not content created in California. If you run national campaigns, serve e-commerce brands with California customer bases, or manage social media accounts with California followers, you are in scope regardless of your agency's location.
Begin with an audience mapping exercise: for each active client, identify whether California residents are a material portion of the target audience. Any client with national distribution, any brand with California brick-and-mortar locations, and any campaign running on Meta or Google with geo-targeting that includes California should be flagged for full SB 942 compliance.
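The mapping pass can be as simple as flagging each client against those three criteria. A minimal sketch, using a hypothetical roster structure (field names are illustrative, not from any real CRM):

```python
from dataclasses import dataclass

# Hypothetical client record -- adapt field names to your own CRM.
@dataclass
class Client:
    name: str
    national_distribution: bool = False
    ca_retail_locations: bool = False
    ca_geo_targeting: bool = False

def needs_sb942_review(client: Client) -> bool:
    """Flag a client for full SB 942 compliance review if any of the
    three audience criteria from the mapping exercise applies."""
    return (client.national_distribution
            or client.ca_retail_locations
            or client.ca_geo_targeting)

roster = [
    Client("Acme Outdoor", national_distribution=True),
    Client("Local Bistro"),  # regional, no California exposure
    Client("Shopline", ca_geo_targeting=True),
]
flagged = [c.name for c in roster if needs_sb942_review(c)]
print(flagged)  # ['Acme Outdoor', 'Shopline']
```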
Step 2: Implement Provenance Metadata at Ingestion
The fastest path to SB 942 latent disclosure compliance is the same path that satisfies EU AI Act Article 50: IPTC 2025.1 and C2PA metadata injected at the point of asset ingestion. The four IPTC AI fields—Iptc4xmpExt:DigitalSourceType, plus:ModelReleaseStatus, Iptc4xmpExt:ArtworkOrObject, and the AI generation flag—must be populated and must survive the full delivery pipeline.
The critical implementation requirement: metadata injection must happen before any export or delivery step. If you inject metadata only on final export, you have no protection against intermediate distribution—a creative brief attachment, a Slack preview, a Dropbox share. Inject at ingestion, preserve through every handoff.
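One way to implement ingestion-side injection is with ExifTool, which can write the XMP-iptcExt namespace that carries DigitalSourceType. The sketch below only builds the command rather than executing it; the assumption that ExifTool sits in your ingestion pipeline, and the file path shown, are illustrative:

```python
# IPTC digital source type code for AI-generated media
# (trainedAlgorithmicMedia, per the IPTC NewsCodes vocabulary).
AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def build_ingest_tag_command(asset_path: str) -> list[str]:
    """Build an ExifTool invocation that marks an asset as AI-generated
    at ingestion. Only DigitalSourceType is shown; the remaining IPTC AI
    fields would be added the same way once your schema mapping is set."""
    return [
        "exiftool",
        f"-XMP-iptcExt:DigitalSourceType={AI_SOURCE_TYPE}",
        "-overwrite_original",  # write in place during ingestion
        asset_path,
    ]

cmd = build_ingest_tag_command("incoming/hero_v1.png")
# In the real pipeline: subprocess.run(cmd, check=True)
print(cmd[0], cmd[-1])
```

Injecting at this stage means every downstream copy, from Slack previews to Dropbox shares, already carries the latent disclosure.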
Step 3: Create California-Specific Disclosure Language
For content categories covered by AB 853—political advertising and synthetic commercial media—latent metadata alone is not sufficient. You need visible disclosure language. Standard templates to prepare:
- Static image disclosure: A visible label reading “Created with AI” or “AI-generated imagery” positioned consistently within the creative frame.
- Video and audio disclosure: An audible and visible statement at the beginning and end of the content for political ads; beginning only is acceptable for commercial content.
- Digital ad unit disclosure: Platform-native disclosure labels where available (Meta's AI label toggle, Google's AI content disclosure), supplemented by creative-embedded disclosure where platform labels are optional.
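These templates can be encoded as a small lookup so the right treatment is applied automatically at export. A minimal sketch; the wording and placement defaults are illustrative, and final disclosure language belongs to counsel:

```python
def disclosure_plan(content_type: str, political: bool = False) -> dict:
    """Map a creative to its visible-disclosure treatment.
    Wording and placements here are illustrative defaults, not legal text."""
    if content_type == "static_image":
        return {"text": "AI-generated imagery", "placements": ["in-frame"]}
    if content_type in ("video", "audio"):
        # Per the templates above: political ads carry the statement at
        # the beginning AND end; commercial content at the beginning only.
        placements = ["start", "end"] if political else ["start"]
        return {"text": "Created with AI", "placements": placements}
    raise ValueError(f"unsupported content type: {content_type!r}")

print(disclosure_plan("video", political=True)["placements"])  # ['start', 'end']
```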
Step 4: Document Your Compliance Process for Audit
California's enforcement posture means that when a complaint is filed, your defense is your documentation. Maintain a per-campaign compliance record that includes:
- Which AI tools were used to generate or materially alter each piece of content
- Evidence that the AI provider has a publicly accessible detection tool (screenshot or URL, dated)
- The metadata schema applied to each delivered asset
- The disclosure language used and where it appeared in the creative
- Sign-off from the client acknowledging AI content disclosure requirements
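A record like this can live as a structured object in your project tracker. A minimal sketch mirroring the five items above; field names are illustrative and should be mapped onto your own DAM or tracker:

```python
from dataclasses import dataclass, field

@dataclass
class CampaignComplianceRecord:
    """Per-campaign audit record covering the five documentation items."""
    campaign_id: str
    ai_tools_used: list[str] = field(default_factory=list)
    # tool name -> dated evidence (URL or screenshot reference)
    detection_tool_evidence: dict[str, str] = field(default_factory=dict)
    metadata_schema: str = ""
    disclosure_language: str = ""
    client_signoff_date: str = ""

    def audit_ready(self) -> bool:
        # Every tool used must have dated detection-tool evidence,
        # and schema, disclosure, and sign-off must all be recorded.
        return (bool(self.ai_tools_used)
                and all(t in self.detection_tool_evidence
                        for t in self.ai_tools_used)
                and bool(self.metadata_schema)
                and bool(self.disclosure_language)
                and bool(self.client_signoff_date))

record = CampaignComplianceRecord(
    campaign_id="spring-2026-launch",
    ai_tools_used=["Adobe Firefly"],
    detection_tool_evidence={"Adobe Firefly": "2026-03-01: verify-tool URL"},
    metadata_schema="IPTC 2025.1 + C2PA",
    disclosure_language="Created with AI",
    client_signoff_date="2026-03-02",
)
print(record.audit_ready())  # True
```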
The Dual Compliance Challenge
For agencies with both European and North American clients, the enforcement of SB 942 and the EU AI Act on the same day creates a dual compliance challenge. The good news: a metadata-first approach substantially satisfies both frameworks from a single implementation.
The IPTC 2025.1 standard was updated in November 2025 specifically to address the disclosure fields required by both the EU AI Act and California's AI transparency legislation. A compliant IPTC implementation covers the latent disclosure requirement under SB 942 and the machine-readable disclosure requirement under EU AI Act Article 50. A valid C2PA manifest provides the cryptographic provenance chain that satisfies audit trail requirements in both jurisdictions.
Where the Frameworks Diverge
The metadata layer alone is not sufficient for full dual compliance. Three areas require jurisdiction-specific attention:
- Detection tool verification (SB 942 only): California requires that the AI provider offer a public detection mechanism. The EU AI Act does not. Verify that every AI tool in your stack is on the path to SB 942 detection tool compliance, or have a documented mitigation plan.
- Political content thresholds (AB 853 only): The EU AI Act has no equivalent to AB 853's political advertising rules. For US-facing political work, build a separate AB 853 compliance track with visible disclosure requirements that the EU framework does not need.
- Audit trail retention (EU AI Act only): The EU AI Act explicitly requires deployers to maintain audit trails for a defined retention period. SB 942 does not specify retention periods, but California privacy law (CCPA/CPRA) may impose separate data retention constraints. A unified 24-month retention policy satisfies both frameworks conservatively.
The overlap is large enough that a single compliance infrastructure can serve both markets. The divergences are specific enough to be handled as discrete policy addendums rather than as separate programs. Build once, configure for jurisdiction.
Vendor Assessment for Dual Compliance
Not every AI tool in your stack is on the same compliance trajectory. Adobe Firefly has published its C2PA roadmap and commits to Content Credentials on all generated output. OpenAI has indicated plans for C2PA metadata on DALL-E outputs. Midjourney, as of early 2026, still embeds minimal provenance metadata. Stability AI's compliance posture varies by model version.
For SB 942 compliance specifically, assess each provider against two criteria: does the provider currently embed any machine-readable AI disclosure in output, and has the provider committed to a public detection tool by August 2026? Providers that cannot answer yes to both questions represent a compliance gap that your agency's metadata infrastructure will need to compensate for through enhanced ingestion-side processing.
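That two-question assessment reduces to a simple classification. A sketch; the status labels are our own shorthand, not statutory terms:

```python
def sb942_vendor_status(embeds_disclosure: bool,
                        detection_tool_committed: bool) -> str:
    """Classify an AI provider against the two SB 942 criteria.
    'gap' means your agency's ingestion-side metadata processing
    must compensate for the provider's shortfall."""
    if embeds_disclosure and detection_tool_committed:
        return "on-track"
    if embeds_disclosure or detection_tool_committed:
        return "partial"
    return "gap"

# Example assessments (illustrative inputs, not vendor statements):
print(sb942_vendor_status(True, True))    # on-track
print(sb942_vendor_status(False, False))  # gap
```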
Key Takeaways
- SB 942 and AB 853 both take effect August 2, 2026, with penalties starting at $5,000 per violation per day under SB 942.
- SB 942 applies to any agency deploying AI content for California audiences, regardless of the agency's physical location.
- SB 942 uniquely requires AI providers to offer public detection tools—a burden-shifting mechanism not present in the EU AI Act.
- AB 853 targets political advertising and commercial synthetic media with mandatory visible disclosure requirements that latent metadata alone cannot satisfy.
- A metadata-first compliance infrastructure using IPTC 2025.1 and C2PA simultaneously addresses EU AI Act Article 50 and SB 942 latent disclosure requirements from a single implementation.
- Detection tool availability, political content disclosure, and audit retention rules require jurisdiction-specific policy addendums beyond the shared metadata layer.
Prepare for August 2, 2026
Numonic automates IPTC 2025.1 injection and C2PA provenance at ingestion—satisfying both SB 942 latent disclosure and EU AI Act Article 50 from a single metadata pipeline.