What to Know About California’s Executive Order on AI
By The New York Times
Published March 31, 2026

California Gov. Gavin Newsom on Monday, March 30, 2026, issued a first-of-its-kind executive order requiring safety and privacy guardrails from artificial intelligence companies that contract with the state. (Max Whittaker/The New York Times/File)



WASHINGTON — California Gov. Gavin Newsom on Monday issued a first-of-its-kind executive order requiring safety and privacy guardrails from artificial intelligence companies that contract with the state.

California has been a leader in tech lawmaking and was the first state to pass a law mandating safety and transparency from the biggest AI companies. Newsom, a Democrat, signed the order partly as a message to President Donald Trump, who has been trying to bat down state attempts to regulate AI.

Here’s what’s in his executive order.

Contractor Vetting

Companies vying for government contracts will first have to explain their safety and privacy policies around AI. The state will look carefully at policies on how the companies prevent exploitation of individuals, including the spread of child sexual abuse materials.

The government will also consider whether AI models, the technology that powers chatbots and other tools, are used to monitor individuals or are used to block certain speech. Companies should also explain how they are avoiding bias in their systems.

Independence From Federal Contracting Standards

If the federal government designates a company a supply chain risk, which the Pentagon has recently done with AI startup Anthropic, California will conduct its own assessment. If the company isn’t determined to be a risk, the state may allow it to remain a contractor.

This is significant because the Pentagon’s legal tussle with Anthropic, which had provided the Defense Department with AI technologies for use on classified systems, has exposed a rift in the administration’s pursuit of AI for war use. The Pentagon terminated its contract with Anthropic after the company said the government could not use its models for mass domestic surveillance and the deployment of autonomous weaponry.

Watermarking Requirement

The governor also called on state officials to begin watermarking AI-generated or manipulated videos that they create.

The technique is aimed at guarding against the spread of misinformation. It would also allow consumers to tell the difference between human-generated and AI-generated images produced by the state.

This article originally appeared in The New York Times.

By Cecilia Kang/Max Whittaker

c.2026 The New York Times Company

