OpenAI and Common Sense Media merge rival California AI kids-safety proposals
Serge Bulaev
OpenAI and Common Sense Media joined forces to create new rules to keep kids safe when using AI chatbots in California. They combined their competing plans into one, called the Parents & Kids Safe AI Act, which could become state law if enough people vote for it in the November 2026 election. The new rules would make chatbots use age checks, stop ads aimed at kids, and block dangerous topics like self-harm or adult content. This unusual partnership is drawing attention, and other states might follow California's lead. Now the groups are working to gather enough support to put these safety rules into effect.

In a landmark collaboration for AI regulation, OpenAI and Common Sense Media merged rival California AI kids-safety proposals to create the Parents & Kids Safe AI Act. This ballot initiative aims to establish the nation's most stringent child-safety rules for AI chatbots as a potential constitutional amendment in November 2026, reshaping the debate on youth protections in artificial intelligence.
Why the Ballot Route Now
The Parents & Kids Safe AI Act is a proposed California constitutional amendment establishing detailed safety rules for conversational AI used by minors. It mandates age verification, bans targeted ads, and requires independent safety audits, setting a potential national precedent for regulating artificial intelligence interactions with children.
The move to a ballot initiative follows legislative gridlock. After Governor Gavin Newsom vetoed a related bill in October 2025, Common Sense Media and OpenAI filed competing proposals. Their January compromise, first detailed in an Axios report, resolves the standoff. To qualify for the ballot, backers must collect 546,651 valid signatures by June 25, 2026, or they will pivot back to a legislative strategy.
Key Provisions of the Parents & Kids Safe AI Act
The 38-page proposal specifically targets "companion chatbots" that interact with users under 18. Its core requirements include:
- Age-Assurance Systems: Technology to estimate a user's age and automatically enable stricter protections for minors.
- Data & Ad Restrictions: A ban on targeted advertising to children and on selling or sharing their data without parental consent.
- Independent Audits: Mandated safety audits submitted to the California Attorney General for enforcement and potential fines.
- Content Guardrails: Rules preventing chatbots from encouraging self-harm, displaying sexual material, or faking sentience or romantic interest.
The act exempts business-focused AI, video game characters, and smart speakers. A key feature, noted in a Politico analysis, is that enforcement will be handled by the state Attorney General rather than private lawsuits, a structure favored by the tech industry.
From Rivalry to Cooperation
This partnership marks a significant turnaround. Throughout 2025, Common Sense Media labeled chatbots "fundamentally unsafe" for teens, while OpenAI faced lawsuits over harmful ChatGPT outputs. Now, both organizations present the joint proposal as a vital compromise. Common Sense Media CEO Jim Steyer called the draft "seat belts for kids," while OpenAI strategist Chris Lehane affirmed that "parents know best." However, some lawmakers still prefer traditional legislation, which is easier to amend than a constitutional measure; Sen. Steve Padilla said as much in an IJPR interview.
What Happens Next
The coalition is now focused on gathering signatures while simultaneously lobbying lawmakers in Sacramento for a parallel legislative solution. Other technology firms are expected to weigh in after the Attorney General publishes the final ballot language this spring. Because California is one of the few states that allows constitutional amendments via petition, its vote could serve as a national bellwether, influencing AI policy for children across the country.