OpenAI and Common Sense Media, which both previously filed separate ballot initiatives in California related to the use of AI chatbots by minors, announced on Jan. 9 that they had filed a joint ballot initiative.
The campaign has titled the initiative the Parents & Kids Safe AI Act. The initiative would amend state law to adopt regulations for AI companion chatbots, including:
- requiring AI chatbot developers to restrict content when used by minors,
- prohibiting the sale of a minor's data without parental consent,
- requiring independent audits of chatbot technology for safety risks to minors, with the results reported to the state attorney general, and
- prohibiting AI chatbots from promoting social isolation or romantic relationships to minors.
Common Sense Media Founder and CEO James P. Steyer said, “Rather than confuse voters with competing measures, we're working together to enact strong protections for kids, teens, and families. This is the strongest measure of its kind in the United States. At this pivotal moment for AI, we can't make the same mistake we did with social media, when companies used our kids as guinea pigs and helped fuel a youth mental health crisis in the U.S. and around the world. Kids and teens need AI guardrails now. That's why we will pursue every avenue, from the legislature to the ballot.”
California requires legislative changes to voter-approved ballot initiatives to be approved by voters unless the initiative waives that requirement. This initiative authorizes amendments consistent with the law by a two-thirds vote of the state legislature and the governor's signature.
Common Sense Media filed its original initiative on Oct. 22, which included provisions similar to those in the joint measure. It would have prohibited minors from using certain AI-powered chatbots, prohibited smartphone use during the school day, established statutory damages for actual harm caused by AI chatbots or social media, and prohibited the sale or sharing of minors' data without their consent.
OpenAI filed its initial initiative on Dec. 5. That initiative would have required chatbots to disclose to users under the age of 18 that the chatbot is AI; required developers to maintain a protocol preventing AI chatbots from promoting suicidal ideation, suicide, or self-harm content to users; and required developers to report their suicide prevention protocols to the Office of Suicide Prevention.
Three other initiatives were filed that also propose various regulations on artificial intelligence and its developers. An initiative filed by the Coalition for AI Nonprofit Integrity (CANI) would create the Charitable Research Oversight Board, an independent board within the state's Department of Justice, tasked with overseeing charitable research organizations that meet certain criteria. The board would be authorized to reverse an organization's conversion from a nonprofit to a for-profit organization, including organizations that have done so on or after Jan. 1, 2024, such as OpenAI.
Two other initiatives were filed on Dec. 1 by Alexander Oldham that would establish an AI Safety Commission and a Public Benefit AI Accountability Commission. The first would be an independent body with the power to license AI companies, evaluate companies' protection plans, approve or deny capability expansions, impose civil penalties, conduct audits, and adopt implementing regulations. The initiative would also require each AI company to create and maintain a Protection Plan with the objectives of ensuring that displaced workers and the public benefit from AI, establishing means for humans to monitor AI behavior and shut it down, and distributing governance over AI systems. The second commission would be housed within the state's Department of Justice. That initiative would require AI companies to file public benefit plans, subject to the commission's approval, describing how each company will fulfill its commitment to serve humanity or the public interest.
All of the AI initiatives are proposed statutes, which require 546,651 signatures (5% of the votes cast in the last gubernatorial election). The deadline for signature verification is June 25, 2026. However, the secretary of state recommends a signature deadline of Jan. 12, 2026, for initiatives requiring a full check of signatures, and April 17, 2026, for initiatives requiring a random sample of signatures to be verified.