In 2026, California voters could be the first in the nation to decide on ballot initiatives regulating artificial intelligence. Five proposed initiatives to regulate AI and its developers have been filed with the attorney general and are awaiting ballot titles before signature gathering can begin.
Jim Steyer, the chief executive officer of Common Sense Media, filed the first initiative on Oct. 22. The initiative would prohibit the use of certain artificial intelligence-powered chatbots by minors, prohibit the use of smartphones during the school day, establish statutory damages for actual harm caused by the use of AI chatbots or social media, and prohibit the selling or sharing of data from minors without consent. Currently, 28 states ban or limit cellphones in classrooms.
The campaign website argues, “When a product hurts our kids – whether it’s a toy, car seat, or crib – we expect action from the manufacturer. AI should be no different. This proposal finally makes Big Tech companies responsible for putting kids’ safety, mental health, and well-being first, and imposes significant legal consequences if they fail to protect kids.”
An initiative competing with Steyer’s was filed on Dec. 5 by OpenAI, the developer behind ChatGPT. It similarly addresses chatbot interactions with minors by requiring AI companion chatbots to disclose to users under 18 that the chatbot is AI. It would also require developers to maintain a protocol to prevent AI chatbots from promoting suicidal ideation, suicide, or self-harm content to users, and to report annually to the Office of Suicide Prevention on the systems put in place to detect, remove, and respond to suicidal ideation by users.
Both initiatives contain provisions stating that any other measures related to AI safety on the 2026 general election ballot would be null and void if the initiative is approved by a greater number of votes.
Steyer’s initiative provides that, following voter approval, it can be amended by a law passed by a majority of the state legislature and signed by the governor. The OpenAI initiative provides that it can be amended by a law passed by a two-thirds vote of the state legislature and signed by the governor. California requires legislative changes to ballot initiatives to be approved by voters unless the initiative waives that requirement.
A third initiative was filed by Poornima Ramarao, the mother of Suchir Balaji, a former OpenAI employee who died in November 2024, and the Coalition for AI Nonprofit Integrity (CANI). The initiative would create the Charitable Research Oversight Board, an independent board within the state's Department of Justice tasked with overseeing charitable research organizations that meet certain criteria. The board would be authorized to reverse an organization's conversion from a nonprofit to a for-profit organization, including conversions completed on or after Jan. 1, 2024. The initiative would also establish a charitable encumbrance, meaning that all assets held, developed, or transferred by a charitable research organization must be used in a manner consistent with the charitable purposes for which they were originally intended.
The campaign’s website has an open letter to “Elon Musk, Mark Zuckerberg, Vitalik Buterin, Jeff Bezos, and All Other Well Endowed Individuals In a Position to Act” asking for financial support for the initiative. The average cost of a ballot initiative campaign in California in 2024 was nearly $8.5 million, with an average cost per required signature of $15.08.
Two other initiatives, filed on Dec. 1 by Alexander Oldham, would establish an AI Safety Commission and a Public Benefit AI Accountability Commission, respectively. The first would be an independent body with the power to license AI companies, evaluate companies’ protection plans, process or deny capability expansions, impose civil penalties, conduct audits, and adopt implementing regulations. The initiative would also require each AI company to create and maintain a Protection Plan with the objectives of ensuring that displaced workers and the public benefit from AI; establishing means for human monitoring of AI behavior and the ability to shut it down; and distributing governance over AI systems.
The second commission would be within the state's Department of Justice. The initiative would require AI companies to file public benefit plans, approved by the commission, that describe how they will fulfill their commitment to serve humanity or the public interest.
All five AI initiatives are initiated state statutes, which require 546,651 valid signatures (5% of the votes cast in the last gubernatorial election). The deadline for signature verification is June 25, 2026. However, the secretary of state recommends a signature submission deadline of Jan. 12, 2026, for initiatives requiring a full check of signatures, and April 17, 2026, for initiatives requiring a random sample of signatures to be verified.