California Governor Gavin Newsom has signed a set of groundbreaking laws to regulate artificial intelligence (AI), with a focus on preventing the use of deepfakes in elections and protecting the rights of actors. These new measures, signed on Tuesday, make California a leader in AI regulation, especially in addressing concerns about the manipulation of media using AI technologies.
Targeting AI Election Deepfakes
One of the new laws specifically targets deepfake videos and images, which are often used to mislead voters during elections. It is now illegal to create and distribute such content within 120 days before and 60 days after an election. Courts will also have the power to stop the spread of deepfakes and impose civil penalties on violators. This law is seen as crucial to protecting the integrity of elections in a world where AI-generated disinformation is becoming more common.
Governor Newsom emphasized the importance of these laws in maintaining public trust in elections. “Safeguarding the integrity of elections is essential to democracy,” Newsom said. He noted that these new measures are part of the state’s efforts to foster transparent and responsible use of AI.
Additionally, large social media platforms such as Facebook and X (formerly Twitter) will be required to label or remove AI-generated election content. This ensures that users are aware when AI is involved, reducing the risk of confusion and manipulation. The law also establishes channels for reporting misleading content, allowing political candidates to take legal action if platforms do not comply.
Strengthening Disclosure in Political Campaigns
Another key law focuses on transparency in political advertising. Moving forward, campaigns will be required to disclose when AI has been used to alter images, videos, or other materials in their advertisements. This effort comes in response to concerns that AI could be used to create false endorsements or misleading campaign materials, which could influence voters.
Hollywood Protections for Actors
In addition to its focus on elections, California’s new AI regulations also offer protections for actors in the entertainment industry. Two laws, supported by the performers’ union SAG-AFTRA, prevent studios from using AI to create replicas of actors without their consent. The first law requires permission from actors before their likeness or voice can be digitally reproduced. The second law prohibits the creation of AI-generated replicas of deceased performers without consent from their estates.
These measures aim to protect actors from exploitation in an industry increasingly reliant on digital effects. As AI becomes more prevalent in media production, actors are concerned that their image or voice could be used without their knowledge, potentially for purposes they don’t support.
Looking Ahead
These AI laws mark a significant step in regulating the use of emerging technologies in sensitive areas like politics and entertainment. California has long been at the forefront of AI development, and now, with these laws, it is also leading in AI regulation. The state’s lawmakers hope that these measures will serve as a model for other regions to follow.
Governor Newsom has expressed a commitment to balancing innovation with protection, ensuring that AI is used ethically while addressing the risks it presents. With 38 more AI-related bills under consideration, California’s role in shaping the future of AI regulation is far from over.