California Governor Gavin Newsom signed a series of significant artificial intelligence (AI) bills into law on Tuesday, marking a bold step in addressing the growing concern over deepfakes during elections and protecting Hollywood performers from unauthorized AI-generated replicas. The legislation comes amid escalating fears about the misuse of AI in critical areas such as election integrity and the entertainment industry.
As deepfakes become increasingly sophisticated, experts warn that their potential to spread disinformation during the 2024 elections is more alarming than ever. At the same time, the entertainment sector has been grappling with AI’s capacity to replicate actors’ voices and likenesses without their consent—a contentious issue highlighted during last year’s historic actors’ strike.
California, home to 32 of the world’s 50 leading AI companies, is at the epicenter of this technological revolution. Governor Newsom’s administration recognizes the delicate balance between fostering innovation in AI and safeguarding the public’s welfare, as underscored by his office’s statement on the legislation.
“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation — especially in today’s fraught political climate,” Newsom said.
Protecting Election Integrity
Among the newly signed measures is A.B. 2655, which requires large online platforms to remove or label deceptive, AI-altered election content during specified periods before and after an election. Another key bill, A.B. 2839, expands the window during which entities are prohibited from knowingly distributing AI-generated or manipulated election content. Additionally, A.B. 2355 mandates that election advertisements disclose whether they contain AI-generated or substantially altered material.
These legal measures aim to prevent the spread of misinformation, particularly during election seasons when the public’s trust is most vulnerable. In an era of increasingly convincing AI-generated content, ensuring transparency around manipulated media has become a focal point for lawmakers.
The need for these safeguards became particularly evident in July when tech billionaire Elon Musk shared an AI-altered video imitating a Kamala Harris campaign ad. In response, Governor Newsom publicly stated that such deceptive manipulation should be illegal and pledged to take legislative action to prevent similar occurrences.
AI Accountability and Industry Resistance
While the newly signed bills represent a significant stride in curbing AI misuse, not all AI-related legislation has been finalized. One bill, S.B. 1047, remains pending. This controversial proposal would hold AI companies accountable for severe harms caused by their technologies. Despite fierce opposition from the tech industry, which argues that such laws could stifle innovation, Senator Scott Wiener, the bill’s author, insists the legislation only seeks to codify the ethical commitments many AI companies have already made.
Venture capitalists and tech founders have voiced concerns, warning that such regulations could impede technological progress by placing excessive responsibility on developers for unintended uses of their creations. Critics argue that AI’s unpredictable nature makes it impossible to foresee all possible harmful applications of the technology.
Nevertheless, the debate underscores the ongoing tension between promoting innovation and ensuring that technological advancements are not weaponized or misused in ways that could harm society.
Hollywood Actors and AI Protections
Beyond the realm of elections, the new AI laws also address a pressing issue in the entertainment industry: the use of digital replicas of actors without their consent. Two bills, A.B. 2602 and A.B. 1836, aim to protect performers from unauthorized AI reproductions of their voices or likenesses. A.B. 2602 requires that contracts specify how AI-generated replicas of a performer’s voice or likeness may be used, giving actors greater transparency and control over their digital reproductions. Meanwhile, A.B. 1836 prohibits the commercial use of digital replicas of deceased performers without the consent of their estates.
The use of AI in entertainment has been a topic of heated debate in recent years. Prominent examples range from the consensual AI replication of James Earl Jones’s iconic Darth Vader voice to cases in which celebrities have raised alarms about AI-altered images of themselves circulating online without their approval.
The issue took center stage last year during the SAG-AFTRA strike, in which actors pushed for stronger protections against AI. The new laws now enshrine some of the safeguards that the union successfully fought for, including requiring actors to give informed consent and receive fair compensation for the use of their digital replicas.
SAG-AFTRA President Fran Drescher welcomed the new legislation, praising the governor for expanding AI protections that performers had “fought so hard for last year.”
“No one should live in fear of becoming someone else’s unpaid digital puppet,” added Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director.
The Future of AI in California
The passage of these AI-related laws reflects California’s leadership in both technological innovation and regulatory oversight. As AI continues to develop at a rapid pace, the state’s efforts to manage its impact on elections and the entertainment industry could serve as a model for other jurisdictions grappling with similar challenges.
While the legislative framework marks a crucial step toward accountability and transparency in AI, the ongoing debate over S.B. 1047 highlights the complexities involved in regulating a technology that evolves faster than the legal system can often respond. California’s approach will likely continue to shape the national conversation on how to balance the benefits of AI with the need to prevent its misuse.
As Governor Newsom noted, ensuring AI is used responsibly is essential to protecting democratic processes and upholding the rights of individuals in the face of transformative technologies. The laws signed into effect are a testament to California’s commitment to navigating the challenges posed by AI while supporting its promise for innovation.