MINNEAPOLIS – A new law in Minnesota bans the use of doctored video, images and audio – “deepfakes” – designed to sway elections.
The rules, which went into effect this summer, are among the first state regulations of their kind in the nation, said Minnesota Secretary of State Steve Simon, and address growing concerns among officials that artificial intelligence poses a threat to elections.
The law prohibits the use of this AI-generated content if it’s created without the consent of the person depicted and with the intent to harm a candidate or influence an election within 90 days of Election Day.
A deepfake can look and sound identical to the person whose likeness it represents, but it's created by artificial intelligence software without that person's knowledge.
“It’s really the same old poison in a different bottle, which means we’re looking at disinformation and dishonesty and lies about the electoral system and how that can be spread,” Simon said in an interview. “And this is a new way of spreading it.”
He described next Tuesday's local elections in Minnesota as a "dress rehearsal" for the high-stakes and intense 2024 presidential election. After the 2020 election, people cast doubt on the results, contributing to lies that the election was stolen and rigged for President Joe Biden.
Minnesota’s new law, he believes, will help build confidence that elections will be run fairly.
“It’s not a different threat, but it’s a new way of amplifying the old threats, and we want people to have that basic level of confidence going into the 2024 election,” Simon explained.
Minnesota joins just seven other states with laws regulating deepfakes. The amendment passed the state Legislature nearly unanimously last spring.
A person could face jail time and fines for violating the new rules, which also extend to the distribution of AI-generated content depicting sexual acts without the consent of the person whose likeness appears in the video or photo.
President Biden signed an executive order on Monday setting new privacy and safety standards for AI. It directs the U.S. Department of Commerce to develop guidelines for authenticating and clearly labeling AI-generated content.
Dr. Manjeet Rege, director of the Center for Applied Artificial Intelligence at the University of St. Thomas, said such safeguards to regulate AI are crucial now that the technology is widely available.
“To preserve our democracy, I think it’s extremely important to have AI laws so that both the good guys and the bad guys know the consequences of misusing AI,” he told WCCO. “AI can do a lot of good, but we don’t want AI in the wrong hands where it can do a lot of harm.”