Elon Musk And X Notch Court Win Against California Deepfake Law

By Sarah James
One of the country’s strictest bans on election deepfakes was defeated by a challenge from the tech billionaire.

SACRAMENTO, California. Federal Judge John Mendez on Tuesday blocked a California law curbing AI-generated deepfake content during election periods, one of the most restrictive laws of its kind in the country. The ruling is a legal victory for Elon Musk and his X platform.

The judge bypassed the First Amendment free speech arguments that were the cornerstone of the plaintiffs' case and instead based his ruling on Section 230 of the Communications Decency Act. Mendez also signaled his intention to strike down a second law, which mandates labeling digitally modified campaign materials and advertisements, saying it would infringe on free speech rights.

The judge's rulings from Tuesday were a blow to Governor Gavin Newsom, who enacted the laws last year in response to Musk, vowing to act after the technology billionaire and then-Trump supporter shared a doctored video of then-Vice President Kamala Harris ahead of the election.

The first law would have barred online platforms from carrying misleading, AI-generated content about elections in the run-up to a vote. It was enacted amid heightened worry about how easily users could fabricate convincing images and videos with increasingly accessible technology, and about the possible effects on politics.

But critics, like Musk, argued that the restrictions could violate freedom of expression.

Initially, the challenge was brought by the video’s creator, Christopher Kohls, on First Amendment claims. X subsequently joined the suit after Musk claimed the measures were “designed to make computer-generated parody illegal.” Rumble and the Babylon Bee joined the suit as well.

The doctored video depicted Harris calling herself "the ultimate diversity hire," a reference to her roles in various organizations.

Mendez found that the legislation from Democratic state Assemblymember Marc Berman violates the oft-cited Section 230 of the federal Communications Decency Act, which shields online platforms from legal action over content posted by their users. "They don't have anything to do with these videos that the state is objecting to," Mendez said of platforms like X that host deepfakes.

But the judge declined to consider the First Amendment defenses raised by Kohls, explaining that there was no need to reach them once he had struck down the law on Section 230 grounds. "I'm simply not reaching that issue," Mendez told the plaintiffs' counsel.

Tara Gallegos, a spokesperson for Newsom, said the governor's office was still evaluating the ruling, but maintained that "commonsense labeling requirements for deep fakes are significant to the election's integrity."

A spokesperson for California Attorney General Rob Bonta's office said it, too, was still evaluating the judge's decisions. Berman's office did not respond to a request for comment, nor did the office of Assemblymember Gail Pellerin, the Democrat who authored the second law.

Defending Berman's law, Kristin Liska of the California attorney general's office noted that it applied only to large platforms with at least one million users, arguing that a broad injunction was unwarranted and that Mendez ought to limit his order to the plaintiffs X and Rumble.

After the hearing, Kohls' attorney, Theodore Frank, told POLITICO that he would coordinate with Liska to ensure that his client would not be exposed to liability on other, non-party sites, such as Facebook and YouTube.

That was not the end of Liska's difficulties. Mendez grilled her over the second deepfake law, also challenged by the plaintiffs, which mandates the labeling and removal of deepfake videos of politicians during election periods.

"I think the statute just fails miserably in accomplishing what it would like to do," Mendez remarked, saying he would issue a formal written opinion on the law in the next few weeks.

Laws that regulate speech must meet very strict constitutional standards, and Mendez indicated this one was unlikely to clear them. "It's become a censorship law, and there is no way that is going to survive," he added.

Sarah James is a tech writer at National Diplomat, specializing in technology, cybersecurity, and social media. She concentrates on the industrial and policy aspects of cybersecurity. Sarah holds a master’s degree in IT with a specialization in artificial intelligence, during which she developed an AI-based cricket umpire. With 15 years of experience, she has worked with startups, corporations, consultancies, government agencies, and universities.