A federal appeals court in California upheld part of a district court ruling on Friday, effectively blocking a key provision of a pioneering online safety law designed to protect children.
The Ninth Circuit Court of Appeals panel focused on a specific requirement of the California Age-Appropriate Design Code Act, which mandates that online businesses “assess and mitigate the risk” that children could be exposed to harmful or potentially harmful content online. The court ruled that this provision “facially violates the First Amendment,” leading to the continuation of a preliminary injunction against that section of the law and related components.
However, the court remanded another aspect of the law to the lower court for further consideration, vacating the remainder of the preliminary injunction. The judges indicated that it is still “too early” to determine whether the rest of the law infringes upon the First Amendment and whether the unconstitutional portions can be severed from the statute.
Judge Milan Smith Jr., who authored the ruling, particularly criticized the law’s Data Protection Impact Assessment (DPIA) requirement. This provision would require online companies to produce reports evaluating whether their designs could harm children and to develop a “timed plan” to mitigate or eliminate those risks. Judge Smith argued that this requirement would likely fail to pass First Amendment scrutiny, suggesting that California could have pursued less restrictive methods to achieve its protective goals. He cited potential alternatives such as voluntary content filtering incentives, educational programs for children and parents, and stricter enforcement of existing criminal laws.
Smith further noted that the law effectively delegates the contentious issue of determining what content may “harm children” to private companies, thereby indirectly censoring online material available to minors.
This ruling could have broader implications for similar legislation, such as the recently passed Kids Online Safety Act (KOSA), which requires online platforms to take reasonable steps to protect children from various harms, including mental health issues like anxiety and depression. Even so, the judges acknowledged that other sections of the California law might not violate the First Amendment in every application. For instance, the ruling pointed to provisions that prohibit “dark patterns”—designs that manipulate users into providing more personal information than necessary. Smith emphasized that it remains unclear whether banning dark patterns always triggers First Amendment scrutiny, a question the lower court had not fully addressed.
Additionally, the ruling suggested that the district court should have more thoroughly evaluated whether other parts of the law could be upheld when applied to non-social media companies covered by the legislation.
This decision marks another milestone in NetChoice’s ongoing legal battles against state-level internet regulations, particularly those aimed at protecting children online. Courts have frequently sided with NetChoice, a group representing major companies like Meta and Google, on First Amendment grounds.
The ruling also comes in the wake of the Supreme Court’s decision in Moody v. NetChoice earlier this year, which confirmed that content moderation and curation by online platforms are protected forms of speech. The justices were cautious about allowing facial challenges—those that claim a law is unconstitutional in all possible applications—under the First Amendment. Nonetheless, Judge Smith concluded that the DPIA requirement is facially unconstitutional, as it raises First Amendment concerns in every application to a covered business.
The California attorney general’s office did not immediately respond to requests for comment. Chris Marchese, director of the NetChoice Litigation Center, hailed the ruling as “a victory for free expression, online security, and Californian families,” asserting that “California’s government cannot commandeer private businesses to censor lawful content online or restrict access to it.”