Nov 2025: The Algorithmic Update is Back
After a short pause, The Algorithmic Update is back — refreshed, refocused, and ready for what’s already shaping up to be an increasingly complex era of tech policy.
Starting today, this newsletter will land in your inbox on the first Monday of the month, spotlighting five key recent legislative or regulatory developments driving policy discussions across AI, privacy, youth, safety, health, biometrics, and more. Because the boundaries between these areas continue to blur, this newsletter aims to help you navigate the most significant state and federal developments, while also connecting you to the broader policy debates and puzzle pieces across AI, privacy, and emerging technologies.
By sharing information more holistically, I hope more stakeholders come to the table with a deeper understanding—not just of the issues themselves, but of the careful balances needed to make tech policy work in practice. Recent events and the noise that too often surrounds these discussions have only underscored how impactful and necessary these nuances and perspectives remain.
It should go without saying, but I’ll reiterate that covering legislative or regulatory updates doesn’t imply authorship, endorsement, or opposition.
So, here we go again. New format. Same mission. Let’s dive in.
1. California Enacts SB 53 Governing Frontier Models
One of the biggest developments this past month was Governor Gavin Newsom signing SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), following a second legislative effort by Sen. Wiener (D). The law mostly applies to large frontier model developers that meet certain revenue and computational power thresholds. These developers must:
Maintain and publish a governance framework explaining how they oversee their models, including which standards they follow, how they assess and mitigate risks, and how they monitor for safety incidents;
Publish a transparency report each time a new model is deployed, describing its intended uses, restrictions, and any assessment of catastrophic risks;
Submit catastrophic risk assessments quarterly to the Office of Emergency Services; and
Report critical safety incidents (defined broadly to include deaths, bodily injuries, or catastrophic risks where a frontier model provided assistance) within 15 days of discovery. Other frontier model developers who don’t meet the financial and computational power thresholds must also comply with this requirement.
My colleague Justine Gluck provides a full analysis of the new law here. With Governor Newsom’s signature, it now seems increasingly likely that New York Governor Hochul will sign the similar frontier model legislation introduced by Asm. Alex Bores (D) and passed by the New York legislature earlier this year (see also Justine’s comparative chart of these laws)—solidifying the frontier-model framework as a leading approach to AI regulation in the U.S.
In his signing statement, Governor Newsom said the law will help California better monitor and respond to critical safety incidents affecting public safety, cybersecurity, and national security, while also aligning with federal efforts to establish technical standards that companies can use to demonstrate compliance. It’s not surprising that, amid growing tension between state and federal lawmakers on the issue, and Newsom’s recent statement about his potential 2028 presidential run, he framed this law as a balanced, pragmatic approach that both protects California consumers and businesses and aligns with emerging federal leadership on AI.
Yet, some experts remain skeptical that regulating AI at the model layer is the most effective approach. Smaller or localized systems, such as some AI agents or models like DeepSeek, may pose comparable (and even heightened) safety risks yet fall outside this framework. Others question whether these models present unique catastrophic risks beyond those already possible in today’s digital environment. The law does, however, acknowledge this nuance, clarifying that “catastrophic risk” does not include foreseeable and material risks from information already publicly available. As a result, much of the enforcement and compliance strategy is likely to hinge on how regulators interpret and apply that exception.
2. Chatbots are Everything, Everywhere, All at Once
Governor Newsom also signed SB 243, authored by Sen. Padilla (D), requiring operators of companion chatbots to disclose certain information to users, maintain safety protocols for users at risk of self-harm, and implement additional safeguards when operators know a user is a minor.
This legislation is part of a broader national trend. Similar bills have passed in Maine (LD 1727 - Rep. Kuhn (D)), New York (S-3008), and Utah (SB 452 - Rep. Moss (R)), alongside a new bipartisan chatbot proposal introduced by Sens. Hawley (R-MO) and Blumenthal (D-CT). Both the FTC and Congress have also launched investigations into chatbot governance, signaling that policymakers view this space as increasingly urgent.
The sudden focus on chatbots likely stems from both their growing popularity among teens and a series of high-profile lawsuits filed by parents who allege that AI companions encouraged self-harm or suicide (see, e.g., Garcia v. Character Technologies Inc.; Raine v. OpenAI). This focus on chatbot safety, particularly around mental health incidents and youth use, raises difficult but familiar policy questions:
How should companies detect and intervene in moments of crisis without overstepping privacy boundaries? If laws require operators to identify at-risk users and connect them to crisis resources, companies will need to monitor chat logs, collect sensitive personal data, or infer users’ mental health conditions—raising serious privacy and data protection concerns.
If existing law already applies, does it do enough, or are new proactive safeguards needed? Many of these lawsuits are grounded in product liability theories, which are designed not only to hold entities accountable for harm but also to incentivize them to ensure products are safe. Yet these cases have not satisfied parents seeking more proactive measures—many of whom have testified in support of laws like California’s SB 243. They argue that companies should be required to prevent foreseeable harms, not just face liability after the fact—a dynamic that could define the next wave of AI safety regulation.
If 2025 was the year of “frontier models,” 2026 may be the year chatbots become the central battleground where debates over youth safety, privacy, and AI accountability collide.
3. Texas AG Continues Its Aggressive Privacy Enforcement Streak
Just a few days ago, on October 31 (fittingly enough), the Texas Attorney General’s Office announced a $1.375 billion settlement with Google, resolving several lawsuits filed in 2022. The lawsuits alleged that the company misled consumers through its product design and data practices, violating both the Texas Deceptive Trade Practices Act (DTPA) and the Capture or Use of Biometric Identifier Act (CUBI). The AG claimed that Google continued collecting precise location data even when users disabled location services and that “Incognito” mode gave users a false sense of privacy while still tracking their activity. A separate complaint also alleged that Google collected biometric data through products like Google Photos and Google Assistant without informed consent.
Beyond the billion-dollar headline, this case sends two important signals for the privacy landscape:
Design and Disclosure Matter: The allegations behind the settlement highlight how interface and design choices—including so-called dark patterns—can trigger major enforcement actions (see also the CPPA’s 2024 enforcement advisory). As regulators sharpen their scrutiny of how companies communicate privacy settings and data collection practices, businesses should take note that design and disclosure can be just as consequential as the data itself.
State Enforcement Is Driving the Next Wave of Privacy: Between this settlement, Texas’s record $1.4 billion case against Meta in 2024, and the emergence of a new multistate enforcement consortium, it’s becoming clear that the next phase of U.S. privacy governance may be driven less by new laws and more by state enforcement.
4. Florida Enters the Privacy Enforcement Chat
Speaking of enforcement, the Florida AG issued its first enforcement action under the Florida Digital Bill of Rights, alleging that video streaming hardware platform Roku unlawfully collected and sold sensitive personal data of users, including location data and personal data from known minors. Notably, the complaint argues that Roku “willfully disregarded” users’ ages by failing to implement user profiles that could distinguish children from adults despite clear age signals—such as streaming content labeled “Made for Kids.”
However, what’s particularly novel is the third claim—focused on de-identification. The AG alleges that although Roku shared “de-identified” data with third parties, it failed to include contractual safeguards to prevent recipients from re-identifying users. As a result, those recipients, particularly Kochava (still under FTC scrutiny for similar practices), allegedly used the data to build detailed profiles tied to individuals’ precise geolocation information.
Although neither FPF nor the IAPP had previously considered Florida’s law “comprehensive” (given its limited applicability due to high revenue thresholds and a focus on smart devices and platforms; see Cobun’s LinkedIn post), this enforcement action puts the Florida Digital Bill of Rights on the map and warrants greater attention, particularly given its above-average penalty of up to $50,000 per violation, one of the highest among state privacy laws.
5. California Requires Age Assurance in AB 1043
Beyond SB 53 and several other relevant bills signed by Governor Newsom last month (including bills on browser opt-outs, data brokers, and data provenance), the enactment of AB 1043, the Digital Age Assurance Act, stands out for its novel approach to the ongoing age assurance debates and its broader implications for privacy compliance programs. The law requires device makers and operating systems to verify users’ ages and make that information available to apps, while app developers must request and process users’ age ranges. By assigning responsibilities to both layers, AB 1043 seeks to find a middle ground in the debate over who should verify age—establishing a shared, ecosystem-wide framework for age signaling and liability across platforms.
Beyond potential constitutional issues (see, e.g., a recent 9th Circuit decision declining to assess the constitutionality of age verification provisions until they become effective in 2027), the new law extends age assurance obligations to a far broader range of online services. Now, general-purpose apps—such as food delivery, retail, or fitness platforms—will gain actual knowledge of users’ ages, effectively pulling them into a new realm of privacy compliance and triggering obligations under children’s privacy laws, age-appropriate design codes, and other youth protection frameworks that previously may not have applied.
That’s all for now, thank you for reading. See you in December!
The Algorithmic Update is a monthly newsletter highlighting key developments in privacy, AI, and tech policy. FPF members receive weekly legislative updates with deeper analysis and tracking—learn more here or contact membership@fpf.org.
Tatiana Rice is the Senior Director of Legislation at the Future of Privacy Forum (FPF).
