February 2026: Emboldened Lawmakers Race Ahead
Early 2026 is showing clear legislative trends in kids' online safety, data-driven pricing, and chatbots
Happy Black History Month and State Legislative Season.
Only 32 days into 2026, state legislatures are keeping us busier than ever. The FPF Legislation team is currently tracking over 300 key bills in the states on privacy, AI, and youth safety. After a year in which state lawmakers struggled to advance broader, comprehensive proposals beyond targeted areas, early 2026 marks a shift, with lawmakers on both sides of the aisle responding more assertively to current events and driving a faster-moving, more ambitious legislative environment.
Against that backdrop, January brought notable movement across several fronts, including the passage of South Carolina’s AADC, growing momentum behind Florida’s AI Bill of Rights, and the persistence of app store accountability proposals despite the Texas injunction in December. Lawmakers and regulators also intensified scrutiny of data-driven pricing, while the FTC signaled growing interest in age assurance. With that, let’s dive into last month’s most significant developments.
1. South Carolina Nears Enactment of Long-Developing AADC Framework
On January 21, the South Carolina legislature enrolled HB 3431, the “Social Media Regulation Act,” an Age-Appropriate Design Code Act. Though the bill has a long history (first prefiled in December 2024 and extensively amended throughout 2025), the final House and Senate amendments were quickly agreed upon in the first few weeks of the 2026 session, and the bill has now been approved for ratification. Once the bill is transmitted to Governor McMaster, he will have five days to sign or veto it, after which the law would take effect immediately. Most experts expect the Governor to sign, particularly given the involvement of co-sponsor Representative Guffey, a well-known Republican online child safety advocate whose work has been shaped by the tragic loss of his oldest son following online harassment and sextortion (see Guffey’s recent testimony to Congress on the topic).
At a high level, like many AADCs, HB 3431 would regulate businesses that provide online services, products, or features reasonably likely to be accessed by children (u18). Covered entities would be required to exercise reasonable care both in the use of a minor’s personal data and in the design and operation of the service to prevent enumerated categories of harm, provide users and parents certain safety features and tools, adhere to specific data minimization standards (including prohibiting targeted advertising and limiting profiling), and conduct third-party audits. Some novel elements to highlight:
Individual Liability: The bill would permit the AG to enforce not only against covered entities, but also against “officers and employees” for willful and wanton violations, a highly unusual enforcement provision that would significantly raise the stakes for internal corporate decision-makers and compliance personnel.
Preventing vs. Mitigating Harms: While South Carolina is not the first state to incorporate a duty of care into its framework, it goes further by requiring services to take steps to prevent specified harms to minors. Combined with the broader, more open-ended categories of harm that must be considered, this duty of care may place a much higher burden on companies than those seen in Connecticut or Colorado.
Third-Party Audits: One of the most contested elements of the California and Maryland AADCs has been each law’s inclusion of risk assessment requirements regarding the use of minors’ data and their experiences of the service, which NetChoice has argued (and some courts have agreed) constitutes unconstitutional compelled speech. South Carolina lawmakers appear to have sought a workaround by not requiring that companies assess the risks of their services, but instead requiring them to obtain reports from an independent third-party auditor, which must be transmitted to the Attorney General and will be made publicly available. In their letter to the Governor, NetChoice outlines several grounds on which they argue HB 3431 remains unconstitutional — likely foreshadowing additional litigation if the bill is signed.
2. Florida AI Bill of Rights Passes Committee
On January 21, the Florida Senate Commerce Committee unanimously advanced the AI Bill of Rights (aptly named, given the state’s existing “Digital Bill of Rights” privacy framework). Proposed by Governor Ron DeSantis in December, the wide-ranging bill aims to establish new consumer rights and business obligations, combining a smorgasbord of familiar policy issues and compliance frameworks around transparency, privacy, chatbots, and kids’ safety with several novel provisions and approaches.
Consumer Rights: The bill would grant consumers several rights related to AI use, including the right to use AI to improve their own lives in accordance with the law (potentially similar to Montana’s Right to Compute), to supervise and control minor children’s use of AI (a more novel approach), to know whether they are communicating with a human being or an AI system/chatbot (similar to provisions in the Utah AI Policy Act, the Colorado AI Act, and various chatbot laws), and the right to transparency regarding the collection of personal data, along with expectations that AI companies protect and de-identify such data.
Kids’ and User Safety Requirements for Companion Chatbot Platforms: Companion chatbot platforms would be required to prohibit minors from holding accounts unless parental consent is given (in which case additional safety features, similar to those in AADCs, must be provided), to provide certain disclosures to users, and to take reasonable measures to prevent the chatbot from producing or sharing harmful content.
Data Privacy Requirements for AI Companies: In addition to rights related to de-identified data, the AI Bill of Rights gives consumers the right to know whether AI companies collect personal information and to expect that data to be protected, while also imposing affirmative obligations on companies to de-identify personal information before it is shared or sold. Once data is de-identified, companies are prohibited from attempting to re-identify it and must contractually require any downstream recipients to refrain from re-identification as well. Although similar obligations already apply to covered controllers under Florida’s Digital Bill of Rights, these provisions may aim to fill gaps in that law’s limited scope, which FPF and IAPP view as non-comprehensive due to its narrow applicability (see FPF’s overview). By defining “AI companies” broadly to include nearly any business that develops or implements AI, the AI Bill of Rights extends comparable privacy obligations to a much wider set of entities.
If Governor DeSantis is correct that the bill would not conflict with the Trump Administration’s AI Executive Order seeking to preempt state AI laws, state legislatures may increasingly turn to AI measures framed around narrower concerns, such as child safety or chatbot governance, as a pragmatic way to address AI-related harms. As this bill demonstrates, however, those narrower framings can still support far-reaching and substantial regulatory obligations beneath the surface. Similar approaches are beginning to emerge in traditionally red states such as Oklahoma (SB 2085), which appears to be based on the Florida AI Bill of Rights (and has advanced similar legislation through one chamber in previous years), as well as Nebraska (LB 1083) and Utah (HB 286), where lawmakers have introduced frontier model proposals drawing in part from California’s SB 53 and New York’s RAISE Act, but packaged under the banner of chatbot safety.
3. Growing Scrutiny of Data-Driven Pricing Practices
Data-driven pricing (variously labeled dynamic, personalized, algorithmic, or “surveillance” pricing) has quickly emerged as one of the most active and cross-cutting policy trends heading into 2026, sitting at the intersection of consumer protection, privacy, and AI governance (see my colleague Jameson Spivack’s issue brief and policy overview). Recent developments underscore the need for entities to reassess the sufficiency of their disclosures and the extent to which their pricing practices align with applicable privacy requirements:
New York AG Investigation and UDAP Priority: On January 8, NY AG Letitia James sent a letter to Instacart, requesting information regarding the company’s use of algorithmic pricing and compliance with the NY Algorithmic Pricing Disclosure Act, which went into effect in November 2025. Specifically, the letter alleges that the company’s disclosure, contained in fine print, was not sufficiently conspicuous and was not included on individual product pages. Notably, on December 19, New York Governor Hochul signed the “FAIR Act”, which expands the state’s consumer protection statute to prohibit not only deceptive, but also “unfair” and “abusive” business practices. Insiders report that the AG’s office is likely to use this expanded authority to investigate and enforce against data-driven pricing practices, consistent with Attorney General James’s public statements prioritizing scrutiny of algorithmic pricing.
California AG Investigation: On January 27, California AG Rob Bonta announced an investigative sweep focused on companies’ use of personal information to set individualized prices for consumers, termed “surveillance pricing.” According to the announcement, such practices may fall outside consumers’ reasonable expectations and therefore conflict with the purpose-limitation requirements of the CCPA regulations. The investigation is focusing on sectors such as retail, groceries, and hotels, with businesses receiving requests for information on whether and how they use data like shopping and browsing history, location, demographics, and inferred characteristics to determine pricing.
Legislative Surge: The FPF Legislation team is tracking more than forty data-driven pricing bills, approximately two dozen of which have been introduced since December 1, including Arizona HB 2489, Hawaii SB 2036, Maryland HB 148, Nebraska LB 1006, Tennessee SB 1807, Virginia SB 615, and Washington HB 2481. Of note, Pennsylvania could become the first state to ban data-driven pricing with HB 1942, which is scheduled for a hearing this week; and Maryland Governor Wes Moore recently announced the Protection from Predatory Pricing Act (text forthcoming) as part of his 2026 legislative agenda, specifically regulating the use of individualized pricing in grocery stores.
4. Alabama App Store Bill Speeds Forward Despite Texas Injunction
Despite Texas’ App Store Accountability Act being enjoined on First Amendment grounds late last year (a decision now on appeal), ASAA-style frameworks have not stalled in the states. To the contrary, several legislatures appear to be pressing forward, testing revised approaches that seek to address at least some of the constitutional concerns raised in the Texas litigation.
Alabama’s HB 161 offers a clear example. The bill moved unanimously through the House and has already advanced out of Senate committee, positioning it for a Senate floor vote as early as this week. Like Texas’s ASAA, HB 161 would impose age assurance obligations at the app store level, but it diverges in several notable ways. The bill would require age verification not only for new users but also for existing accounts, impose parental consent requirements for pre-installed applications, and, notably, omit application-level exemptions—an issue that featured prominently in the court’s analysis enjoining the Texas law. If enacted, the bill would take effect on January 1, 2027.
As the bill advanced, the House adopted several amendments that further refine its scope. Representative Prince Chestnut (D) introduced an amendment addressing in-app purchases, intended to avoid requiring parental consent for each individual low-risk transaction—a change Chestnut described as the “Chick-fil-A amendment.” The House also added a safe harbor provision for app store providers, shielding them from liability where they employ a commercially reasonable age-verification process, exercise due care in verifying user age, and make reasonable efforts to resolve discrepancies with parents regarding age data.
Taken together, these changes suggest a deliberate effort to recalibrate ASAA-style frameworks rather than abandon them, underscoring that, even amid ongoing constitutional challenges, app store–centric age assurance remains an active and evolving legislative strategy in 2026.
5. FTC Hints at Tackling Age Assurance
Lastly, speaking of age assurance: at the FTC’s January 28 workshop on age assurance technologies, Commission officials strongly emphasized that these technologies can play an important role in protecting children online and, notably, suggested the agency may look to address perceived tensions between age assurance and compliance with the Children’s Online Privacy Protection Act (COPPA).
In opening remarks, Chair Ferguson framed what he described as a paradox in COPPA enforcement: while age assurance tools could strengthen protections for minors, some approaches require the collection of personal data that may implicate COPPA’s restrictions. Similarly, in closing remarks, Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection, echoed this theme, emphasizing that COPPA “should not be an impediment to the most child-protective technology to emerge in decades.” Together, the remarks signaled potential FTC interest in re-opening COPPA rulemaking or utilizing other policy tools, such as non-enforcement policies, to facilitate broader deployment of age assurance technologies.
These signals are not fully surprising, especially given the Commission’s recent Disney settlement, which required the company to review whether YouTube content should be designated as “Made for Kids” unless YouTube implements age assurance technologies capable of determining users’ age, age range, or age category. Perhaps slightly more surprising were FTC officials’ comments regarding recent First Amendment challenges to age-verification and age-assurance laws, suggesting that the Commission may not view existing and ongoing First Amendment litigation as meaningfully constraining the Commission’s approach in this area.
The workshop also featured a range of speakers who offered notable insights, including Michael Murray of the UK Information Commissioner’s Office, who raised the possibility that certain age-assurance tools may qualify as regulated automated decision-making under the GDPR; Amelia Vance, Founder and President of the Public Interest Privacy Center, who provided a comprehensive overview of the evolving age-assurance legal and litigation landscape; and FPF Senior Fellow Jim Siegl, who debuted FPF’s updated Unpacking Age Assurance infographic.
Final Thoughts
Across states and at the federal level, child safety has emerged as a powerful policy lens through which lawmakers are advancing expansive regulatory frameworks, even as broader federal opposition to AI regulation—rooted in innovation, preemption, and constitutional arguments—continues to shape unresolved governance debates. As this dynamic plays out, efforts last year to slow or resist legislative action may prove less effective in 2026, as policymakers increasingly recalibrate their approaches rather than retreat from regulating perceived tech-related harms.
That’s all for now, thank you for reading. May the odds be ever in our favor this state legislative season, and see you in March!
The Algorithmic Update is a monthly newsletter highlighting key developments in privacy, AI, and tech policy. FPF members receive weekly legislative updates with deeper analysis and tracking—learn more here or contact membership@fpf.org.
Tatiana Rice is the Senior Director of Legislation at the Future of Privacy Forum (FPF).

