Happy New Year and welcome to the inaugural 2025 edition of The Algorithmic Update! This first Friday, we dive into the latest legislative developments and key discussions shaping the future of AI policy in the year ahead.
As will be the case for future posts, this one is broken down into three sections: (1) Top Legislative/Regulatory Developments, where we highlight the top legislative actions from the past month; (2) Bills to Watch, where we spotlight noteworthy AI legislation, including, this month, bills that have already been filed or carried over and bills we expect to see soon; and (3) The Landscape, where we explore the ongoing policy debates surrounding AI and highlight the pressing questions and perspectives shaping the AI policy space today.
TLDR:
Top Legislative/Regulatory Developments: A revised and pared-down version of the Texas Responsible AI Governance Act was filed by Rep. Capriglione, and the Oregon Attorney General issued guidance on AI governance under the state’s existing unlawful trade practices, data privacy, and anti-discrimination laws.
Bills to Watch: Several AI-related bills are already on the table in Virginia and Texas. Other states to watch include Connecticut, where a revised AI bill is expected; Colorado, where amendments to the Colorado AI Act are in the works; and California, where key AI bills that failed last session will be reintroduced.
The Landscape: A key question we expect to see play out this year is how Republicans (both state and federal) will approach AI policy, and, more importantly, whether their positions will be aligned or divided.
🏛️ Top Legislative/Regulatory Developments
1. Texas Representative Capriglione Files Revised Responsible Artificial Intelligence Governance Act
On December 23, Texas State Representative Giovanni Capriglione (R) filed HB 1709, the “Texas Responsible AI Governance Act” (TRAIGA). The press release outlines the bill’s goal to create a robust framework for ethical AI development and use in Texas, incorporating transparency and accountability measures, guidelines to reduce bias and misuse, enhanced data privacy protections, AI workforce training initiatives, and content moderation standards designed to safeguard free speech and lawful political discourse.
At a high level, TRAIGA appears to draw inspiration from various existing AI frameworks, including the Colorado AI Act, the EU AI Act, and the Utah AI Policy Act. However, it represents a distinct model that incorporates elements from these laws while diverging in significant ways and creating its own specific provisions tailored to Texas's unique policy goals. The bill’s regulatory requirements for businesses using AI are organized into the following key sections:
Duties for Developers and Deployers of AI Used in Consequential Decisions: Sections 551.001–.012 establish responsibilities for companies—excluding those classified as small businesses under Small Business Administration standards—developing, distributing, or deploying AI in consequential decision-making contexts that significantly affect individuals’ livelihoods and opportunities, such as hiring, education, and criminal sentencing. These responsibilities include ensuring transparency between parties and consumers, conducting impact assessments, providing certain consumer rights to explanation and appeal, and maintaining a risk management program. While this section appears strongly influenced by the Colorado AI Act framework, it features notable differences, including a more limited scope of applicable businesses and technologies, a new category of “distributors,” additional obligations for digital service providers and social media platforms, and the introduction of a regulatory sandbox program (discussed below).
Prohibited Uses of AI: Sections 551.051–.056 (Subchapter B) prohibit the development and use of AI in specific contexts, including human manipulation, social scoring, and generating non-consensual intimate images. Additionally, the subchapter bans the creation of biometric identification systems using web scraping and AI-created inferences of sensitive personal attributes from biometric data. While this section closely resembles the EU AI Act’s “prohibited use” categories, it diverges by not banning the use of AI for predicting criminality or emotion recognition (though emotion recognition was a prohibited use in the initial TRAIGA discussion draft).
AI Regulatory Sandbox Program: Sections 552.001–.102 create a regulatory sandbox that enables businesses to test and develop AI systems in a controlled environment, exempt from full regulatory compliance and protected from enforcement actions. During their participation, businesses are required to submit specific metrics and reports to the Texas Department of Information Resources. This framework is similar to the regulatory sandbox introduced by the Utah AI Policy Act.
Amendments to the State’s Comprehensive and Biometric Data Privacy Laws Regarding AI: Sections 541.051–.104 propose amendments to the Texas Data Privacy and Security Act (TDPSA) (enacted in 2023 by Rep. Capriglione and effective in 2024) to address risks associated with personal data used in or to train AI systems. The amendments require data controllers to disclose the use and sharing of personal data for AI purposes, respond to consumer inquiries about whether their data is used in AI systems and for what purpose, allow consumers to opt out of the sale of their data for AI purposes, and ensure reasonable data security practices for AI systems handling personal data. If enacted, the TDPSA would be the first U.S. privacy law to be amended to include provisions regarding AI (beyond “profiling” and/or “automated decisionmaking”).
Since Rep. Capriglione introduced the initial TRAIGA discussion draft in the fall of 2024—sparking controversy as arguably one of the farthest-reaching AI proposals (especially coming from a Republican lawmaker)—the filed version has been revised to substantially narrow the scope in several key areas, though the bill’s breadth of provisions remains substantial. A redline comparison, created using Draftable, is available here.
While we're still analyzing the nuances of the bill changes, some key revisions include:
The definition of "high-risk AI system" was revised to include any AI use that is a substantial factor in making a consequential decision, aligning more closely with the Colorado AI Act. However, unlike Colorado, Texas narrows “substantial factor” to situations where the AI output “weighs more heavily than any other factor” in the decision.
The definition of “algorithmic discrimination” was simplified to any deployment of an AI system that creates unlawful discrimination in violation of state or federal law.
The requirement for deployers to maintain human oversight was removed.
Developer documentation to deployers must include metrics used to evaluate system performance, including metrics set forth in the National Institute of Standards and Technology AI Risk Management Framework regarding accuracy, explainability, transparency, reliability, and security.
A provision was added allowing an impact assessment conducted to comply with a similar law to satisfy TRAIGA’s requirements, similar to the approach in the Colorado AI Act.
Emotion recognition was removed from the list of prohibited uses.
The consumer private right of action was removed.
What this could mean for the year ahead: The unexpected scope of a Republican-led proposal in Texas underscores AI regulation's growing prominence as a bipartisan issue, a trend highlighted in a recent op-ed by a bipartisan group of state legislators. This momentum suggests we can anticipate a significant increase in state-level AI legislation in 2025, potentially surpassing the volume seen in 2024. However, we may also see a shift toward states building on established models and frameworks, leading to greater uniformity in legislative approaches. This convergence could both streamline compliance for entities operating across states and accelerate the adoption of AI regulations nationwide.
2. Oregon Attorney General Issues Advisory on AI Governance Under Existing State Law
On December 24, Oregon Attorney General Ellen Rosenblum and the Oregon Department of Justice released advisory guidance (PDF here) on applying the state’s Unlawful Trade Practices Act (UTPA), Oregon Consumer Privacy Act (OCPA), and Equality Act to businesses’ use of AI technologies. The guidance underscores how these existing laws can address emerging AI concerns, including lack of transparency, bias, and the use of personal data in training AI models.
Key Points from the Advisory
Unlawful Trade Practices Act (UTPA): The guidance emphasizes that representations and communications made by or about AI to consumers, buyers, and the public fall under the UTPA’s scope. The advisory highlights several key obligations, including that entities must ensure that AI systems used to communicate with consumers provide accurate information, as misrepresentations—even in downstream uses—could lead to violations. Additionally, the state contends that the UTPA imposes an affirmative duty to disclose known limitations or inaccuracies in AI products or tools. These points appear consistent with prior guidance and enforcement actions from the Federal Trade Commission.
Consumer Privacy Act (OCPA): The advisory outlines how AI use and training intersect with the Oregon Consumer Privacy Act (OCPA), particularly emphasizing the use of personal data in training AI systems, including generative AI. It specifies that when personal data is used for this purpose, businesses must disclose it in their privacy policies and obtain explicit consent if the data is sensitive. For AI systems involved in significant decisions—such as housing, education, or lending—businesses are required to conduct data protection impact assessments and provide consumers the right to opt out. Notably, the advisory introduces a novel interpretation: developers using third-party datasets for AI training may be classified as “controllers” under the OCPA. As such, they would be held to the same standards as the original data collector, with model training considered a secondary use that mandates fresh, explicit consumer consent.
Equality Act: Lastly, the advisory briefly notes that businesses may violate the Equality Act if AI systems are used in ways that deny public accommodations or opportunities protected under the law. It explicitly warns against the use of AI trained on historically biased datasets.
What this could mean for the year ahead: With a forthcoming Trump administration, state attorneys general and executive agencies are likely to follow Oregon’s lead in leveraging existing laws to address AI governance more aggressively. States such as Texas and California are already moving in this direction. Last year, Texas Attorney General Ken Paxton settled an enforcement action against a healthcare AI company under the state’s Deceptive Trade Practices Act, alleging deceptive claims about the safety and accuracy of its AI system. Meanwhile, the California Privacy Protection Agency continues to advance rulemaking efforts to regulate automated decision-making tools under the California Consumer Privacy Act.
📜 Bills To Watch
Though legislative sessions have not yet started in most states, several bills are carrying over from last session or have already been filed or pre-filed. We’re paying close attention to:
Virginia HB ___: Delegate Maldonado (D) introduced HB 747 during the last legislative session, proposing obligations for developers and deployers of “high-risk AI systems.” The bill marked the first legislative effort to introduce a framework similar to those later introduced in Connecticut and enacted in Colorado. Although HB 747 was ultimately carried over to 2025, it has since undergone further review and recommendations from the Joint Commission on Technology and Science (JCOTS) AI Subcommittee.
Texas HB 1709 (Texas Responsible AI Governance Act) (discussed above)
Texas SB 668: While we’re tracking as many as eight AI-related bills already filed in Texas—most addressing deepfakes and nonconsensual intimate images—SB 668, filed by Senator Hughes (R) on December 19, introduces an additional private-sector AI regulation. Unlike TRAIGA, this secondary Texas AI bill adopts a narrower approach, emphasizing transparency by requiring companies above a certain revenue threshold to disclose information about the AI systems they use.
Based on current insights, we anticipate that numerous other states will introduce noteworthy AI legislation in 2025, including states where 2024 bills are likely to return and gain traction in the upcoming session:
Connecticut: In the wake of SB 2’s failure last year, prompted by Governor Lamont’s veto threat, Senator Maroney (D) is expected to reintroduce his AI bill—aligned with Colorado’s framework—as a priority item in the Senate.
Colorado: After the enactment of the Colorado AI Act last year, and backlash from local startups and businesses, Governor Polis, Attorney General Weiser, and Senate Majority Leader Rodriguez (D) published a letter outlining their intention to revise the law to address these issues. As a result, Rodriguez is expected to introduce amendments to the landmark legislation, potentially based on policy discussions and recommendations provided through the Colorado AI Task Force.
California: Assemblymember Bauer-Kahan (D) and Senator Wiener (D) have publicly stated their plans to reintroduce their respective AI bills: AB 2930, regarding automated decisionmaking tools, and SB 1047, regarding frontier models.
Vermont: Representatives Priestley (D) and Cina (D) stated in a recent press release that they plan to introduce an AI bill. Last session, Priestley introduced two AI bills: H 710, governing “high-risk AI systems” (like Colorado, Connecticut, Virginia, etc.), and H 711, governing “inherently dangerous AI.” While neither bill gained traction, Priestley will likely reintroduce a similar approach in the upcoming session, alongside her efforts to advance comprehensive data privacy legislation, which was vetoed last year, and children’s privacy and safety measures.
Oklahoma: Often under-discussed, Oklahoma's "AI Bill of Rights" (HB 3453) passed out of the state House last session. The bill sought to grant citizens certain AI-related rights, such as the right to be informed when AI is used in interactions and the right to be protected from algorithmic bias that could lead to discrimination. Despite its initial momentum, the bill did not pass both chambers, but remains a potential contender to resurface in the upcoming session.
⚖️ The Landscape
The Key Question for the Upcoming Year: How Will Republicans Approach AI Policy, and Will Their Positions Be Aligned or Divided?
In 2025, with Republicans controlling the White House and both chambers of Congress, as well as holding majorities in most state legislative chambers, important questions will likely arise about how the party will address AI.
Over the past few months, two main camps of Republicans appear to have emerged on the issue. One group broadly opposes any or most forms of AI regulation due to concerns about excessive government interference. They argue that heavy-handed regulations could create burdensome red tape, slow down innovation, and ultimately harm American competitiveness—particularly in relation to China. This camp also worries that such regulations could lead to censorship, stifling free expression and limiting the potential of AI technologies. An emerging group within this camp is also concerned about the incorporation of "DEI" principles into AI, fearing that AI systems could be made "woke." They point to incidents like the 2024 Google Gemini controversy, where AI generated historically inaccurate images, such as Black Vikings and Asian popes, which critics argued reflected a biased, politically correct agenda.
Another camp of Republicans supports AI regulation for reasons related to consumer protection and the desire to create a stable business environment (though perspectives from “big tech” and small businesses diverge significantly on this point). With growing public skepticism and distrust of AI, these lawmakers, in agreement with many Democrats, believe that balanced regulation could help build consumer confidence while protecting innovation. At the federal level, Republican leaders like Representative Obernolte (R-CA) argue that national AI regulation could prevent a fragmented patchwork of state laws that creates conflicting compliance requirements and stifles innovation.
Where will President-elect Trump land? The future of AI policy under a forthcoming Trump administration remains unpredictable, but several key factors are worth noting. While Trump’s campaign advocated for rolling back Biden’s AI executive orders, President-elect Trump had already laid the groundwork for AI policy during his first term with two executive orders on AI, upon which the Biden administration’s approach was largely built. It was also recently announced that the same team responsible for those policies will return in his upcoming administration. Even if Trump seeks to undo certain aspects of the Biden administration’s AI framework, he would likely preserve his own previous directives, which focused on the development of standards and principles for trustworthy AI.
Furthermore, Trump’s relationship with high-profile figures in the tech space, like Elon Musk, could influence his stance on AI. Musk has publicly supported the controversial California bill SB 1047, which was criticized by many for being overly restrictive, and, alongside other large tech executives, has advocated for pauses on AI development to assess and address critical safety risks. While many expect a full-scale rollback of regulations and initiatives under a Trump presidency, the reality may be more nuanced, with some elements of AI policy remaining intact or modified rather than entirely dismantled.
In Conclusion:
The AI policy landscape for 2025 is shaping up to be both dynamic and complex. AI regulation is set to take center stage, with states leading the charge and Republicans navigating the delicate balance between fostering innovation, ensuring consumer protection, and overseeing tech companies.
Stay tuned for next month's update, where we'll track the developments of this first month of the year, dive into these shifting dynamics, and offer insights on what’s to come. That's all for now—thank you for reading, and here's to navigating the tidal wave of change ahead in 2025. May the odds be ever in our favor!