Happy February and Black History Month!
State legislative sessions are in full swing, with new bills, amendments, and hearings happening daily. With so much activity, it can be hard to know what’s worth your time and energy. That’s where this newsletter comes in—to help you cut through the noise and start your week with a clear view of what matters.
As I shared in my initial LinkedIn post, this newsletter is about more than just updates; it’s about understanding the full picture—the debates, trade-offs, and perspectives shaping the future of AI policy. The more we can see past the headlines, the more productive and informed engagement we’ll (hopefully) have in these critical conversations.
I appreciate your patience with this slightly delayed edition—there was a lot to cover this month! Moving forward, you can expect these updates on the first Monday of the month (next one coming March 3). Alright, let’s dive in.
TLDR:
Top Legislative/Regulatory Development: Virginia's House passed HB 2094, a 'high-risk AI' bill modeled after the Colorado AI Act, though with a narrower scope than its Colorado counterpart. Notably, the amended version removed “integrators” and “distributors” from the framework, reflecting an ongoing debate over whether AI regulations should account for middle-tier entities in the AI lifecycle or stick to the more traditional developer-deployer model.
Bills to Watch: High-risk AI and generative AI legislation are key trends this session, with notable bills in Virginia (HB 2094), New Mexico (HB 60), Nebraska (LB 642), Maryland (SB 936), Utah (SB 226), and Washington (HB 1168); and we await how bills sponsored by AI leaders in Connecticut (SB 2), Texas (HB 1709), and Montana (SB 212) unfold.
The Landscape: Beneath the concerns over DeepSeek and U.S. AI leadership lies a more substantive policy debate over whether small businesses and startups should be regulated or exempt under AI laws. Some argue that excessive regulation could stifle innovation and limit competition, while others warn that broad exemptions may create accountability gaps for high-risk AI applications.
🏛️ Top Legislative/Regulatory Developments
1. Virginia House Advances HB 2094
Last week was "crossover week" in Virginia, where HB 2094—a bill modeled after the Colorado AI Act to regulate "high-risk AI"—passed the House along party lines and advanced to the Senate with a committee substitute. One of the largest changes in the revised bill is the removal of "integrators" and "distributors" from its scope, limiting responsibilities solely to developers and deployers (as in the Colorado AI Act). In the original version, integrators were independent third parties who incorporated high-risk AI into software applications for market release, while distributors were entities making AI systems available in the state.
Why were these new parties created in the first place, and why were they removed? The inclusion of integrators and distributors in Virginia's HB 2094 was likely an attempt to account for middle-tier entities in the AI lifecycle that exist between developers (who build and train AI models) and deployers (who use AI in consumer-facing applications). Though the developer-deployer distinction first emerged in U.S. AI legislation with California’s AB 331 in 2023 and was later utilized in Colorado’s AI Act, some argued that AI supply chains also involve integrators (who embed third-party AI into broader applications) and distributors (who repackage and resell AI tools), creating potential regulatory gray areas. Despite these considerations, Virginia ultimately removed integrators and distributors in the substitute bill, reverting to the more commonly used developer-deployer structure. Some stakeholders argued that adding these roles introduced new ambiguities and potential loopholes, while others viewed the change as an expansion of liability. From the discussions, it seemed that while stakeholders reached significant compromises, the differing perspectives ultimately weren’t fully resolved—or at least not in a way that outweighed sticking to the two-party framework used in other U.S. AI laws and legislation.
With Virginia’s legislative session adjourning on February 22, the bill will need to move quickly, and it remains to be seen whether the state’s Republican governor will sign or veto it. However, as highlighted in Keir Lamont’s Patchwork Dispatch, the Virginia bill takes a notably narrower approach than the Colorado AI Act (with the exception of new generative AI transparency requirements), potentially setting a more industry-friendly precedent for other states.
📜 Bills To Watch
It’s the peak of the state legislative frenzy—a familiar cycle for those who have worked in state policy. While a flood of tech-related bills is not uncommon, what advances or stalls in the next two months will be the true indicator of where things stand. As I discussed in Friday’s FPF LinkedIn Live, amid the surge of AI legislation, some key bill categories have emerged so far that warrant attention due to their advancement. (Note: these are bills to watch closely because they have either progressed or have the potential to move forward—they do not represent a comprehensive list of all AI-related legislation being tracked.)
“High-Risk AI” / Automated Decisionmaking Bills: These bills govern AI in high-risk ‘consequential decision’ contexts, such as the provision or denial of employment, lending, and healthcare. While most follow the general framework of the Colorado AI Act, they tend to have a notably narrower scope, applying only when an AI system serves as the principal basis for a consequential decision—defined, in turn, as when there is minimal human involvement or intervention. Thus, AI systems with meaningful human oversight are likely excluded from regulation under this approach (this contrasts with Colorado’s broader “substantial factor” standard, which includes AI that has the capability to alter a high-risk outcome). However, these bills also introduce generative AI transparency requirements not included in Colorado’s SB 205.
Virginia HB 2094: Passed House (narrower scope than Colorado + generative AI transparency requirements)
New Mexico HB 60: Passed House committee (similar scope to Colorado + generative AI transparency requirements)
Nebraska LB 642: Heard in committee yesterday (narrower scope than Colorado while introducing a novel two-tiered framework, where business compliance obligations apply only when AI is used with no human involvement, but consumer transparency requirements apply when AI is the principal basis—a new approach worth watching as consumer advocates and industry seek a compromise that pairs advocates’ focus on consumer transparency with industry’s need for operational clarity)
Maryland SB 936: Introduced this week (narrower scope than Colorado + generative AI transparency requirements + private right of action) by the sponsor of last year’s enacted government AI use bill; it’s scheduled for a committee hearing on February 27.
Generative AI Transparency: This often-overlooked category of AI legislation aims to establish transparency obligations for generative AI. Unsurprisingly, these bills closely follow last year’s enacted legislation — California AB 2013 and SB 942, and Utah’s AI Policy Act.
Washington HB 1168: Passed committee (California AB 2013 model).
Utah SB 226: Introduced this week by the Republican sponsor of the enacted Utah AI Policy Act (creates additional generative AI consumer disclosure requirements to the Utah AI Policy Act, including in ‘high-risk’ interactions).
As noted above, these generative AI transparency requirements are increasingly being incorporated into high-risk AI bills, including: Virginia HB 2094, New Mexico HB 60, and Maryland SB 936.
A few others worth watching, even with limited movement so far:
Connecticut SB 2: Though the official bill text is still pending, Senator Maroney released a draft for the Multistate AI Policymaker Public Feedback Sessions, which was updated Friday (text here) for the NCSL Task Force on AI, Cybersecurity and Privacy meeting in Tampa this past weekend. At a high level, the draft proposal diverges from Colorado in the following ways:
Like earlier versions of Virginia HB 2094, it includes obligations for integrators;
It contains more prescriptive categories of “consequential decision,” such as specifying that “employment decisions” include decisions concerning hiring, termination, compensation or promotion, or automated task allocation (appears influenced by California AB 2930); and
Like the 2024 version of CT SB 2, it includes provisions regarding general purpose AI models, synthetic content, and nonconsensual intimate images.
At the same time, Connecticut Governor Lamont—who previously threatened to veto SB 2 last year—introduced his own AI legislation, SB 1249, earlier this week. The bill includes several key provisions, such as establishing a regulatory sandbox (potentially influenced by the Utah AI Policy Act) and explicitly clarifying that entities cannot use AI as a shield against liability under existing legal claims. While Senator Maroney's approach emphasizes affirmative compliance requirements and transparency measures to mitigate harms in high-risk AI applications, Governor Lamont's prioritizes AI innovation in the state while leaving questions about AI-related harms to be resolved through litigation, enforcement, and judicial interpretation. This contrast highlights a fundamental policy divide: Should AI governance be built on regulatory standards and requirements upfront, or rely on case-by-case legal challenges to determine accountability? While a deeper analysis is a topic for another Algorithmic Update, it will be interesting to see whether these competing approaches can be reconciled into a workable compromise as the legislative process unfolds.
Texas HB 1709: Though Rep. Capriglione’s Texas Responsible AI Governance Act has yet to advance in committee or receive a public hearing since its December introduction, it has drawn significant attention and opposition. Still, there’s reason to watch this bill. Capriglione, who sponsored the state’s data privacy law—now aggressively enforced by the Attorney General—has a track record of advancing tech policy, though it took multiple sessions to pass privacy legislation. His recent appointment by the Governor to lead the state’s cybersecurity initiative, along with the Governor’s recent ban on Chinese-backed AI and social media apps, suggests he may have the technical influence and momentum to push this legislation ahead, at least in his own chamber.
Montana SB 212: Sen. Zolnikov, who sponsored the state’s enacted data privacy law in 2023, has introduced a novel AI framework featuring a “right to compute” provision. This section would prohibit the government from restricting AI development or use unless it demonstrates necessity and narrowly tailors regulations to public health or safety concerns. The bill also includes an AI safety provision, requiring deployers of AI-controlled critical infrastructure to ensure a fail-safe mechanism, allowing human control to be restored within a reasonable timeframe. Amidst opposition to AI regulations by libertarian organizations and venture capital firms, the bill may be one to watch as a potential test case for whether this “right to compute” approach gains traction in other states or influences broader policy debates.
⚖️ The Landscape
Should Small Businesses and Startups Be Exempt from AI Regulation? Balancing Innovation, Competition, and Consumer Protection
The question of whether small businesses and startups should be exempt from AI regulations has taken on new urgency in light of recent news regarding DeepSeek—a Chinese AI model that reportedly rivals American industry leaders at a fraction of the cost and compute. This has intensified concerns over U.S. AI competitiveness while also challenging the notion that only large, well-funded companies can drive large-scale AI innovation. Startups, VCs, and libertarian-leaning voices—a newer constituency in tech policy—argue that new regulatory burdens would disproportionately hinder smaller tech companies, stifling competition and reinforcing fears that regulation could entrench dominant players through regulatory capture.
Prominent venture capital firm a16z recently cited DeepSeek’s release as evidence that U.S. companies are already at risk of falling behind, not just in theory, but in reality. They argue that a fragmented regulatory landscape places a disproportionate burden on startups, which lack the legal and engineering resources of large firms to manage compliance. Unlike major tech companies with in-house compliance teams, startups often operate without full-time legal counsel, making it harder to navigate varying state laws without diverting resources away from core product development. As a result, they contend that "patchworks of state laws may burden large tech platforms, but they have the power to cripple Little Tech and hinder American efforts to compete with AI development in other countries."
On the other hand, consumer advocates and civil rights groups argue that exemptions for small businesses could weaken critical protections. They emphasize that existing consumer protection and civil rights laws apply universally—regardless of company size—and warn that broad carve-outs could create regulatory gaps. They point to AI’s role in high-risk areas such as hiring, lending, and healthcare, arguing that smaller entities deploying these systems should still be accountable for potential harms like discrimination, privacy violations, and unfair decision-making.
A Consumer Reports survey reflects these concerns, showing that most Americans are uneasy about AI-driven decisions in critical life areas. Additionally, 83% of respondents said they would want to know what information an AI system used to determine their eligibility for a job interview, and 91% said they would want a way to correct inaccurate data. Advocates argue that broad small business exemptions could undermine transparency and accountability efforts, especially when small companies can still impact thousands of consumers. Additionally, given AI’s complexity and opacity, they contend that existing legal frameworks are not keeping pace, making it difficult for courts, consumers, and regulators to assess compliance or mitigate harms. As a result, they argue that new, standalone AI laws for any developer or deployer in these high-risk areas are needed to strengthen transparency mechanisms and create clear incentives for accountability.
At the same time, AI proposals vary in scope and approach to small business exemptions. While some startup representatives express concerns about regulatory burdens, others point out that many of the most contested proposals—such as those in Colorado and Texas—would already exempt most startups or would not apply to the vast majority of the startup ecosystem. To provide context, here’s how some key AI legislative proposals address small business exemptions:
Colorado AI Act: Regulates entities that deploy or develop high-risk AI (such as automation or predictive models used for employment, lending, and healthcare decisions). Deployers with fewer than 50 employees who do not train a high-risk AI system with their own data are exempt from certain requirements, such as maintaining a risk management program, conducting impact assessments, and providing public disclosures. However, they are still subject to a duty of care and must provide consumer notices and rights. Small developers are not exempt.
Texas Responsible AI Governance Act: Exempts all small businesses as defined under Small Business Administration (SBA) standards, which vary by industry but generally apply to 99.9% of U.S. businesses.
California Generative AI Laws (AB 2013 and SB 942): No small business exemptions.
In Conclusion:
As AI takes center stage in state legislatures and national policy discussions, the noise of high-level debates can sometimes drown out the real, solution-driven work happening on the ground. Policymakers and stakeholders are actively working to balance innovation, competition, and consumer protection, but too often, these efforts get lost in broader ideological battles—making it harder to focus on practical, effective policy solutions.
Yet, beneath the noise, there is real progress. The challenge isn’t choosing between innovation and accountability—it’s finding the right balance to foster responsible AI development. The more we focus on constructive dialogue and tangible solutions, the better positioned we’ll be to shape AI policy that works.
That’s all for now—thank you for reading, and here’s to cutting through the noise, staying focused on solutions, and shaping AI policy that works. See you next time!