On June 22, 2025, Texas Governor Greg Abbott signed House Bill 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), into law. TRAIGA is set to take effect for Texas businesses on Jan. 1, 2026. With TRAIGA, Texas joins California, Colorado and Utah in implementing state-level artificial intelligence (AI) governance laws regulating the private sector. Eyes are on Texas’ regulatory space as the state reinforces its role as a tech and business powerhouse with a new stock exchange and business court. Against this backdrop, Texas has passed a narrower, more targeted AI bill than its counterparts in states like Colorado.
Overview of TRAIGA
Originally drafted as a more comprehensive regulatory framework for AI in Texas, TRAIGA was scaled back through the legislative process. The final version focuses primarily on certain “discriminatory,” “harmful” and “manipulative” uses of AI, along with oversight and transparency requirements. Even though its original scope has been narrowed, TRAIGA still creates new compliance obligations and potential penalties for both public and private entities operating within Texas, as discussed in detail below.
Who’s Affected?
TRAIGA applies broadly to any person conducting business in Texas or developing, deploying or offering AI systems within the state. While its scope includes both public and private entities, it imposes stricter requirements on public entities than on private businesses.
Restrictions on AI Use
Under TRAIGA, Texas businesses are prohibited from using AI to intentionally incite violence or crime, infringe individual rights or unlawfully discriminate. (Notably, TRAIGA specifically provides that a showing of disparate impact is insufficient to demonstrate a business’s intent to discriminate.) Most, if not all, of these prohibited uses of AI track established (non-AI) laws in Texas. So, while TRAIGA introduces a new regulatory hook and new potential penalties for businesses that use AI in these prohibited ways, the additional compliance burden on Texas businesses does not extend far beyond existing legal obligations.
Additional restrictions set forth by TRAIGA apply only to Texas governmental agencies. These restrictions prohibit agencies from using AI for social scoring and subject them to additional transparency requirements. Most notably: (1) agencies must provide conspicuous notice to consumers that they are interacting with an AI system (even if it is obvious); and (2) agencies may not gather biometric data without the individual’s informed consent if doing so would infringe that individual’s rights.
Enforcement & Penalties
The Texas attorney general alone has authority to enforce TRAIGA; there is no private right of action. TRAIGA requires the attorney general to maintain an online mechanism for consumers to submit complaints about improper AI use. Where a complaint is received and acted upon, developers and deployers of AI systems are required to answer to the attorney general, including by providing information on the purpose, use and deployment of the AI system, the types of data used for training, the categories of inputs and outputs, known limitations of the system and the performance metrics used to assess it. Texas businesses would be well-advised to create and maintain these types of records as part of their regular business processes, rather than waiting until the Texas attorney general asks for them during a compliance investigation.
Texas entities will have 60 days to cure alleged violations of TRAIGA raised by the attorney general; after that, they could be liable for civil penalties ranging from $10,000 to $200,000, plus up to $40,000 per day for continuing violations.
Safe Harbors
Notably, there is a safe harbor from civil penalties for entities that follow the AI Risk Management Framework: Generative AI Profile, published by the National Institute of Standards and Technology (NIST), or comparable recognized frameworks.
Additionally, TRAIGA provides for a sandbox program allowing legal protection and limited market access for Texas businesses testing innovative AI systems for up to 36 months, so long as they comply with certain reporting and risk-mitigation requirements.
Additional Considerations
- Privacy Law Interplay: TRAIGA builds on Texas businesses’ obligations under the Texas Data Privacy and Security Act by requiring data “processors” to assist data “controllers” in ensuring the secure processing of personal data handled by AI systems.
- First Amendment Construction: TRAIGA provides that it cannot be construed to impose a requirement that adversely affects the rights or freedoms of any person, including free speech.
- Future Governance: TRAIGA establishes the Texas Artificial Intelligence Council to study and guide future AI policy and oversight.
- Federal Preemption? A pending federal budget reconciliation bill could include a broad preemption of state AI laws, which may effectively pause or nullify TRAIGA.
Key Takeaway
Texas businesses developing or deploying AI systems should review their AI tools for potentially illegal use cases and maintain documentation of the types of data used for AI training, the categories of inputs and outputs, any known limitations of their AI systems and the performance metrics used to assess them, to ensure compliance with TRAIGA when it takes effect on Jan. 1, 2026.