The House Select Committee on Artificial Intelligence and Emerging Technologies has released its final recommendations to the 89th Legislature. The report covers the use of AI, its impact on certain industry sectors, and policy surrounding AI. The Committee previously released an initial report in May of this year. See below for a spotlight on recommendations from the report.
Examining the current state of AI/ET and its uses by public and private actors in modern society
- Establish a comprehensive AI inventory for public agencies, requiring all Texas state agencies to conduct an annual audit of AI systems currently deployed, identifying their functions, data sources, and any potential risk of bias or misuse.
- Develop AI-specific training for public sector employees, focusing on responsible AI use, data privacy, and bias mitigation in decision-making processes.
- Mandate risk assessments for high-risk AI deployments by public and private actors, requiring a documented impact assessment of each system’s potential societal and legal effects.
- Implement transparency standards for high-risk AI in the private sector, mandating disclosure of data sources and algorithmic decision-making processes.
Determining the impact of the application of AI/ET on various sectors of society, including employment, healthcare, homeland and national security, and transportation
- Create AI workforce development programs prioritizing training for workers at risk of displacement due to automation.
- Provide upskilling training to prepare workers for the implementation of new technologies.
- Encourage employers to provide transition programs for workers displaced by AI, incentivizing them to offer retraining.
- Require transparency in AI through disclosure of the type of data used to program an AI system.
- Establish AI risk monitoring for high-risk AI systems used in consequential decisions.
- Develop standards for human oversight, ensuring AI systems used in high-risk situations operate with meaningful human control.
- Collaborate on AI security standards with federal entities, such as the National Institute of Standards and Technology.
- Implement protections addressing the use of copyrighted data and the unauthorized use of a person’s likeness.
Identifying policy considerations necessary to ensure the responsible deployment of AI/ET in Texas by both public and private actors
- Establish an Advisory Council to provide subject matter assistance to state agencies implementing regulatory guidance for licensed industries under their purview.
- Create a regulatory sandbox to allow innovation to thrive outside standard regulations.
- Require disclosure of the use of AI to consumers in high-risk AI interactions.
- Require AI systems to include measures that reduce bias and prevent algorithmic discrimination, and require AI system developers to certify, before deployment, that their systems are free from biases based on race, gender, or other protected categories.
- Strengthen laws to address emerging digital threats, including deepfake technology, election interference, revenge porn, and child sexual abuse material.
- Establish strict prohibitions on uses of AI that intrude on privacy, exploit vulnerable populations, manipulate individuals, or misuse sensitive data.
- Evaluate and update laws to assign clear liabilities for AI-related outcomes, ensuring accountability for developers, deployers, and users of AI systems.
- Make necessary adjustments to the Texas Data Privacy and Security Act to include AI and emerging technologies.