The House Select Committee on Artificial Intelligence & Emerging Technologies met on April 29 to take invited and public testimony on the current state of AI tech, uses by the private sector, impact on various sectors of society, and potential public policy considerations. The hearing notice is available here.

This report is intended to give you an overview and highlight the various topics taken up. It is not a verbatim transcript of the discussions but is based upon what was audible or understandable to the observer.

 

Matthew Lease, UT Austin

  • Provides examples of what AI is, responsible AI, AI governance, and AI concepts & terminology
  • Provides overview of generative AI, used not only in speech & text generation, but also in directions, shopping recommendations, score evaluations, etc.
  • AI also creates risks of harm, e.g. Tesla self-driving car crashes; should also be on the lookout for cases where harm is less than with human operators, as there are possibilities to save lives
  • Generative AI can lead to risks like impersonations of public figures, fake news (e.g. 2023 Pentagon explosion story)
  • Highlights 4 different companies (IBM, Microsoft, Apple, etc.) using AI and running into problems; generative AI can exacerbate these problems
  • UT Austin has an initiative called Good Systems to explore AI tech that can benefit society while minimizing harm
  • Responsible AI dev needs diverse input
  • Ethical AI, responsible AI, AI governance, etc. are all terms for developing AI tech safely and with an eye to potential harm
  • Provides an overview of AI, automating intelligence and cognitive tasks
  • AI often gets integrated in applications & you see the tech disappear, becomes “invisible technology;” AI automates predictions, recommendations, and decisions
  • “Human in the loop” is a common term nowadays which puts a human reviewer somewhere in the decision-making process
  • Describes algorithms, steps that computers take to perform tasks; leads into terms like “algorithm auditing” and “algorithm accountability”
  • In the 1980s, researchers tried to map human decisions to a series of “if/then” statements, but this was difficult to implement as people often did not remember why decisions were made in certain ways; newer tech tries to let the computer figure out the patterns of decision making
  • AI has gone through a series of “hype cycles” where AI attracts unrealistic expectations; when these aren’t met it leads to disillusionment, then productive uses are found and the tech is often not called AI anymore; researchers are concerned that current expectations will lead to another downturn
  • Provides overview of machine learning, looks at patterns in data to derive conclusions or predictions
  • Deep learning is using larger models to take advantage of larger data sets; most of the conversation surrounds size of data, but data quality also matters; can lead to poor responses when low-quality data is used like web content
  • Data-centric AI is a focus on curating and improving data rather than the model itself
  • Watchful of consistent bias v. random errors; large data input tends to solve for random errors, but can reinforce consistent bias
  • Provides example of determining patterns via the most common ending to the phrase “Monday night…” being “football,” though other endings to “Monday night” exist (see the brief sketch at the end of this section)
  • Language models have existed for decades, but what has changed recently is the size of the data being used, the size of the models, faster computing power, etc.
  • Some models are closed with access via limited APIs, some models are open source; with open source models you know exactly how decisions are reached, but this also leaves them open for any number of uses
  • AI can disguise bad info by being seemingly fluent, AI “hallucinations” can be hard to detect and can influence people
  • Citations can help offset the issue, as can active search engine retrieval; citations can sometimes themselves be hallucinations
  • Unsupervised AI finds patterns without any human input; supervised AI is when AI is given a specific task with constraints
  • Important to think about the people involved in the supply chain of data feeding the AI, e.g. who is flagging content or supervising output
  • Filters can help, e.g. blocking users from asking for certain outputs and blocking the model itself from producing bad outputs; these guardrails are hard to put into practice
  • Longoria – Going back to hallucinations; can you explain the concept of black boxes?
    • Lease – Black box is when we don’t know how the algorithm worked to produce an output; as models get trained on bigger and bigger data and models get larger, hard to determine how the model works
    • Trying to make models more transparent & less opaque
  • Longoria – Any suggestions on working around this?
    • Lease – On the transparency of the model; when we have retrieval-augmented generation, can ask for the provenance of the evidence, which enables the consumer to check the data themselves
    • Evidence is also “buyer beware” – would need to check that too
    • Need to ensure human oversight is involved in the training of the model
  • Longoria – Asking for the data would be evasive?
    • Lease – Is about asking what data was used for the model
    • Can ask the model to give the provenance of the data so the user can check for themselves; models are dated and need to be continuously updated
  • Longoria – Needs to be a requirement of the model to tell you where it got the data from?
    • Lease – Would be a good idea
  • Orr – Red teaming?
    • Lease – Don’t know of red teaming in the public sector, but NIST has put together a framework
    • DID side is certainly doing this, but not talking about this
  • Walle – Mentioned something about bias for those with “darker skin?” Can you speak in relation to who looks at the data, etc.
    • Lease – There have been a variety of stories since roughly 2017, e.g. camera tracking for people with darker skin colors, Google AI mis-categorizing people’s faces as animals, etc.
    • One of the reasons why it is important to have diverse people involved in development is that they will think about these aspects; products need to make sure they are making an effort to address these issues
    • Also a problem in data, e.g. face data collected or sampled from one group of people over another
    • AI can also grab biased data from the web, avoiding gaps starts with ensuring data is drawn from diverse sources; can also happen on the modeling side
  • Walle – Gives me a much better understanding that we need to be cognizant about this; concerned that we account for these biases
    • Lease – As AI becomes more prevalent and powerful, need to be aware of these issues and possible impact
  • Walle – With rural communities, can access to internet be a challenge, e.g. may have a device but not internet access
    • Lease – If you train AI on social media asking questions about regions with low internet connectivity, probably won’t get a very good answer about those regions
    • Certain groups are more or less represented in data sets and certain data sets are used because they are convenient, not because they are quality
    • Appropriate data selection is important
  • Chair Capriglione – Regarding AI governance, important to have proper governance structure when implementing AI tools; privacy and security are two of the more important ones, can you get into governance a little bit & why that matters?
    • Lease – Range of issues go under this and addressing each one requires a little different attention
    • For security, think the most about misinfo, disinfo, info integrity, etc.
    • Brookings ran an article about potential for disinfo on election day surrounding polling locations being open, this is where you need a human in the loop
    • For privacy, a lot of data is out already & need to think about this
    • Sustainability gets a lot less attention, e.g. training models on massive super computers and using large amounts of energy
  • Walle – On disinformation, elections are run by local counties & would likely take a lot of staff time to troubleshoot or police the info; are you recommending ramping up local county election divisions for AI?
    • Lease – Need more people involved, but where they are is flexible, e.g. working at the social media companies
    • Social media companies are generally rewarded based on interactions and controversy gets clicks, so there are challenges in how disinfo spreads there
    • Pretty troubling situation, unclear how good of shape we’ll be in by the time of the next election
  • Walle – Seems profoundly disturbing, do you think we would be ready for a disinfo campaign from foreign adversaries?
    • Lease – Probably above my paygrade
    • Highlights issue with Jade Helm and foreign disinfo campaign leading to Gov. Abbott taking action
    • Also protests in 2016, 2017 where Russian users got people on both sides to protest at a mosque
  • Chair Capriglione – On AI recommendations, AI directs towards different products rather than browsing in person; how does this affect human behavior?
    • Lease – Don’t see much outside the initial results or recommendations & people have limited time to do due diligence and check; absolutely has an influence
    • Trying to look at this in encouraging certain behaviors, e.g. a couple of seconds delay before sharing a story
    • Recommendations have a tremendous influence on behavior
    • Brick & mortar stores will have cameras with facial recognition, possibly digital displays that advertise to you specifically as you walk around; may get closer to digital recommendations style over time
  • Chair Capriglione – Asks about data bias and tools to mitigate, labeling has its own issues, etc.
    • Lease – Might draw samples at random, but then check the balance & correct after the fact
    • Struggle a lot with scale versus quality, e.g. if data is biased, more data just reinforces the bias
    • Automatic translation between languages was one of the early successes, many governments have translations of legal documents required so data was already available to cover the use case
    • Also have cases where we’re trying to do something specific like spam filtering, users labeling some messages as spam is helping label data
    • Bias can also occur in the annotation and labeling process, can occur particularly when labeling task isn’t specified and the labeler falls back on gut feelings
  • Chair Capriglione – Explains how errors will show up, deliberate or unintentional
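
Illustrative note (not part of the testimony): a minimal sketch of the next-word-prediction idea behind the “Monday night…” example above. The tiny made-up corpus and counts below are assumptions for illustration only; production language models do the same kind of statistical continuation with vastly more data, parameters, and compute.

```python
# Toy bigram model: count which word most often follows each word in a
# (made-up) corpus, then pick the most probable continuation.
from collections import Counter, defaultdict

corpus = [
    "monday night football",
    "monday night football",
    "monday night raw",
    "monday night dinner",
]

next_word = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        next_word[prev][nxt] += 1

# Most common continuation of "night" in this tiny corpus is "football"
print(next_word["night"].most_common(1))  # [('football', 2)]
```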

 

Lt. General Richard Coffman, Army Futures Command

  • With new chips & incredible amount of data available, can apply AI tech to more areas, but models will always make mistakes & it is important for humans to be involved
  • Confidence levels are important, needs to be better than human, which is a very low bar in some cases & higher in others
  • Not all AI aspects advance at the same pace, detection tasks skyrocketed and have leveled off now, language models are starting to skyrocket now
  • Fair & prudent to look at bias in the data and models
  • Might not be one answer to solve issues with AI; humans get to determine what machines do and do not do, then we put rules into place to make sure there is never autonomy for the model
  • Army Futures Command is working with AI every day, looking more at replacing staff and not the commander, trying to use AI to make better decisions based on real-time data
  • Working to train algorithms with real and synthetic data
  • Adversaries are not acting ethically, e.g. night-sight shoot on sight; our systems must identify if the target is friendly or not
  • Looking at using AI to detect chemical agents, monitor power grids, rescue operations, etc.
  • Need to develop federal & state research & academic partnerships; Army Futures Command is partnered with most academic institutions in TX
  • Need policies that protect privacy & should hold nefarious actors accountable
  • Chair Capriglione – You mentioned that we should never have autonomy; could we have autonomous weapon systems?
    • Coffman – Yes, and we have some today, e.g. had some missile intercept systems & knew exactly how these would perform
    • Possible for machines to identify enemy vehicles for targeting, US Army tries to make sure human is in the loop for decision making
  • Chair Capriglione – AI being used to identify targets, but human still pushes the button
    • Coffman – Absolutely, AI sorting data is helpful but human must always make the determination
  • Chair Capriglione – How much of a role do you see for AI on the battlefield?
    • Coffman – Predictive models will factor into ammunition manufacturing etc., based on expected demand; supply chain will be completely covered with AI/predictive intelligence
    • Have heard of sensing on the battlefield; don’t know we’ll get to the “invisible battlefield” concept, but EM signature, web signature, etc. will be detectable
    • Could cause a deterrence for future conflicts
  • Chair Capriglione – Where are we as a country using AI in homeland security versus other countries?
    • Coffman – Very good in many areas and need to step it up in other areas
  • Chair Capriglione – What are the risks in us not keeping up with quantum computing?
    • Coffman – Need to be fully involved in quantum encryption, sensing, etc.; a lot of money going towards that
  • Walle – You mentioned federal & state collaboration with academic institutions, can you expand? Where could the state and federal gov work together on?
    • Coffman – Joined together with TAMU and UT systems; everyone is focused on solving the problem of competing with adversaries to get robotics and AI systems in the hands of personnel
    • Army Futures Command will probably not develop the quantum computer, but need to look into how to apply the benefits of quantum computing ahead of need

 

Lt. Steven Stone, Texas Department of Public Safety

  • DPS currently uses AI in a variety of data analysis functions, detection, turning big data into small data, identifying trends, hotspots, etc.; highlights how AI helps in identifying victims and perpetrators of CSAM
  • Laws are slow to catch up; having issues with the legal requirement to identify a known victim of CSAM, as AI can composite images and make identification difficult
  • With deep fakes, only allowed to take action if videos are sexual in nature, law does not address still images
  • AI image generators that can make illegal content are proliferating
  • AI is also used in scams, e.g. voice generation or text scams
  • Chair Capriglione and Stone discuss how AI can generate CSAM
    • Stone – DPS works with other entities and conducts torrent investigations to track and investigate
  • Chair Capriglione – You mentioned how laws may not fully allow for prosecution, particularly with still images; can you go into what you need to show to be able to prosecute?
    • Stone – With the current law, there needs to be a defined victim, but AI can now composite images
    • Some tools can help detect if an image is AI-generated, but still need a defined person
  • Leach – In a situation where someone is apprehended for CSAM with thousands of images with different individuals; what happens?
    • Stone – Under current law, if we are unable to prove those images are in the likeness of a child, difficult to prosecute
  • Leach – Why wouldn’t the accused individual claim all images are AI?
    • Stone – They could, but there are hash sets (digital fingerprints) of known CSAM & often these people will have those images (see the brief sketch at the end of this section)
  • Leach – What tools do prosecutors need?
    • Stone – If the intent of the law is to discourage or make CSAM unlawful if it depicts a child, then the wording of the law needs to be changed so it isn’t limited to video and doesn’t require an identified child
    • Ability to file charge should not be based on need to identify a victim
  • Leach – Similarly, I think we need to reform revenge porn laws to cover generated images
    • Stone – We do have a law that accounts for videos, but not still images
    • For still images, the best we could charge is Display of an Obscene Image which is a Class C misdemeanor
  • Chair Capriglione – So someone could take images of real children, use an AI tool to modify and create CSAM, currently this could not be prosecuted?
    • Stone – Not without an identified victim & images must be in likeness of the victim
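
Illustrative note (not part of the testimony): a minimal sketch of the hash-set (“digital fingerprint”) matching Lt. Stone describes for known CSAM. The directory, hash set, and use of SHA-256 are assumptions for illustration; production tools typically rely on perceptual hashes (e.g. PhotoDNA) that survive re-encoding, which a plain cryptographic hash does not.

```python
# Match seized files against a set of known-material fingerprints.
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def match_known(evidence_dir: Path, known_hashes: set[str]) -> list[Path]:
    """Return files whose fingerprints appear in the known hash set."""
    return [
        p for p in evidence_dir.rglob("*")
        if p.is_file() and fingerprint(p) in known_hashes
    ]
```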

 

Nathaniel Persily, Stanford Law School

  • Speaking on implications of AI for democracy in elections, can also speak to other issues brought up in testimony
  • Not sure about effects of AI on democracy; AI enhances ability of good and bad actors, foreign and domestic, to achieve the same goals they already have
  • Socio-political forces at work are much more important than the tech, though the tech exacerbates problems
  • Majority of Americans believe misinfo spread by AI will have an impact on who wins the 2024 election; still a minority of Americans that have used AI tools, but people are concerned
  • Not surprising given coverage of AI which has been apocalyptic
  • Highlights examples incl. AI-driven home shopping network in China which can respond to questions, 24/7 constant running debate between Biden & Trump AI avatars, etc.
  • Mere fact that there are millions of examples of deepfake tech doesn’t mean there will be an impact on the election; already have an avalanche of materials, question is where & to what extent it will have an impact
  • Highlights examples of avatars trained for responses, can have AI avatars working for campaigns & this is already being used in some Congressional campaigns; will likely become more frequently used
  • Can also use AI to create campaign materials, provides example of a campaign jingle created by an AI
  • Myopic to focus on issues we’ve traditionally been concerned about like disinfo and assume AI is about this; there is a problem with disinfo, but AI mostly makes generating this info easier, while amplification and distribution are still controlled by the traditional channels
  • One of the things that makes AI different is the availability of open source tools, these tools are being fine-tuned on the dark web to generate illegal material like CSAM & this is threatening to upend the entire enforcement apparatus surrounding CSAM; have a SCOTUS case that stated virtual CSAM without a defined victim is constitutionally protected
  • Overreaction to AI is the bigger danger; have been living in a world with disinfo and shallow fakes so far
  • Seeing trends that people are becoming better at detecting false info, but not as good at detecting true info
  • The more we overemphasize the danger of AI, the less people will be able to identify true material and the more often people will be able to deny true info with claims that it is disinfo
  • Highlights AI usage in elections already, like signature validation, detection of fraud, etc.
  • Stanford is developing a report on AI governance & can provide to the committee
  • CA has proposed a bill banning deepfakes in political advertising, most policy activity has surrounded transparency, etc.
  • Chair Capriglione – What do you see as the biggest risk between now and November?
    • Persily – Worried that the overreaction of AI influence will lead people to increasingly stop believing in true information
    • Risk that a deepfake will convince one person one way or the other is comparatively rare; political content is rare on the average person’s content feed
  • Chair Capriglione – It seems to me this is correct with more recent elections, people will believe something to be true, will be shown evidence that it is not true, and then the evidence is not believed
  • Chair Capriglione – Have noticed that some political information has been suppressed in social media
    • Persily – This has been admitted outright, e.g. Threads is trying not to be a political platform and content lessened

 

Sam Derheimer, Hart InterCivic

  • Speaking on AI impact on TX’s election, voting tech, and voting officials
  • Provides overview of Hart InterCivic
  • Works with DHS to identify potential election interference, also works with information sharing & analysis centers (ISACs)
  • 2024 presidential election will be the first one run after the proliferation of generative AI
  • Good news is AI has no role to play in the election systems, including ballot casting systems; have strict and straightforward policy to not use AI techs in development and operation of election systems; risks outweigh the potential benefits
  • Have controls in place to block malicious applications, including AI tools; since voting tabulation tools are not connected to the internet, TX has mitigated generative AI impact
  • DHS observed in a report earlier this year that generative AI will likely not introduce new risks, but enhance existing risks; most likely targets were not systems, but elections processes, offices, and individuals, i.e. phishing attempts against election workers
  • Have spoken with election officials who are confident in election machines & voting systems, but also concerned about disinfo, phishing, etc.
  • Election community is highly aware of these emerging threats & have been ringing alarms
  • Best tools to mitigate are 1) implementing controls to combat phishing and social engineering like MFA, email authentication protocols, 2) limit possibilities for impersonation by having safe personal cybersecurity practices, 3) plan for disinfo content and plan for this disinfo to exceed capacity to mitigate
  • If in doubt, should go directly to local election officials, election officials should be the trusted source of info in local elections
  • Walle – Counties run the elections, what are your thoughts on role of TX?
    • Derheimer – Should help socialize fact that AI plays no role in voting systems and tabulation devices
    • Should support SoS asking for more training resources, etc. and help push these to local election officials
  • Walle – Asks about best practices
    • Derheimer – Institute the most advanced best practice, also recognize that it isn’t just tech, but people so go through multiple trainings that are updated for new threats
    • Continually socialize employees against phishing threats
    • Phishing and social engineering attacks are getting more sophisticated
    • Have spoken with elected officials who are concerned about voice cloning, e.g. using voices to say the election is on Wednesday, etc.

 

Lucas Hansen, Civic AI Security Program

  • Provides overview of CivAI
  • Provides example of using AI to conduct personalized phishing attacks, personalized persuasion material (persuasive political material targeted to specific people)
  • AI makes it much cheaper to run scams, easier to create fake images, etc.; a big part of the story is the change in economics of cyber crime
  • Provides examples of image generation, hyperlocal messaging, voice cloning
  • Orr – Voice cloning tech is open source, so free to use?
    • Hansen – The voice cloning is not free, most are provided via tech from ElevenLabs; free models are much worse
    • ElevenLabs is the only one open to the public and does not do a lot of moderation on voice cloning tech; most applications seem like they would be nefarious, so it is interesting that they offer this
    • Without ElevenLabs, seems like plausible voice cloning wouldn’t exist during the 2024 election
  • Orr – What do you pay for this?
    • Hansen – Maybe a fraction of a cent for the example, charged per word
  • Hansen showcases other examples of generated content incl. fake news stories, fake tweets about whether polling locations are open
  • Tweets spreading fake closures could go out in many rural communities around the country at similar times
  • Examples of generative images were from Stable Diffusion, audio was ElevenLabs, and fake news stories were ChatGPT
  • AI has been a topic of conversation since the 1970s, but it is different this time as it is the same tech producing all of the various materials; all neural nets trained on GPUs
  • Term for this is the “bitter lesson”: you don’t need to know anything about the problem you’re trying to solve, just throw more compute at the problem, and interference often makes it worse, e.g. state-of-the-art translation works best without translators involved
  • Market is betting incredible amounts of money on this, Nvidia produces GPUs and is worth considerably more today than 3 years ago
  • A lot of the downsides are due to decreasing costs, so one solution is to make it more costly, e.g. watermarking can be defeated, but it raises the skill bar and makes it more annoying to copy (see the brief sketch at the end of this section)
  • Can create liability law for generating nonconsensual deepfakes; highlights how making pirated content more explicitly illegal chilled pirating
  • AI companies are in an arms race towards cheaper tools; government can help solve the coordination problems in the industry
  • Need to set up incentives for AI companies so they don’t feel pressured to be unsafe
  • One danger is parasocial relationships with content creators, these relationships could be automated and tailored to users & could encourage parasocial relationships
  • AI tech could drastically decrease the skill bar to perform cyber attacks
  • Walle – So the examples you generated aren’t illegal, but do you see a scenario where states need to implement some kind of policing? How could we police this at a state level?
    • Hansen – Pretty difficult, many states have deepfake bills that do require malicious intent which is difficult to prove
    • A lot of the work needs to be done at the distributor level, e.g. social media should be labeling images as AI if they can detect it; currently it is relatively easy to detect if an image is AI
    • Meta is planning to roll this out on Facebook and Instagram, but this should be done universally
  • Walle – DTPA can be enforced by the AG’s Office or a private citizen; would you recommend policing be civil or criminal?
    • Hansen – For deepfakes, probably civil
    • EU has GDPR, CA has the CCPA, etc., many companies implemented the CCPA despite relatively few CCPA cases making it to court
  • Walle – Re: ElevenLabs, what liability did they face for anything?
    • Hansen – As far as I know there haven’t been any consequences
    • ElevenLabs started banning voices of Biden & Trump, so they clearly can; wish someone would contact them to get them to go further
  • Chair Capriglione – A negative consequence does stop most bad actors and can also help guide behavior for legitimate actors
  • Leach – Agree that there needs to be negative consequences, also a question of if local DAs have the tools to prosecute
  • Chair Capriglione – Sure, and if you look at CCPA, enforcement resources were at the state level
  • Leach and Capriglione discuss enforcement of generated false political attacks
  • Chair Capriglione – Need a mechanism to stop distribution, prosecution comes later
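
Illustrative note (not part of the testimony): a toy sketch of Hansen’s point that watermarking raises the skill bar but can be defeated. The zero-width-character scheme below is an assumption for illustration, not any vendor’s method; stripping the invisible characters removes the mark in one line.

```python
# Toy text watermark: encode a tag as invisible zero-width characters
# appended to generated text. Removing those characters defeats it,
# illustrating why watermarking raises effort rather than guaranteeing provenance.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text: str, tag: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract(text: str) -> str:
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8
    return bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8)).decode(errors="ignore")

marked = embed("An AI-generated caption.", "gen-ai")
print(extract(marked))                                    # 'gen-ai'
print(extract(marked.replace(ZW0, "").replace(ZW1, "")))  # '' -- mark stripped
```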

 

Drew Hamilton, Texas A&M Cybersecurity Center

  • AI has been discussed since the 1960s, but hasn’t had the computing power behind it that we have now; highlights ELIZA system in the 1960s that was therapy-focused and limited in other types of information
  • Systems currently are more responsive, but still the same idea where you’re formatting a query to generate a response
  • Highlights how AI tools like ChatGPT can deliver wrong answers when it looks at the wrong data; important to consider what data AI is drawing from and what applications you’re trying to solve
  • Large data stores poisoned with bad info can generate impressive results that are wrong
  • Potential concerns include training AI to present false responses and poison the dataset; provides example of data indicating 10% of service members are not deployable influencing armed conflicts
  • Many AI systems will not let you use them for cyber attacks, but this can be sidestepped with prompt engineering
  • Can also do a variation of DDoS attacks where you continuously poll AI with questions that require large amounts of computing power, shutting out functionality (see the brief sketch at the end of this section for one common mitigation)
  • Highlights other issues like social engineering, hostile foreign actors standing up entities to take advantage of government contracts & gain access to government systems, etc.
  • Data integrity, availability, and confidentiality are key principles; integrity can be insidious because the data stores are too large to check
  • Highlights problem of large companies collecting info on you and then this info getting compromised
  • Chair Capriglione – CFTC sent a request to OpenAI to describe where they are getting their data from; can you explain how this becomes a problem and data poisoning?
    • Hamilton – If you’re trying to sabotage large agricultural data stores, manipulating this data can have systems make responsible-looking recommendations based on faulty data
    • Personnel comes to mind; if you’re looking for AI experts and your database has incorrect data, you may not be able to find the expertise needed
  • Chair Capriglione – Some may be incorrect deliberately
    • Hamilton – Right, and some may be incidental
  • Chair Capriglione – How is generating malicious content easier? In what way are the good guys combatting this? E.g. firewalls
    • Hamilton – Firewalls can help give you a break after an attack, there are a limited number of ports to connect to that are trivial for computers to check and AI could dynamically remap them to help mitigate; would be interesting to see if an AI could detect the remapping pattern also
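
Illustrative note (not part of the testimony): a minimal sketch of one common mitigation for the resource-exhaustion pattern Hamilton describes, i.e. budgeting each client’s compute-heavy queries with a token bucket so repeated expensive prompts cannot crowd out other users. The rate, capacity, and cost values are placeholder assumptions.

```python
# Token-bucket admission control: each request spends "tokens" proportional
# to its estimated compute cost; tokens refill at a fixed rate per client.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

buckets: dict[str, TokenBucket] = {}

def admit(client_id: str, estimated_cost: float) -> bool:
    """Reject the request if this client has exhausted its compute budget."""
    bucket = buckets.setdefault(client_id, TokenBucket(rate=1.0, capacity=10.0))
    return bucket.allow(estimated_cost)

print(admit("client-a", 3.0))   # True  -- within budget
print(admit("client-a", 20.0))  # False -- exceeds the remaining budget
```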

 

Justin Brookman, Consumer Reports

  • Provides overview of Consumer Reports, focused on issues like company accountability, right to repair, etc.
  • Not expecting Congress to meaningfully pass AI regulation in the near term, states will likely continue to be the drivers of policy
  • Most things are software driven and incorporate AI which makes it difficult to test systems, e.g. 10 questions producing 10 different responses makes it hard to check for errors
  • Many companies with neural nets don’t know why their systems are generating certain answers
  • Companies can also restrict access from those trying to figure out the logic behind targeted ads, etc.; makes it difficult for independent auditors to test systems & there is an argument that this is illegal
  • Clear you can’t deceive consumers, but ambiguous if you can deceive testers who are working on behalf of consumers
  • Many times companies will say you can’t test them in their T&Cs
  • Highlights NYC hiring algorithm law that reqs these to be audited for fairness, but saw little compliance
  • Consumer Reports also uses AI to ensure systems are effective; AI is an amazing tool but should be some rules & reqs in place to make sure it is used fairly
  • Should notify consumers if AI is being used
  • Fairness and bias is another concern, e.g. facial recognition performing worse on people with darker skin; sometimes well-intentioned systems can perform in unintended ways
  • Another protection is to tell people when AI systems are making decisions about people, e.g. if denied a loan based on AI decision; human review has been a common policy recommendation in similar cases
  • Privacy is another concern, TX’s law is a good start but could be improved; one recommendation is data minimization, e.g. only using data for what the consumer asked for
  • Have also recommended prohibition on AI generated images, but there is a question of liability; should it be the requestor, the company, the company that serves the content? Etc.
  • AI voice cloning is another example & should be considered for question of liability
  • Important to think about bad actions that should be policed, won’t get everything, but important to consider
  • Could have companies make data available for testers
  • IP is another issue to consider, e.g. AI uses public data, but data wasn’t made public for these purposes
  • Won’t be able to keep up with resources of big federal agencies or private companies, but need more resources & technologists who understand the issues
  • Chair Capriglione – With privacy and AI, data used in perpetuity to generate AI materials, etc.
    • Brookman – Most T&Cs are vague and do a really poor job outlining usage and rights
    • One solution is a rule requiring data minimization, can also implement opt-out rights
    • Opt-out can be made more usable by having it be global opt-outs done through browsers, etc.; TX law does have a provision for this, but it is not yet in effect (see the brief sketch at the end of this section)
    • Don’t want to put all the burden on people, people can’t micromanage their security and shouldn’t have to micromanage their privacy
    • Could have a law outlining what data can be used for, etc.; will need to iterate on these laws over time
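
Illustrative note (not part of the testimony): a minimal sketch of what honoring a browser-based global opt-out could look like. It assumes the Global Privacy Control signal arriving as a “Sec-GPC: 1” request header; how a business must respond to that signal depends on the applicable statute, so the mapping below is hypothetical, not legal guidance.

```python
# Treat a Sec-GPC: 1 request header as a universal opt-out of sale/sharing
# for targeted advertising. HTTP header names are case-insensitive, so
# normalize before checking.
def honors_global_opt_out(headers: dict[str, str]) -> bool:
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

print(honors_global_opt_out({"Sec-GPC": "1"}))  # True  -> suppress sale/sharing
print(honors_global_opt_out({}))                # False -> default handling applies
```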

 

Renzo Soto, TechNet

  • Provides overview of TechNet, works with a range of tech companies & works to promote innovative tech agenda at state & federal levels
  • Intentional approach where stakeholders and policymakers work together is crucial
  • TechNet members have been leaders on responsible use of AI and other tech; AI can help virtually every sector in the state
  • AI can help with health care, can remove bottlenecks like data input, models can also help with medicine and assist in diagnosis, AI navigators can help patients access info, etc.
  • In agriculture, can do yield prediction, pest & disease prevention, etc.
  • Over 200 AI-based ag startups in the US
  • In education & workforce dev, AI will be highly effective in this space and can help fill jobs with the right talent; Texans will need more skills to take full advantage of AI tools
  • AI-powered learning assistants can help K-12 students learn, in higher ed colleges & universities are running courses on programming, prompt engineering, etc.
  • AI will also help employers and Texans to navigate the job market
  • Industry is working with state & federal policy makers
  • Industry has been looking at political deepfakes, TX passed a bill criminalizing deepfake election videos with malicious intent & companies are exploring policies like this; companies are also developing deepfake detection tech
  • Industry would like to collaborate to address some of these issues & develop policies
  • Provides written material showcasing actions by tech industry participants to address some of these problems
  • Advocating for a federal framework that provides uniformity, but absent this recommend interoperability between states
  • Policy recommendations incl. 1) avoid blanket prohibition on AI, machine learning or automated decision making; narrowly tailor harms to specific cases, 2) limit creating new authorities and utilize existing authority under state law, 3) limit to high-risk use cases
  • Other policy recommendations included in written material
  • NIST has been building an AI development framework and is working on other things like red teaming framework, etc.; should consider this work in developing policy
  • Chair Capriglione – On liability and risk, if an AI chatbot or therapist gives mistaken or hallucinated advice and person harms themselves, who is responsible?
    • Soto – Regulation should focus on high-risk use cases & this would qualify
    • Should look at how companies have deployed the tool and the sincere efforts to develop something to be used for good
    • Should look at existing laws for bias, malpractice, etc.
  • Leach – TX has policy positions in place that certain websites like sports gambling, etc. are unlawful, have age-verification requirements for pornography websites, etc., can this be done for certain online AI tools?
    • Soto – Crucial to look at how the human is interacting with the AI tool, would need to look at the specific occupation or use case where the harmful or malicious act occurs
  • Leach – Legislature in 2017 or 2019 also addressed cyberbullying among minors, didn’t ban the websites but created a penalty for cyberbullying, so wondering if we could target wrongful acts?
    • Soto – Yes, this is what I’m referring to with high-risk use cases; can provide technical guidance on some things
  • Walle – You mentioned the opportunities for young people in education & workforce, will need a lot of people to do this work or monitor this work; are you aware of any programs that are teaching students to do this?
    • Soto – Yes, UT offers a certificate program; good way to integrate AI into CS is to insert this into the CE for teachers of these courses
  • Walle – Where would you categorize cybersecurity vs. AI?
    • Soto – One and the same in terms of safe development and deployment, best practice is to consider both, do red teaming, test, etc.
    • Industry is at the same time creating tools to help detect & should be integrating this in development and deployment
  • Walle – One of the primary players in developing instruction was industry, which wants jobs at the end of the effort; wondering if there are efforts to tie in industry
    • Soto – Highlights industry efforts to encourage these relationships; can connect offline to speak more about this

 

Matthew Scherer, Center for Democracy and Technology

  • Highlights article from Arvind Narayanan called How to Recognize AI Snake Oil https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
  • Difficult to reduce key facets of most jobs to factors recognizable by a machine, AI will typically do a poor job & automating the decision will not lead to good results; will more likely pick up on society’s pre-existing biases
  • Many AI workforce/hiring techs are “snake oil” and do not accurately or consistently select appropriate candidates
  • Need transparency, workers need to know when and how they are evaluated & should be aware of factors leading to the decision
  • Innovation is not always good, only good when it serves the interest of consumers and workers; error-prone AI systems are not something we should be clamoring for
  • Chair Capriglione – Interesting that companies are using AI to write descriptions, employees are using AI to write resumes, etc.; some companies are doing one-sided interviews where videos serve as question responses; AI is talking to AI

 

Kevin Welch, EFF-Austin

  • Easy with a transformative tech to want to change things, but important to consider if you will make things better or worse; would encourage caution and carefully tailoring legislation
  • Important to keep AI in proportion & ensure that people’s rights are protected; would hate to see deepfakes criminalized in a way that also criminalizes parody
  • Important to not be caught up in sci-fi hypotheticals about what AI can do, harm would more likely result from humans putting AI in charge of something it shouldn’t be; IBM presentation in the 1980s noted that AI can’t be held accountable so shouldn’t make command decisions
  • AI is already used in many questionable cases, e.g. in criminal justice or education with false accusations, job denials; plenty of harms from AI now that are ongoing & should look at these known ongoing harms vs. harms that have not yet been proven
  • Should look at unethical practices of some of the data brokers, would hope HB 4 can be strengthened to go after resellers of data as well
  • Chair Capriglione – Highlights social credit system where things people do or who they associate with are used to reduce participation in society
    • Welch – These practices go on in the US more than we would like and are often tied to socio-economic status
    • Internet seems to operate only on two models, pay and don’t be spied on, or don’t pay and be spied on

 

Public Testimony

Andrew Cates, Self

  • Wrote a legal practices guide for Texas ethics & campaigns
  • After the Biden robocall, FCC made AI robocalls illegal but bad actors can get around this by obtaining prior consent and often do this through deceptive means
  • In TX, have a good head start with data privacy
  • Besides political advertising, should also take a look at Chapter 305 of Gov Code that regulates lobbying in areas like political advertising, grassroots communications, etc.; e.g. statute prohibits false statements from lobbyists, but not from anyone else
  • Texas Bar Journal articles recently were about AI; there are federal test cases stating that output of generative AI cannot be copyrighted because it was not generated by a person; this is interesting because many TX criminal laws rely on a person acting & if the output is not from a person it could lead to legal applicability questions
  • Chair Capriglione – Would like to hear from you after this with the ethics Sunset bill and further hearings
    • Cates – True source of info bill with deepfake language, would recommend amending this for photos, videos, texts, audio, etc.
    • Speaker was hit with a fake image and it wasn’t against the law in TX because it was a still image rather than a video
  • Chair Capriglione – Was multiple images and happened 33 days before the election, so skirted the law; reason it should be illegal is that it is unfair and meant to deceive
    • Cates – Would recommend making this a felony as well, e.g. corporate donation ban is a felony and everyone is terrified of it

 

Closing Comments

  • Chair Capriglione – Speaker Phelan asked for a report by May 16th, so members should expect draft material soon