December 30, 2024

Welcome to the second edition of Byte Back AI, a weekly newsletter providing updates on proposed state AI bills and regulations, an AI bill tracker chart, summaries of important AI hearings, and special features. Starting January 6, 2025, Byte Back AI will be available only to paid subscribers. For more information on subscriptions, please click here.

As always, the contents provided below are time-sensitive and subject to change. 

Table of Contents

  1. What’s New

  2. AI State Bill Tracker Chart

  3. Summaries of Notable AI Hearings

  4. Special Features

  1. What’s New

In New York, Governor Hochul signed the LOADinG Act (S7543 / A9430) last week. When the New York legislature passed the bill in June, it was characterized as first-in-the-nation legislation that “is the most advanced and meaningful regulatory approach in the nation regarding procurement and usage of AI.” The law, which we discuss in greater detail in our “Special Features” section, regulates the state’s use of automated decision-making systems.

We also continued to see lawmakers prefile more AI bills across the country.

In New York, Senator Hoylman-Sigal prefiled S185, which seeks to regulate employers’ and employment agencies’ use of automated decision-making technologies. The New York legislature opens January 8.

In Texas, on December 23, Representative Giovanni Capriglione prefiled the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) (HB 1709). The bill covers high-risk AI systems and is modeled, in part, on Connecticut’s SB 2 from last year. Since a draft of the bill was circulated in October, it has been a hot topic of discussion among state lawmakers, consultants, and AI developers across the country. The Texas legislature opens January 14.

In Missouri, Representative Christensen prefiled HB 673, which would require political ads produced using AI to include a disclosure. This is Missouri’s fourth prefiled AI bill and the second one regulating AI use in elections. The Missouri legislature will convene on January 8.

In Washington, lawmakers prefiled the state’s first two AI-related bills last week – SB 5094 and SB 5105. Both bills deal with synthetic depictions of minors. The Washington legislature is set to convene on January 13.

New Jersey lawmakers prefiled A5164, which would specifically regulate artificial intelligence in the news media industry and establish the “Artificial Intelligence in Communications Oversight Committee.” They also prefiled SR 121/AR 158, which would urge generative artificial intelligence companies to ensure whistleblower protections for employees. While these resolutions are unique, they are not the first pieces of legislation to provide whistleblower protections. Recall that California’s vetoed SB 1047 would have provided whistleblower protections, specifically preventing AI companies from “retaliating against an employee for disclosing information… [which] indicates the developer is out of compliance with certain requirements or that the covered model poses an unreasonable risk of critical harm.” Whistleblower concerns have also been raised in the Colorado AI Impact Task Force hearings.

Finally, on December 24, the Oregon Department of Justice issued a five-page guidance memorandum explaining how existing Oregon consumer protection laws apply to the use of AI by businesses. The Department noted that Oregon’s Unlawful Trade Practices Act (UTPA) prohibits misrepresentations in consumer transactions. The Department observed that the “novelty or complexity of AI systems does not exempt their marketing, sale, or use from the reach of Oregon’s UTPA. Companies developing, selling or deploying AI technology, including AI technology like chatbots to communicate with consumers, should ensure that those tools provide accurate information to consumers and may be liable if they do not.” Among other things, the Department explained that an AI developer or deployer can violate the UTPA if it knows or has reason to know that its product generates false or misleading information, employs a chatbot while falsely representing that it is human, or publishes fake reviews generated by AI.

Turning to Oregon’s recently effective data privacy law, the Department observed, among other things, that “[d]evelopers that use personal data to train AI systems must clearly disclose this in an accessible and clear privacy notice” and “obtain explicit consent from consumers” before using sensitive data to develop or train AI models. The Department also stated that a developer that purchases or uses another company’s data set for model training “may” be considered a controller and therefore have to comply with the law’s controller obligations. Further, the Department cautioned that data suppliers and developers “cannot legitimize the use of previously collected personal data to train AI models by retroactively or passively altering privacy notices or terms of use. Instead, they must obtain affirmative consent for any new or secondary uses of that data.”

Lastly, the Department observed that Oregon’s Equality Act already prohibits the use of AI for discriminatory purposes.

  2. AI State Bill Tracker Chart

Click here to see our latest AI state bill tracker chart.

  3. Summaries of Notable AI Hearings

There were no notable AI hearings last week.

  4. Special Features

This week’s special feature is a closer look at the New York LOADinG Act (S7543 / A9430), which Governor Hochul signed last week. The Act, sponsored by Senator Kristen Gonzalez and Assemblyman Steve Otis, passed the New York legislature in June but was not presented to the governor until December.

The Act essentially does three things. First, it requires state agencies to ensure meaningful human review when using automated decision-making systems (ADMS) for certain government functions. Second, it requires agencies to prepare impact assessments for those activities and submit them to the governor and the legislature. Third, it protects employees who are subject to collective bargaining agreements from negative impacts of ADMS use.

Meaningful Human Review

The Act first prohibits state agencies, or entities acting on their behalf, from using ADMS, or from procuring, purchasing, or acquiring any service or system utilizing or relying on ADMS, to perform certain functions (discussed below) unless the systems are “subject to continued and operational meaningful human review.”

“Meaningful human review” is defined as “review, oversight and control of the automated decision-making process by one or more individuals who understand the risks, limitations, and functionality of, and are trained to use, the automated decision-making system and who have the authority to intervene or alter the decision under review, including but not limited to the ability to approve, deny, or modify any decision recommended or made by the automated system.”

The government functions covered by the Act are those that relate to the delivery of any public assistance benefit, have a material impact on the rights, civil liberties, safety, or welfare of any individual within the state, or affect any statutorily or constitutionally provided right of an individual.

The Act defines ADMS broadly to mean “any software that uses algorithms, computational models, or artificial intelligence techniques, or a combination thereof, to automate, support, or replace human decision-making.” It includes “systems that process data, and apply predefined rules or machine learning algorithms to analyze such data, and generate conclusions, recommendations, outcomes, assumptions, projections, or predictions without meaningful human discretion.” The definition excludes “any software used primarily for basic computerized processes, such as calculators, spellcheck tools, autocorrect functions, spreadsheets, electronic communications, or any tool that relates only to internal management affairs such as ordering office supplies or processing payments, and that do not materially affect the rights, liberties, benefits, safety or welfare of any individual within the state.”

Impact Assessments

Agencies seeking to use an ADMS subject to meaningful human review must first conduct an impact assessment. The impact assessment can be conducted by the agency or another entity. It must be conducted at least every two years, or whenever there is a material change to the ADMS. The law outlines the specific criteria that must be part of the assessment. If the assessment finds that the ADMS “produces discriminatory or biased outcomes,” the agency must cease using the ADMS. Impact assessments must be submitted to the governor and legislative leadership thirty days in advance of using the ADMS.

Agencies already using ADMS must provide a list of those systems to the legislature within one year after the Act’s effective date.

Employee Protections

Finally, the law provides that the use of an ADMS shall not affect the right of employees pursuant to existing collective bargaining agreements or existing “representational relationships among employee organizations or the bargaining relationships between the employer and an employee organization.”

The use of an ADMS also cannot result in the: “(1) discharge, displacement or loss of position, including partial displacement such as a reduction in the hours of non-overtime work, wages, or employment benefits, or result in the impairment of existing collective bargaining agreements; (2) transfer of existing duties and functions currently performed by employees of the state or any agency or public authority thereof to an automated decision-making system; or (3) transfer of future duties and functions ordinarily performed by employees of the state or any agency or public authority.”