Background

On March 24, Kentucky Governor Andy Beshear signed SB 4 into law. As discussed below, it is perhaps the nation’s strongest law regulating a state government’s use of artificial intelligence. The law also allows candidates who are the victims of deepfakes to obtain an injunction against the deepfake’s sponsor.

The bill’s primary sponsors are Senate Republicans Amanda Mays Bledsoe and Brandon Storm. The bill passed both Republican-controlled chambers by overwhelming margins: 30-3 in the Senate and 86-10 in the House.

Presenting the bill on the Senate floor, Senator Mays Bledsoe stated that the bill “establishes a risk-based artificial intelligence governance framework to protect citizens, foster innovation in the state government, and address concerns related to AI-generated misinformation in campaigns and elections. This bill is a critical first step to ensure that AI is deployed correctly and within necessary parameters. As artificial intelligence becomes more integrated into government operations, we need to establish clear guidelines now to protect Kentuckians into the future because it’s going to be a challenge, honestly, to keep up with the technology as it is. Senate Bill 4 ensures that AI is used transparently, responsibly, and with human accountability at every level.” She continued: “And while this sets a strong foundation for AI governance in the public sector, I encourage Congress to take a similar action for the private sector, ensuring efficiency, innovation and, most importantly, the safety and privacy of American citizens.”

Of note, Kentucky is not the first state to enact a law regulating the state’s use of AI. Laws previously were passed in Vermont (H.410) and Connecticut (SB 1103).

Summary

Procedures for Responsible, Ethical and Transparent Use of AI

The law first expands the roles and duties of the office of technology to include establishing, publishing, maintaining, and implementing “comprehensive policy standards and procedures for the responsible, ethical, and transparent use of generative artificial intelligence systems and high-risk artificial intelligence systems” by state departments, agencies and administrative bodies.

The standards and procedures must govern the procurement, implementation, and ongoing assessment of such technologies; address data security and privacy and provide related resources; and create guidance for acceptable use policies for integrating high-risk AI systems.

The law defines high-risk AI systems to mean “any artificial intelligence system that is a substantial factor in the decision-making process or specifically intended to make, or be a substantial factor in making a consequential decision.” That definition is similar to the one used in the Colorado AI Act, i.e., “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision.”

The Kentucky law carves out some AI systems from its definition of high-risk AI system. Specifically, the law excludes “a system or service intended to perform a narrow procedural task, improve the result of a completed human activity, or detect decision-making patterns or deviations from previous decision-making patterns and is not meant to replace or influence human assessment without human review, or perform a preparatory task in an assessment relevant to a consequential decision.” The Colorado AI Act includes most of those exemptions (it does not exempt improving the result of a completed human activity) but goes further, excluding technologies such as calculators, cybersecurity tools, data storage, firewalls, and networking, none of which are excluded under Kentucky’s law.

Further, the Kentucky law does not define the phrase “substantial factor.” That phrase is used and defined in the Colorado AI Act, where its definition has been the subject of ongoing debate. Essentially, the phrase delineates the respective roles of the human and the high-risk AI system and determines when the law applies. Virginia’s vetoed bill, for example, took a narrower approach, requiring that the AI system be used as the “principal basis” for making the decision.

The Kentucky law does define “consequential decision” broadly to mean “any decision that has a material legal or similarly significant effect on the provision or denial of services, costs, or terms to any citizen or business.” So understood, the definition appears to cover numerous types of government services such as employment, healthcare, housing, education, and legal services.

AI Governance Committee - Policies and Procedures

The law next requires the office of technology to create an AI Governance Committee, which is charged with governing the state’s use of AI systems. The committee must (1) develop policy standards and guiding principles to mitigate risks and protect the data and privacy of Kentucky citizens; (2) establish technology standards providing protocols and requirements for the use of gen AI and high-risk AI systems; (3) ensure transparency in the use of AI systems; (4) maintain a centralized registry of gen AI and high-risk AI systems; and (5) develop an approval process that includes a registry of each application, use case, and decision rationale aimed at mitigating risks.

The committee also must develop policies and procedures to ensure that state agencies and bodies verify their use of gen AI systems and high-risk AI systems and comply with responsible, ethical, and transparent procedures when implementing AI technologies. The law specifically identifies three activities: (1) ensuring AI models have comprehensive and complete documentation; (2) requiring human review and intervention depending on the use case and potential risk; and (3) ensuring that gen AI and high-risk AI systems are resilient, accountable, and explainable.

Document Unlawful Discrimination

The office of technology is also required to consider and document, among other things, how an AI system will avoid unlawful discrimination against any individual or group of individuals; how the AI system will benefit Kentucky citizens; the extent of human oversight and interaction; and the potential risks and mitigation strategy.

Public Disclosures, including Right to Appeal Decisions

State agencies and bodies using gen AI, AI systems, or “other artificial intelligence-related capabilities” must provide clear and conspicuous disclaimers when those systems are used (1) to render “any decision” regarding Kentucky citizens or businesses; (2) in any process, or to produce materials used by the system or humans to inform a decision or create an output; or (3) to produce information or outputs accessible by citizens and businesses. Disclaimers must also include information such as system cards or other information provided by developers.

When an AI system “makes external decisions related to” Kentucky citizens, the state must disclose how the AI system was used in the decision-making process, provide the extent of human involvement, and “make readily available options for individuals to appeal a consequential decision that involves artificial intelligence.” Of course, the right to appeal an adverse consequential decision is a hallmark of the Colorado AI Act.

Risk Management Policy

The law also provides that high-risk AI systems cannot be used to render consequential decisions absent the design and implementation of a risk management policy and program. The risk management policy must specify the principles, processes, and personnel used to maintain the program and to identify, mitigate, and document any bias or potential bias in making a consequential decision.

Annual Reports

The office of technology is required to produce annual reports that include, among other things, an AI registry.

Elections Disclosures

Finally, the law allows any candidate for elected office whose appearance, action, or speech is altered through the use of synthetic media in an electioneering communication to seek injunctive or other equitable relief against the communication’s sponsor. The relief would require the communication to include a disclosure that is clear and conspicuous and included in, or alongside and associated with, the content in a manner likely to be noticed by the user.
