By Rachel V. See and Annette Tyman
Seyfarth Synopsis: The Office of Management and Budget (OMB) finalized its guidance to federal agencies regarding the risk management steps the federal government must take when using artificial intelligence. OMB’s guidance establishes the minimum AI risk management practices federal agencies must follow for “safety-impacting” and “rights-impacting” AI applications and includes a broad list of employment-related applications that are presumed to be “rights-impacting.” The AI risk management practices OMB is requiring for these “rights-impacting” employment applications are broadly consistent with principles that leaders from the Department of Labor and the EEOC have been discussing, and we expect future guidance and “promising practices” from those agencies will closely align with the principles outlined in today’s OMB guidance.
On March 28, 2024, OMB finalized and published Memorandum M-24-10, “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.” This document contains OMB’s guidance to federal agencies regarding the risk management steps the federal government must take for its own use of artificial intelligence.
As we previously wrote when OMB issued a draft of this guidance, private-sector employers using AI should pay attention to how the federal government thinks about AI risk and to the AI risk management practices it begins implementing. Moreover, because the federal government is a large-scale purchaser of AI systems from private-sector developers, its purchasing requirements will influence the development of systems sold both to the government and to private industry.
While OMB’s final guidance purports to speak solely to the federal government’s own use of AI, and clarifies that it does not apply to federal agencies’ own regulatory efforts, past experience suggests that the federal government will ultimately decide that the AI risk management “best practices” it applies to itself should also be adopted by private-sector AI deployers. We also predict that these governmentwide AI risk management principles will influence the EEOC’s thinking on AI risk and risk management; at a minimum, these principles will shape EEOC executives’ experience with AI risk management concepts as the EEOC starts using AI in its internal processes.
Today’s AI guidance from OMB establishes minimum risk management practices for “safety-impacting” and “rights-impacting” AI. Importantly for private-sector employers using AI in hiring, M-24-10 contains a very broad list of employment-related AI applications that are presumed to be “rights-impacting” and thus subject to the memo’s minimum risk management processes. This list includes AI applications that “control or significantly influence the outcome[] of”[1]:
Determining the terms or conditions of employment, including pre-employment screening, reasonable accommodation, pay or promotion, performance management, hiring or termination, or recommending disciplinary action; performing time-on-task tracking; or conducting workplace surveillance or automated personnel management;
Notably, in addition to those employment-specific applications, M-24-10’s list of presumptively “rights-impacting” AI also includes applications used for “biometric identification for one-to-many identification in publicly accessible spaces” and any application that seeks to “Detect[] or measur[e] emotions, thought, impairment, or deception in humans.” The inclusion of these categories in M-24-10 echoes similar categorizations of “unacceptable risk” in the recently finalized EU AI Act, highlighting ongoing convergence on AI risk issues.
As discussed in Seyfarth’s March 25, 2024 client update, leaders from the Department of Labor, including the Solicitor of Labor and the Acting Director of OFCCP, have recently confirmed that they will be issuing a “broader value-based document” that contains “principles and best practices” for both employers using AI and developers of AI tools. Core concepts from the principles these leaders mentioned, such as the need for stakeholder engagement, validation and monitoring, and greater transparency, are also emphasized in OMB’s final guidance issued today, and in risk-management documents that others in the federal government have championed, such as the AI Risk Management Framework (RMF) issued by the National Institute of Standards and Technology (NIST).
Regarding the need for stakeholder engagement for rights-impacting systems, OMB is directing federal agencies to consult affected communities and solicit public feedback “in the design, development, and use of the AI and use such feedback to inform agency decision-making regarding the AI”; that consultation must include “seeking input on the agency’s approach to implementing … minimum risk management practices”.
Regarding testing, validation, and monitoring, OMB is requiring agencies to “conduct adequate testing to ensure the AI, as well as components that rely on it, will work in its intended real-world context.” OMB is also requiring an “independent evaluation” of the AI “to ensure that the system works appropriately and as intended, and that its expected benefits outweigh its potential risks,” and further requires that “[t]he independent reviewing authority must not have been directly involved in the system’s development.” Notably, however, OMB does not require the independent reviewing authority to come from outside the agency itself.[2]
OMB is also requiring federal agencies to conduct ongoing monitoring to detect both “degradation of the AI’s functionality” and “changes in the AI’s impact on rights and safety”. OMB further directs that the monitoring process include an annual human review “to determine whether the deployment context, risks, benefits, and agency needs have evolved,” and emphasizes that this human review must include testing and “include oversight and consideration by an appropriate internal agency authority not directly involved in the system’s development or operation”.
In her remarks at an American Bar Association meeting on March 20, 2024, EEOC Chair Charlotte Burrows discussed her concerns regarding the data used to train AI systems. Consistent with remarks she has made in the past, Chair Burrows cited concerns that people with protected characteristics were disproportionately over-represented in “bad” data sets that were being used as AI training data. She also cited concerns that these same categories of people were disproportionately under-represented in “good” data sets being used to train AI.
The data-quality concepts mentioned by Chair Burrows are present in OMB’s guidance to federal agencies regarding their own use of AI. For “rights-impacting” AI applications, OMB directs federal agencies to “assess the quality of the data used in the AI’s design, development, training, testing, and operation and its fitness to the AI’s intended purpose.” Among other things, OMB directs federal agencies to document the quality and representativeness of the data for its intended purpose, and if the agency is using an AI tool provided by a vendor, the agency “must obtain sufficient descriptive information from the vendor” about the data. OMB’s directive to federal agencies further cautions, “[a]gencies should assess whether the data used can produce or amplify inequitable outcomes as a result of poor data representativeness or harmful bias. Such outcomes can result from historical discrimination, such as the perpetuation of harmful gender-based and racial stereotypes in society.”
Implications for Employers
While the Department of Labor’s deadline to publish “principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits” is still a month away, at the end of April 2024, we anticipate that OMB Memorandum M-24-10 provides a sneak peek at the “principles and best practices” the Department of Labor will issue to private employers. While some variance between public-sector and private-sector practices is reasonable to expect, we are likely to see similar themes across the documents.
At a minimum, employers already using AI in their labor and employment practices should evaluate how their current AI risk management practices align with OMB’s guidance establishing minimum requirements for the federal government’s own use of AI.
We will continue to monitor these developing issues, especially as the Department of Labor and other agencies continue their work to issue AI guidance and other documents set forth in President Biden’s executive order on AI. For additional information, we encourage you to contact the authors of this article, a member of Seyfarth’s People Analytics team, or any of Seyfarth’s attorneys.
[1] Notably and unsurprisingly, the scope of this list, and the operation of the “control or significantly influence” language, is significantly broader than that of New York City’s Local Law 144, for which enforcement began in July 2023. Among other things, to constitute an “automated employment decision tool” under New York City’s law, the tool must “substantially assist or replace discretionary decision-making”.
[2] This view of independence is consistent with prior guidance issued in 2011 by the Federal Reserve Board in SR 11-7, its seminal “Supervisory Guidance on Model Risk Management” applicable to the financial industry.