By Owen Wolfe, Eddy Salcedo, and Jamie Anderson

Seyfarth Synopsis: You may have recently seen press reports about lawyers who submitted papers to the federal district court for the Southern District of New York that included citations to cases and decisions that, as it turned out, were wholly made up; they did not exist.  The lawyers in that case used the generative artificial intelligence (AI) program ChatGPT to perform their legal research for the court submission, but did not realize that ChatGPT had fabricated the citations and decisions.  This case should serve as a cautionary tale for individuals seeking to use AI in connection with legal research, legal questions, or other legal issues, even outside of the litigation context.

In Mata v. Avianca, Inc.,[1] the plaintiff brought tort claims against an airline for injuries allegedly sustained when one of its employees hit him with a metal serving cart.  The airline filed a motion to dismiss the case. The plaintiff’s lawyer filed an opposition to that motion that included citations to several purported court decisions in its argument. On reply, the airline asserted that a number of the court decisions cited by the plaintiff’s attorney could not be found, and appeared not to exist, while two others were cited incorrectly and, more importantly, did not say what plaintiff’s counsel claimed. The Court directed plaintiff’s counsel to submit an affidavit attaching the problematic decisions identified by the airline.

Plaintiff’s lawyer filed the directed affidavit, and it stated that he could not locate one of the decisions, but claimed to attach the others, with the caveat that certain of the decisions “may not be inclusive of the entire opinions but only what is made available by online database [sic].”[2]  Many of the decisions annexed to this affidavit, however, were not in the format of decisions that are published by courts on their dockets or by legal research databases such as Westlaw and LexisNexis.[3]

In response, the Court stated that “[s]ix of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations”[4], using a non-existent decision purportedly from the Eleventh Circuit Court of Appeals as a demonstrative example.  The Court stated that it contacted the Clerk of the Eleventh Circuit and was told that “there has been no such case before the Eleventh Circuit” and that the docket number shown in the plaintiff’s submission was for a different case.[5] The Court noted that “five [other] decisions submitted by plaintiff’s counsel . . . appear to be fake as well.” The Court scheduled a hearing for June 8, 2023, and demanded that plaintiff’s counsel show cause as to why he should not be sanctioned for citing “fake” cases.[6]

At that point, plaintiff’s counsel revealed what happened.[7] The lawyer who had originally submitted the papers citing the non-existent cases filed an affidavit stating that another lawyer at his firm was the one who handled the research, which the first lawyer “had no reason to doubt.” The second lawyer, who conducted the research, also submitted an affidavit in which he explained that he performed legal research using ChatGPT. The second lawyer explained that ChatGPT “provided its legal source and assured the reliability of its content.” He explained that he had never used ChatGPT for legal research before and “was unaware of the possibility that its content could be false.” The second lawyer noted that the fault was his, rather than that of the first lawyer, and that he “had no intent to deceive this Court or the defendant.” The second lawyer annexed screenshots of his chats with ChatGPT, in which the second lawyer asked whether the cases cited were real. ChatGPT responded “[y]es,” one of the cases “is a real case,” and provided the case citation. ChatGPT even reported in the screenshots that the cases could be found on Westlaw and LexisNexis.[8]

This incident provides a number of important lessons. Some are age-old lessons about double-checking your work and the work of others, and owning up to mistakes immediately. There are also a number of lessons specific to AI, however, that are applicable to lawyers and non-lawyers alike.

This case demonstrates that although ChatGPT and similar programs can provide fluent responses that appear legitimate, the information they provide can be inaccurate or wholly fabricated. In this case, the AI software made up non-existent court decisions, even using the correct case citation format and stating that the cases could be found in commercial legal research databases. Similar issues can arise in non-litigation contexts as well.  For example, a transactional lawyer drafting a contract, or a trusts and estates lawyer drafting a will, could ask AI software for common, court-approved contract or will language that, in fact, has never been used and has never been upheld by any court. A real estate lawyer could attempt to use AI software to identify the appropriate title insurance endorsements available in a particular state, only to receive a list of inapplicable or non-existent endorsements. Non-lawyers hoping to set up a limited liability company or similar business structure without hiring a lawyer could find themselves led astray by AI software as to the steps involved or the forms needed to be completed and/or filed. The list goes on and on.

The case also underscores the need to take care in how questions to AI software are phrased. Here, one of the questions asked by the lawyer was simply “Are the other cases you provided fake?”[9] Asking questions with greater specificity could provide users with the tools needed to double-check the information from other sources, but even the most artful prompt cannot change the fact that the AI’s response may be inaccurate. That said, there are also many potential benefits to using AI in connection with legal work, if used correctly and cautiously. Among other things, AI can assist in sifting through voluminous data and drafting portions of legal documents.  But human supervision and review remain critical.

ChatGPT frequently warns users who ask legal questions that they should consult a lawyer, and it does so for good reason. AI software is a powerful and potentially revolutionary tool, but it has not yet reached the point where it can be relied upon for legal questions, whether in litigation, transactional work, or other legal contexts. Individuals who use AI software, whether lawyers or non-lawyers, should use the software understanding its limitations and realizing that they cannot rely solely on the AI software’s output.  Any output generated by AI software should be double-checked and verified through independent sources. When used correctly, however, it has the potential to assist lawyers and non-lawyers alike.

[1] Case No. 22-cv-1461 (S.D.N.Y.).

[2] Id. at Dkt. No. 29. 

[3] Id.

[4] Id. at Dkt. No. 31. 

[5] Id.

[6] Id.

[7] Id. at Dkt. No. 32.

[8] Id.

[9] Id.

About the Program: With the second quarter well underway, it’s a good time to dive deeper into the problems, risks, and legal developments that can impact the overall health of your organization for years to come. The evolving legal complexities affect all businesses, so every employer should periodically examine their employment practices so they can rest easier at night!

Please Join Us for this In-Person Event

Tuesday, June 13, 2023

8:30 a.m. to 9:00 a.m. Breakfast and Registration
9:00 a.m. to 10:30 a.m. Program

Seyfarth Shaw LLP
233 S Wacker Drive, Suite 8000
Chicago, IL 60606

Join us for our Q2 Breakfast Briefing on June 13. We are planning to examine:

  • Illinois Paid Leave for All Workers Act: Although SB208 does not take effect until January 1, 2024, NOW is the time to start preparing for this new legislation that will affect every employer in the State of Illinois. 
  • Self-Audit: Worried your company might have some undiagnosed concerns? Learn about when and how conducting regular audits can identify (and curtail) wage and hour and pay equity problems, ensure HR documents are current, and keep your team running effectively. 
  • EEOC Trends and Charge Data: Understand the EEOC’s new enforcement priorities (looking at you A.I.) and discuss ways to avoid the Agency’s scrutiny and reduce the risks of EEO claims. We’ll include tips on what your EEOC charge data should be telling you.

Speakers

Sara Eber Fowler, Partner, Seyfarth Shaw LLP
Matthew Gagnon, Partner, Seyfarth Shaw LLP
Andrew Scroggins, Partner, Seyfarth Shaw LLP

  • If you have any questions, please contact Sarah Gschwind and reference this event.

By Adam R. Young, Jennifer L. Mora, and Craig B. Simonsen

Seyfarth Synopsis: Across nationwide testing, marijuana positivity rates for 2022 reached 4.3% (up from 2.7% in 2017), with the biggest gains found in states that legalized recreational marijuana. 

Impairment and related hazards have been disrupting the workplace, resulting in lost time, absenteeism, safety hazards, and serious industrial accidents. We track annual positivity test reports from Quest Diagnostics, one of the country’s largest drug testing laboratories.  Quest’s recently released 2023 Drug Testing Index reveals that while positivity rates for some drugs declined, positivity rates for marijuana and amphetamines continue to climb.  Of the more than six million general workforce marijuana tests that Quest performed in 2022, 4.3% came back positive, up from 3.9% the prior year. Worse still, post-accident marijuana positivity of urine drug tests in the general U.S. workforce was 7.3%, a 9% increase over the 6.7% rate in 2021. While not entirely clear, it is possible that the widespread state legalization of marijuana has contributed to an increase in test positivity and also to workplace safety hazards.

Scientific testing indicates greater likelihood of errors in judgment and workplace accidents where an employee is impaired by marijuana. A National Safety Council white paper continues to recommend a Zero Tolerance Policy for marijuana in safety-sensitive positions.  Federal OSHA further advocates for post-accident drug testing as a legitimate part of a root cause analysis to determine the cause of an accident.  Employers have struggled to address the hazard of marijuana impairment at work and how best to protect workplace safety.

A problem for employers is that none of the scientifically valid drug tests for marijuana definitively prove whether a person is impaired at or near the time of an accident or the time they provide a specimen for testing. Moreover, state and local marijuana laws are making it increasingly difficult for employers to even consider or act on a positive marijuana test result.  Accordingly, employers looking to address drugs and alcohol in the workplace should work with outside counsel to ensure compliance with their current drug and alcohol testing programs. 

We have blogged previously on this topic. See for instance Method and Madness Behind New California and Washington Cannabis Laws; New Jersey Recreational Marijuana Law Provides Significant Employment Protections to Marijuana Users; And the Winner of the 2020 Election Is…Marijuana!; National Safety Council Endorses Zero Tolerance Prohibition on Cannabis/Marijuana for Safety-Sensitive Employees; Illinois Marijuana Legislation Update: Senate Bill Would Protect Employers’ Rights; Cal/OSHA Drafts Rules for the Marijuana/Cannabis Industry and Heat Illness Prevention in Indoor Places of Employment; Beware: Marijuana Businesses Targeted With Product Labeling Violation Letters; Marijuana Farm Employees Face Numerous Health Hazards; and New Jersey Cannabis Regulatory Commission Issues Guidance on “Workplace Impairment” Determinations.

For more information on this or any related topic please contact the authors, your Seyfarth attorney, or any member of the Workplace Safety and Health (OSHA/MSHA) or Workplace Policies and Handbooks Teams.

By Adam Rongo and Erin Dougherty Foley

Seyfarth Synopsis: Michigan college loses motion for summary judgment in former instructor’s age discrimination claims because the college provided “shifting justifications” over time for its decision not to interview instructor for a tenure-track position.

On May 5, 2023, U.S. District Judge George Caram Steeh of the Eastern District of Michigan denied Delta College’s motion for summary judgment in Bartoszek v. Delta College, 1:21-cv-11923, stating that the school “changed its story” about why 68-year-old instructor Edward Bartoszek was not interviewed for a tenure-track position.

Bartoszek was an adjunct professor at Delta College, where he taught classes in the Dental Hygiene Department and nursing program, and also taught several biology courses. In 2019, he applied for an opening for a full-time, tenure-track biology instructor at the college. The college selected a thirty-eight-year-old candidate, whom Bartoszek believed to be less qualified. Bartoszek asked why he was not considered for the position, and according to Judge Steeh’s opinion, the college initially did not provide him with any reason.

After losing out on the job opportunity, Bartoszek filed a charge of age discrimination with the Equal Employment Opportunity Commission (“EEOC”). In its position statement, Delta College denied that age was a factor in its decision. It claimed Bartoszek lacked the proper educational qualifications and relevant work experience. More specifically, Delta College noted that the chosen candidate had a master’s degree in biology, which was a requirement for the position, and that Bartoszek’s doctorate in dentistry did not meet that requirement. The college also claimed Bartoszek’s teaching experience was “limited to working as an adjunct at Delta College in the Dental Hygiene Program.”

According to Judge Steeh’s opinion, Delta College then changed its story, arguing in its motion for summary judgment that Bartoszek’s doctorate in dentistry did meet the educational requirements for the job. The college also acknowledged that Bartoszek taught several biology classes “performing the same job for which he was applying.”  However, in the litigation, the college instead focused on the deficiencies in Bartoszek’s application, such as his failure to submit transcripts, and that his cover letter and application were deficient because they did not clearly address his teaching experience and focused on his private sector accomplishments.

In denying Delta College’s motion, the Court stated “Defendant’s litany of discredited reasons casts doubt on its claim that other candidates simply ranked higher than Plaintiff based on their applications.” Quoting the Sixth Circuit, the Court noted “shifting justifications over time calls the credibility of those justifications into question.” Cicero v. Borg-Warner Auto. Inc., 280 F.3d 579, 592 (6th Cir. 2002). Because of the college’s contrary responses between its position statement and summary judgment motion, Bartoszek “sufficiently cast doubt” on Delta College’s reasons for failing to hire him, warranting a denial of the college’s summary judgment motion.

Impact on Employers

This case provides a cautionary tale, warning employers to thoroughly investigate a claim at the administrative level, and provide consistent answers throughout litigation. An employer can be held accountable and lose at summary judgment for stating one set of facts to an administrative agency and a different set of facts to a court after going through discovery. This is true even if no other evidence of discriminatory animus is present in the underlying facts.

Further, if a similar case proceeds to trial, some courts may allow the plaintiff to introduce the employer’s EEOC position statement into evidence because the shifting justifications may (under certain circumstances) provide an opportunity to impeach the decisionmaker. As a best practice, thoroughly investigating claims as soon as they are charged at the administrative level and then accurately articulating the reasons for the employment decisions, can minimize the risk of backpedaling on the facts later on in litigation.

If you have questions about your investigation policies, or any threatened or pending administrative charges, contact your Seyfarth attorney or the authors of this post.

By Annette Tyman and Andrew L. Scroggins

Seyfarth Synopsis: On May 18, 2023, the Equal Employment Opportunity Commission (EEOC) released Technical Assistance on the use of advanced technologies in the workplace titled Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (“TA”). The EEOC did not unveil new policies in the TA but reiterated that its long-standing policies and practices continue to apply to the technologies (such as artificial intelligence and machine learning tools) that are grabbing the public’s attention today.  The TA broadly defines the types of automated systems that may be subject to employment laws and poses seven questions and answers designed to help employers avoid discriminatory employment decisions regardless of whether those decisions are made by humans or machines. The publication is an important read for employers.

EEOC Emphasizes That Long-Standing Title VII Principles Apply Even To New Technologies

In its new publication, the EEOC acknowledges that it is not announcing new policies. Rather, the publication “applies principles already established in the Title VII statutory provisions as well as previously issued guidance” to advanced technologies used in the workplace.

In line with that approach, the EEOC makes clear that it takes a broad view of the types of technology it has the authority to cover. Specifically, any software, algorithm, AI, or other automated tool that is used to make “selection decisions” such as hiring, promotions, and terminations must be used in a manner consistent with EEO statutes. Expanding existing legal theories to emerging issues like AI and other technology tools fits squarely within the EEOC’s strategic enforcement priorities. A detailed examination of the EEOC’s strategic goals can be found here.

Focus On Disparate Impact Discrimination

The publication focuses on theories of “disparate impact” discrimination under Title VII of the Civil Rights Act of 1964. Disparate impact (sometimes called “adverse impact”) refers to the use of a facially neutral test or selection procedure that has the effect of disproportionately excluding members of a protected group, if the tests or selection procedures are not “job related for the position in question and consistent with business necessity.” Disparate impact theories are powerful tools for the EEOC, as they necessarily implicate broad swaths of potential “victims” in a single enforcement action, maximizing EEOC’s “bang for the buck.”

EEOC Expects Employers To Assess Algorithmic Decision-Making Tools For Adverse Impact In Accordance With The Uniform Guidelines on Employee Selection Procedures

Since 1978, the EEOC has directed employers to follow its Uniform Guidelines on Employee Selection Procedures (Guidelines) to determine whether tests and selection procedures are lawful under Title VII. In its TA, the EEOC reiterated that the Guidelines continue to apply to new technologies “when they are used to make or inform decisions about whether to hire, promote, terminate, or take similar actions toward applicants or current employees.”

As a result, the EEOC’s expectation is that employers will assess whether a selection procedure has an adverse impact on a particular protected group. The assessment requires a comparison between the selection rates for individuals in a protected group to those not in the protected group. Significant differences between the two groups must be remedied, unless the employer can show that the selection procedure is job related and consistent with business necessity.

Employers Can Be Responsible For Tools Designed And Administered By Others, Including Vendors

In the TA, the EEOC made it clear that employers may be held liable for the tools created by third parties, including software vendors, and cannot simply wash their hands of responsibility for the outcomes that flow from using tools developed by others.

The EEOC suggests that employers must, at a minimum, ask vendors whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for those in protected groups. However, the EEOC also makes clear that an employer cannot rely on the representations of its vendors. If the vendor says its assessment does not result in different selection rates, but disparate impact nonetheless results, the employer may still be on the hook for any adverse results. As a best practice, employers should vet any tools provided by third parties before putting the tools into use, and also implement audit procedures designed to monitor the results of using those tools to guard against any adverse impact.

With regard to third-party developers of tools, EEOC Commissioners have separately suggested that vendors themselves could be targeted by the Commission if their input into employment decisions is enough to bring them into the orbit of EEO laws. While not specifically addressed in the TA, this is an important issue that we will continue to track.

The Four-Fifths (80%) Rule Alone Does Not Provide A Safe Harbor For Measuring Allowable Differences In Selection Rates

One clarification likely to grab the attention of employers (and vendors) is the EEOC’s position that the well-known “four-fifths rule” may not be used as a sole measure to assess bias in a selection tool.  As described in the Guidelines, the four-fifths rule is one measure used to assess whether selection rates of two groups are “substantially” different. More specifically, if one group’s selection rate is less than 80% of that of the comparison group, the rates are considered substantially different.

In the TA, the EEOC emphasized that the four-fifths rule is only a “general rule of thumb” that is “practical and easy-to-administer,” and courts have found that it is not a reasonable substitute for statistical tests. Curiously, the EEOC has historically championed the use of the four-fifths rule to assess significance in other contexts. (See prior EEOC Guidance here for an example.)
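To make the mechanics concrete, here is a minimal sketch of how an audit might compare selection rates, apply the four-fifths rule of thumb, and run a standard two-proportion z-test for statistical significance. The applicant and selection counts below are hypothetical illustrations, not figures from the EEOC's TA or Guidelines:

```python
import math

def four_fifths_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one.
    A ratio below 0.80 is 'substantially different' under the rule of thumb."""
    lo, hi = sorted([rate_a, rate_b])
    return lo / hi

def two_proportion_z(sel_a, n_a, sel_b, n_b):
    """Two-proportion z-test with pooled variance; returns (z, two-sided p-value)."""
    p_a, p_b = sel_a / n_a, sel_b / n_b
    pooled = (sel_a + sel_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical audit numbers: (selected, applicants)
comparison_group = (48, 100)   # selection rate 0.48
protected_group = (33, 100)    # selection rate 0.33

ratio = four_fifths_ratio(0.48, 0.33)        # 0.6875 -- below the 0.80 threshold
z, p = two_proportion_z(33, 100, 48, 100)    # also tests statistical significance
print(f"four-fifths ratio = {ratio:.4f}, z = {z:.2f}, p = {p:.4f}")
```

With these numbers the tool fails the four-fifths rule and the difference is also statistically significant at the conventional 5% level, but the two measures can disagree, especially with small pools, which is why the TA cautions against relying on the four-fifths rule alone.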

The EEOC’s position is important as many vendors and employers have used the four-fifths rule articulated in the EEOC’s 1978 Guidance as a threshold analysis in bias audits.  The general thinking was that since the four-fifths rule was a well-established benchmark articulated by EEOC and long applied to other testing and assessment tools, it was a good threshold indicator of potential bias in the absence of other guidance. Indeed, the FAQs in the Guidelines provide that to assess adverse impact, “federal enforcement agencies normally will use only the 80% (4/5ths) rule of thumb, except where large numbers of selections are made.” (See FAQ Guidelines Q18).

In keeping with the clarifying comments in the EEOC’s TA, as well as issues related to the appropriate audit method depending on the sizes of the pools being analyzed, employers and vendors that are studying the effects of selection tools (or asking vendors about the tools they provide) may need to reassess their audit strategies. This may include implementing audit standards that evaluate both statistical significance and practical significance, using the four-fifths test or other “practical significance” methodologies.

Employers Should Act Upon Discovering That An Algorithmic Decision-Making Tool Results In A Disparate Impact

The EEOC encourages employers to conduct self-analyses before implementing any new tool, and periodically thereafter to ensure that the tool is operating free of bias. If an employer discovers that a tool would have an adverse impact, the EEOC’s expectation is that the employer will either take steps to remedy the impact or select a different tool to use going forward.

EEOC’s Initiatives And Guidance Related To Automated Systems and AI

The TA is part of the EEOC’s Artificial Intelligence and Algorithmic Fairness Initiative, first announced in October 2021, which was designed to ensure that AI and other emerging tools used in hiring and employment decisions comply with the federal civil rights laws that the agency enforces. Since that time, the EEOC has continued to beat the drum on the topic:

  • The EEOC has published guidance that discusses how existing requirements under the Americans with Disabilities Act (ADA) may apply to the use of AI, software applications, and algorithms in employment-related decision-making processes and practices and offered useful information and tips to employers in an effort to assist them with ADA compliance when using such tools.
  • The EEOC published a proposed Strategic Enforcement Plan (SEP) that announced its intention to focus on recruitment and hiring practices and policies that might give rise to discrimination against members of protected groups, including where employers use AI to aid decision-making.
  • The EEOC has held roundtable events and hearings to gather information and discuss the civil rights implications of the use of automated technology systems, including artificial intelligence, when hiring workers.
  • The EEOC has joined other federal agencies to release a joint statement to emphasize that the use of advanced technologies, including artificial intelligence, must be consistent with federal laws.

Employers should expect the EEOC to continue to focus on this topic.

Implications For Employers

The EEOC’s Technical Assistance document does not impose any new rules on employers. Rather, it is yet another reminder to employers that existing law applies to new and advanced technologies, and employers are responsible for employment decisions that impact applicants and employees, whether made by people or with the assistance of machines.  Employers should dust off the 1978 Guidelines and supporting materials and take a fresh look at them as they consider the various technologies that may be used to support employment decisions. 

The publication also represents more foreshadowing of the EEOC’s enforcement priorities, showing once again that the EEOC will scrutinize the technological tools that employers increasingly rely on to make hiring and employment decisions. Employers are well-served to track EEOC charges filed against them that include allegations concerning technological tools, as well as those that may prompt the EEOC to issue requests for information about these tools.

By Karla Grossenbacher

Seyfarth Synopsis:  On May 11, 2023, the same day on which the federal government ended the national health emergency related to COVID-19, Florida Governor Ron DeSantis signed into law a sweeping bill (SB 252) that prohibits businesses in Florida from discriminating in any way against a person based on vaccination status and from requiring face masks and COVID-19 tests.

This law amends the existing law in Florida that prohibits businesses from requiring documentation of vaccination status from customers and patrons.  The amendments transform current law by broadening the restrictions on businesses and expanding them to apply to any “person” and not just “customers” and “patrons.”

The law’s anti-discrimination provisions effectively prevent any business, including employers, from having any kind of vaccine mandate.   Although many employers are moving on from vaccine mandates and rules around masking and testing in light of the Biden Administration’s declaration of the end of the national health emergency related to COVID-19, many hospitals and healthcare providers have not, and indeed, some cannot do so until the rule promulgated by the Centers for Medicare and Medicaid Services for certain health care providers is discontinued. 

Billed as a “medical freedom” law, SB 252 provides that a business entity (defined broadly as any form of corporation, partnership, association, cooperative, joint venture, business trust, or sole proprietorship that conducts business in Florida) may not:

  1. Require any person to provide any documentation certifying COVID-19 vaccination or post-infection recovery from COVID-19;
  2. Require a COVID test to gain access to, entry upon, or service from the business operations in this state or as a condition of contracting, hiring, promotion, or continued employment with the business entity; 
  3. Discharge or refuse to hire a person; deprive or attempt to deprive a person of employment opportunities; adversely affect a person’s status as an employee or as an applicant for employment; or otherwise discriminate against a person based on knowledge or belief of the person’s status relating to vaccination with any vaccine defined under subsection (2) or COVID-19 post-infection recovery, or a person’s failure to take a COVID-19 test;
  4. Require a person to wear a face mask, a face shield, or any other facial covering that covers the mouth and nose; or
  5. Deny a person access to, entry upon, service from, or admission to such entity or otherwise discriminate against a person based on such person’s refusal to wear a face mask, a face shield, or any other facial covering that covers the mouth and nose.

The restrictions regarding facial coverings do not apply to health care providers or health care practitioners, as defined under the law, as long as they are in compliance with the soon-to-be-developed facial covering standards for health care that the law directs the Florida Department of Health (DOH) and the Agency for Health Care Administration to develop by July 1, 2023.  Health care providers and practitioners have until August 1, 2023 to develop facial covering policies and procedures that conform to these standards to the extent they wish to require persons to wear facial coverings for any reason. The facial covering restrictions in the law also do not apply to a business entity when a face mask, a face shield, or any other facial covering that covers the mouth and nose is required safety equipment consistent with occupational or laboratory safety requirements, in accordance with standards to be adopted by the Florida DOH. 

If a business requires a person to receive a vaccine other than a COVID vaccine (as defined by the law), the business entity must provide for exemptions and reasonable accommodations for religious and medical reasons in accordance with federal law.

Businesses that violate the law are subject to fines of up to $5,000 for each individual and separate violation of the law.  However, the law does not limit the right of a person to recover damages or other relief for violations of this law under any other applicable law.  The law is silent on its interaction with federal law in the event there are conflicting federal requirements.

Unlike Florida’s vaccine exemption law that is set to expire on June 1, SB 252 is not a temporary measure.  This law will be in place when and if another pandemic occurs and will tie the hands of businesses in Florida with respect to putting safety measures in place. 

By Linda C. Schoonmaker and Elizabeth L. Humphrey

Seyfarth Synopsis: Hospitals spend copious amounts of time developing hiring systems to ensure that the most-qualified applicants are selected for jobs in the healthcare field. The lives of millions of Americans depend on a hospital’s ability to provide access to knowledgeable healthcare providers, which ensures excellence in patient care. EEOC v. Methodist Hospitals of Dallas, a recent opinion from the Fifth Circuit, suggests that hospitals may not be able to hire the most qualified applicant when faced with certain accommodation requests under the Americans with Disabilities Act. 

Adriana Cook worked as a patient care technician (PCT) at Methodist Hospital in Dallas. In 2012, Cook injured her back on the job while turning a patient. Following her injury, Cook was unable to return to her PCT position and was assigned to a temporary light-duty role in the pharmacy. After unsuccessfully attempting to return to her PCT job, Cook’s physicians certified that she was physically unable to work for several months.

Between April and July 2012, Cook submitted five requests for FMLA leave, and Methodist’s third-party administrator approved each request. While on FMLA leave, Cook requested accommodations that she believed would enable her to perform the more strenuous tasks of the PCT position. Instead of offering reasonable accommodations, Methodist encouraged Cook to seek other work. On July 2, Cook applied for a scheduling coordinator job, which she would not have needed an accommodation to perform. Cook met the minimum qualifications; however, another candidate was selected.

On July 12, Cook’s physician sent Methodist a letter stating that Cook “is unable to physically return to the type of work involved in patient care at the hospital” and that the restriction was permanent. On August 3, Methodist began discussing Cook’s need for accommodations. Ultimately, Methodist determined that Cook should take personal leave so it could fill her position. On August 7, Methodist sent Cook a letter offering her six months of unpaid personal leave with no guarantee of reemployment. The letter requested medical documentation certifying Cook’s inability to work, and stated that, if Cook failed to submit the requested certification, Methodist would presume that she resigned. Cook did not submit the documentation requested and was terminated.

The EEOC sued Methodist for allegedly violating the Americans with Disabilities Act (ADA). The EEOC alleged that Methodist’s categorical policy of hiring the most qualified candidate violates the ADA when a qualified disabled employee requests reassignment to a vacant role, even if he or she is not the most qualified applicant. The EEOC also alleged that Methodist failed to reasonably accommodate Cook, who was not reassigned to the vacant position for which she applied. The district court granted Methodist’s motion for summary judgment on both claims. The EEOC appealed, arguing that the Supreme Court’s ruling in US Airways, Inc. v. Barnett requires Methodist to make exceptions to its most-qualified-applicant policy and that Cook was entitled to a reasonable accommodation under the ADA.

The Fifth Circuit vacated the judgment of the district court as to Methodist’s most-qualified-applicant policy and remanded for further proceedings consistent with its opinion. In rendering its opinion, the Fifth Circuit discussed the Barnett opinion, which held that reassignment is not a reasonable accommodation when an employer has an established seniority system. Barnett also set out a two-step test for determining whether an accommodation is reasonable. First, the employee must show that the requested accommodation seems reasonable on its face, i.e. in the “run of cases.” If an employee makes that showing, the employer must then demonstrate that the requested accommodation poses an undue hardship in the particular circumstances. If the employee fails to show that the accommodation is reasonable in the “run of cases,” he or she may show that special circumstances warrant a finding that the requested accommodation is reasonable on the particular facts of the case.

The Fifth Circuit upheld the district court’s holding that mandatory reassignment in violation of Methodist’s most-qualified applicant policy is not reasonable in the run of cases. The Court noted that “[t]he level of preferential treatment the EEOC asks for would compromise the hospital’s interest in providing excellent and affordable care to its patients and would be unfair to [Methodist’s] other employees.” The Court stated that the “EEOC’s proposed course of action turns the shield of the ADA into a sword” and “imposes substantial costs on the hospital and potentially on patients.” The Court recognized that “[w]hen lives of patients are on the line, mandatory reassignment in violation of a best-qualified system is unreasonable in the run of cases.”

However, the Court found that the district court failed to address the second step of Barnett, that is, whether the requested accommodation is reasonable as applied to Cook. Therefore, the Court remanded the case to the district court, with instructions that it focus on determining whether the EEOC can raise a genuine issue of material fact as to whether Cook’s case presents special circumstances that justify an exception to Methodist’s most-qualified-applicant policy.

Although sympathetic to healthcare employers’ need to hire the most-qualified employees in their endeavor to provide excellent patient care, the Cook opinion invites employees to argue that they are entitled to a mandatory reassignment based on the specific facts of their case. This position could have a considerable impact on the costs of hiring in the healthcare industry (and beyond). A healthcare employer seeking to avoid a lawsuit may be more inclined to incorporate an additional layer of legal assessment in its hiring process, one that weighs the litigation risk of hiring the most-qualified applicant (an ADA suit) against the risk of hiring a less-qualified applicant (a malpractice suit).

Healthcare employers should consider adopting clear, written policies that explain the business criteria for selecting applicants. An employee seeking mandatory reassignment as an accommodation may have more difficulty showing that the assignment is reasonable in the face of express policies delineating the employer’s justification for hiring the most-qualified applicant, i.e. to secure patient safety and to provide outstanding patient care. Healthcare organizations should regularly review their hiring policies in order to ensure their business and hiring principles are clearly articulated in a manner that insulates the employer from litigious employees.

For more information on this or any related topic, please contact the authors, your Seyfarth attorney, or any member of the Workplace Counseling & Solutions Team or the Workplace Policies and Handbooks Team.

By John P. Tomaszewski and Jason Priebe

Seyfarth Synopsis: Tennessee and Montana are now set to be the next two states with “omnibus” privacy legislation. “Omnibus” privacy legislation regulates personal information as a broad category, as opposed to data collected by a particular regulated business or collected for a specific purpose, such as health information or financial and payment card information. As far as omnibus laws go, Tennessee and Montana are two additional data points informing the trend we are seeing at the state level regarding privacy and data protection. Fortunately (or unfortunately, depending on your point of view), these two states have taken the model initiated by Virginia and Colorado instead of following the California model.

Is there Really Anything New?

While these two new laws may seem to be “more of the same,” the Tennessee law contains some interesting new approaches to the regulation of privacy and data protection. Both the Tennessee and Montana laws include the usual set of privacy obligations (notice requirements, rights of access and deletion, restrictions around targeted advertising and online behavioral advertising, et cetera), but Tennessee has taken the unusual step of building into its law, the Tennessee Information Protection Act (“TIPA”), specific guidance on how to actually develop and deploy a privacy program.

Previously, privacy compliance programs have been structured in a wide variety of ways, mostly as a result of the operational necessities of various businesses. With Tennessee’s new law, we now see a state attempting to standardize how businesses develop and implement privacy programs around more clearly defined NIST standards, as opposed to the traditional but nebulous concepts of “reasonableness” and “adequacy.”

NIST Privacy Framework

Tennessee’s law incorporates standardized compliance concepts by requiring the use of the National Institute of Standards and Technology (“NIST”) privacy framework entitled “A Tool for Improving Privacy Through Enterprise Risk Management, Version 1.0.” More specifically, the TIPA states that “…a controller or processor shall create…” its privacy program using this framework. Unfortunately, it is unclear for now whether failure to use the NIST framework would actually constitute a violation of the law. One could potentially argue that if a program fulfills all of the obligations of the TIPA, it should not matter what framework is used.

Part of the concern around “mandatory” use of the NIST framework is that the framework is somewhat complicated to implement, and does not factor in the size, capabilities, and processing risk activity of a particular organization. Since NIST intended the framework to cover a wide range of use cases and operational complexities, the framework is inherently complex. As a consequence, smaller and less mature organizations will likely struggle to implement a privacy program under the NIST framework. This is particularly true because, while the NIST framework has various levels of maturity for a privacy program, the TIPA doesn’t articulate what “tier” of program maturity a controller needs to fulfill within the NIST framework to be compliant.

The whole issue of “mandatory v. permissive” use of the NIST framework is further muddied by the TIPA giving an affirmative defense to controllers who use the NIST framework. If the NIST framework were an affirmative obligation, it would not be necessary to articulate its use as an affirmative defense. In our opinion, Tennessee may have been better served by providing a safe harbor for privacy programs built under the NIST framework, as opposed to mandating that all programs use it. In any event, further clarity as to what constitutes “compliant” use of the NIST framework would be helpful.

Privacy Certification

Another useful concept the TIPA introduces is participation in a certification program acting as evidence of compliance with the law. While not truly a “safe harbor,” controllers that participate in the Asia-Pacific Economic Cooperation (“APEC”) Cross-Border Privacy Rules (“CBPR”) system may have their certification under these rules operate as evidence of compliance with the TIPA. Outside of one specific federal privacy law (i.e., COPPA), neither federal nor state privacy laws have officially recognized certification schemes as providing evidence of compliance with the relevant law.

In the end, while there may be confusion in some of the components of the TIPA, Tennessee can be commended for attempting to provide more commercially viable guidance on how to comply with the TIPA, at least from the perspective of building out a privacy program. Additionally, this is the first time in the United States we have seen the use of privacy certification schemes as legally relevant evidence of compliance. Privacy certification systems have been around for some time, but they have almost never been capable of demonstrating legal compliance.

By Patrick Muffo

Seyfarth Synopsis: Last week, a joint statement was issued by four federal agencies expressing their apprehension regarding the use of AI for discriminatory or anticompetitive purposes and outlining their plans for regulation. This comes on the heels of Elon Musk requesting a “pause” in AI development and meeting with Senator Chuck Schumer to guide the statutory framework of AI management. It also comes as China and the EU unveiled their own proposed laws surrounding the regulation of AI. How will governments across the globe respond to this pivotal technology? And for the United States, how will our government respond?

1. Joint Statement

The Consumer Financial Protection Bureau (CFPB), the Justice Department’s Civil Rights Division, the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) released a Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems (“Joint Statement,” found here). The Joint Statement expressed the collective concern that AI would be used for discriminatory or anticompetitive purposes.

  • The CFPB is concerned about the discriminatory use of AI in lending decisions. “[T]he fact that the technology used to make a credit decision is too complex, opaque, or new is not a defense for violating these laws.”
  • The EEOC similarly cautioned against the use of AI “to make employment-related decisions about job applicants and employees” and made clear that there is no technology exception to its rules.
  • The Civil Rights Division followed suit, explaining “the Fair Housing Act applies to algorithm-based tenant screening services.”
  • The FTC predictably focused on the potentially anticompetitive impact of AI and “warned market participants that it may violate the FTC Act to use automated tools that have discriminatory impacts, to make claims about AI that are not substantiated, or to deploy AI before taking steps to assess and mitigate risks.”

This is all to say that a business cannot outsource its liability to AI. In the realm of decision-making and data-driven solutions, pleading ignorance of the ways in which AI is utilized is not, nor will it ever be, an adequate defense. Many AI algorithms are a black box to the user, so the views of the Joint Statement will be difficult to mesh with the practical use of artificial intelligence tools.

2. International AI Laws

Countries across the globe are rushing to institute their own regulation of AI in response to how it is implemented in their societies. China’s law would require AI models to pass a security assessment before the tool is publicly released. This development is in response to the recent unveiling of numerous generative AI tools by large Chinese companies, which have the capability to simulate human speech and generate text and image-based content.

The EU AI Act would assign three categories to AI tools based on their risk level. The most egregious tools are assigned “unacceptable risk, such as government-run social scoring of the type used in China.” These would be outright banned. Less egregious tools would be deemed “high-risk applications, such as a CV-scanning tool that ranks job applicants” and such tools would be regulated, but not per se banned. Those models not falling within either category would be “largely left unregulated,” whatever that means.

3. What will the United States do?

The Joint Statement makes clear that there is no AI exception to current regulatory rules. It also implies that the agencies believe their regulatory framework is already set. For example, using an AI tool to discriminate on the basis of gender or race is illegal—just as it always has been. Anticompetition is anticompetition—AI or not. It appears the agencies are awaiting congressional action for more aggressive oversight of AI.

How Congress reacts will be interesting to watch. The stakes could not be higher and legislation will certainly take time. It is helpful that Congress is seeking guidance from the private sector and understanding the ways in which this revolutionary technology is being used. Perhaps they should seek guidance from AI itself?

4. What AI Has to Say About…Regulation of AI

I had the following exchange with ChatGPT to see what it felt would be effective AI regulation in the financial lending industry:

Pat: I work for the government. We are concerned that artificial intelligence may cause financial firms to discriminate in their lending. We are also concerned that companies will use artificial intelligence to gain an anticompetitive advantage. What are some policies that may prevent such abuses of artificial intelligence?

ChatGPT: There are several policies that could help prevent discrimination in lending and anticompetitive behavior resulting from the use of artificial intelligence (AI):

  1. Algorithmic transparency: Requiring financial firms to disclose the data, models, and algorithms used in their lending decisions would make it easier to detect and address discriminatory practices.
  2. Ethical guidelines: Developing ethical guidelines for the use of AI in financial services could help ensure that algorithms are designed and implemented in a way that is fair and non-discriminatory.
  3. Oversight and regulation: Government agencies could establish oversight and regulation to ensure that AI applications in financial services comply with ethical and legal standards, and do not lead to unfair market practices.
  4. Anti-discrimination laws: Enforcing anti-discrimination laws can help prevent lending decisions based on factors such as race, gender, or religion, even if those factors are indirectly inferred from other data.
  5. Competitive safeguards: Promoting competition by preventing the accumulation of market power by large firms could help prevent anticompetitive practices enabled by the use of AI.
  6. Data protection and privacy: Ensuring the protection of customer data and privacy rights could also help prevent discriminatory and anticompetitive practices enabled by AI.

It will be interesting to see how government responds and who—or what—they listen to. We will certainly keep you updated as developments arise.

By Christopher Kelleher, Adam Rongo, and Christopher DeGroff

Seyfarth Synopsis: The EEOC has released technical assistance on preventing workplace harassment in the federal sector. While the guidance does not specifically apply to private employers, it provides important lessons for employers when dealing with workplace harassment and avoiding liability in employment litigation, and insight into how the EEOC views these concepts.

On April 20, 2023, the U.S. Equal Employment Opportunity Commission (“EEOC”) issued a technical assistance document, Promising Practices for Preventing Harassment in the Federal Sector, which provides practical tips for preventing and addressing workplace harassment. While the guidance is geared toward federal government employers, it provides helpful recommendations applicable to the private sector as well, and also provides an important perspective on how EEOC enforcement efforts may be evolving. Some of the highlights are discussed below.

Leadership and Accountability

According to the EEOC, all employers should establish and maintain an anti-harassment program, with neutral staff members who are responsible for promptly, thoroughly, and impartially investigating allegations of harassment, and for taking immediate and appropriate corrective action. Federal agencies must begin investigations within 10 calendar days of receipt of the harassment allegation. While there is no specific time period within which private sector employers must investigate internal allegations of harassment, liability can often hinge on the promptness of the investigation and remedial measures. As such, all employers should begin workplace harassment investigations as soon as practicable. As a practice tip, however, employers may consider this 10-day interval as one at least facially supported by the EEOC.

The guidance also recommends that employers take other actions to demonstrate commitment to preventing and addressing harassment, such as conducting climate and exit surveys, reviewing harassment allegations to guide future policy changes, and ensuring that consistent penalties are implemented and enforced in the event that workplace harassment occurs.

Comprehensive and Effective Anti-Harassment Policy

The EEOC states that government employers should establish and maintain a comprehensive anti-harassment policy that is regularly disseminated to all employees. Such a policy is often an important element of an effective harassment prevention strategy, and also helps private employers limit liability. In its guidance, the EEOC requires federal agency policies to provide a clear explanation of prohibited conduct, and prohibit harassment on all protected bases, including race, color, sex (including sexual orientation, gender identity, and pregnancy), national origin, religion, disability, age (40 years or older), genetic information (including family medical history), and retaliation. Employers should note, however, that protected categories may vary by jurisdiction, and workplace harassment policies generally provide broader coverage than the law requires.

According to the EEOC, the policy should provide multiple avenues for employees to report harassment, including to individuals or departments outside the employee’s reporting chain of command. The guidance provides that the policy should also include assurances that employees making complaints of harassment will be protected from retaliation, that the employer will take prompt corrective action to prevent or address harassing conduct, and that the employer will keep the identities of individuals involved (i.e., the complainant, witnesses, alleged victim, and alleged harasser) confidential, to the extent possible, consistent with legal obligations and the need to conduct a thorough investigation.

The EEOC also recommends implementing additional measures, such as:

  • Providing explicit assurances that the policy applies to all employees;
  • Widely disseminating the policy and making it available both onsite and online;
  • Allowing for anonymous reporting of harassment through platforms, such as hotlines and websites;
  • Ensuring that reports of harassment and harassing conduct are well-documented;
  • Ensuring that investigations are not conducted by individuals who have a conflict of interest or bias; and
  • Providing guidance on the processes and procedures for addressing harassment allegations involving non-employees, such as contractors, guests, volunteers, or customers.

Effective Anti-Harassment Training

The EEOC stresses that, to help prevent and properly address harassment, employees and management must be aware of what conduct is prohibited and how to prevent and correct it. As such, federal agencies are required to provide periodic anti-harassment training to both supervisory and non-supervisory employees at all levels. The EEOC does not identify what it considers “compliance.”

Private employers should try to establish such training where possible, as it can prevent harassment, boost employee morale and a sense of workplace safety, and help employers limit liability. The EEOC recommends that the training be tailored to the specific workplace and regularly updated to ensure compliance. When considering such a policy, it is important to note that some states, such as California, Illinois, and New York, require anti-harassment training, and some cities, such as Chicago and New York City, have additional requirements. Many other states and municipalities are considering implementing workplace harassment measures.

Implications for Employers

Though the EEOC’s technical assistance is geared towards federal agencies, the document provides useful insights for private employers attempting to determine what the EEOC considers to be an adequate anti-harassment program. This includes establishing and widely disseminating anti-harassment policies, promptly investigating internal complaints, taking prompt and effective remedial measures where appropriate, and conducting regular anti-harassment trainings. In establishing an anti-harassment program, employers should look beyond the technical assistance document and also take into account any relevant state and local laws. If you have questions about your anti-harassment practices, or any threatened or pending harassment litigation, contact your Seyfarth attorney or the authors of this post.