By Karla Grossenbacher

Seyfarth Synopsis: Since ChatGPT became available to the public at large in November 2022, employers have been wondering, and asking their employment lawyers, “What kind of policies should we be putting in place around the use of ChatGPT in the workplace?”  Although at this stage it is difficult to imagine all of the different ways ChatGPT, and its subsequent iterations, could be used by employees in the workplace, it is important to consider some of the more obvious use cases and how employers might choose to address them in workplace policies.

What is ChatGPT?

ChatGPT is a form of artificial intelligence (AI): a language model trained to interact in a conversational way.  At its most basic level, AI is a computer system able to perform tasks that normally require human intelligence.  To achieve this, the AI needs to be trained.  First, massive data sets are fed into a computer algorithm.  The trained model is then evaluated to determine how well it performs in making predictions when confronted with previously unseen data; for ChatGPT, that means predicting the next word in a given context, which produces the conversational tone for which it has become known.  Lastly, the AI goes through a testing phase to find out whether the model performs well on large amounts of new data it has not seen before.  That is the phase in which ChatGPT currently finds itself.

Legal Risks for Employers

Given how AI is trained and learns, significant issues can arise for employers when employees use ChatGPT to perform their job duties.  Two big concerns when employees obtain information from a source like ChatGPT in connection with their work are accuracy and bias.

ChatGPT’s ability to supply information as an AI language model is only as good as the information from which it has learned and on which it has been trained.  Although ChatGPT has been trained on vast swaths of information from the Internet, by its very nature as AI, there are and will continue to be gaps in ChatGPT’s knowledge base.  The most obvious example of such a gap is that the current version of ChatGPT was trained only on data sets available through 2021.  On top of that, not everything that appears on the Internet is true, so there will be some built-in accuracy problems with information provided by ChatGPT given the data on which it was trained.  With respect to legal risk, if employees rely on ChatGPT for information in connection with work and do not independently fact-check that information for accuracy, obvious problems can arise depending on how the employee uses the information and to whom the information is provided.  Thus, it would make sense for employers to have policies that put guardrails on when and to what extent it is permissible for employees to obtain information from ChatGPT in connection with their work.

There is also the question of inherent bias in AI.  The EEOC is focused on this issue as it relates to the employment discrimination laws it enforces, and state and local legislators are proposing, and in some jurisdictions have already passed, legislation that places restrictions on the use of AI by employers.  As described above, the information AI provides is necessarily dependent on the information on which it is trained (and on those who make decisions about what information the AI receives).  This bias could manifest itself in the types of information ChatGPT offers in response to questions presented in “conversation” with it.  Also, if ChatGPT is consulted in connection with employment decision-making, this could lead to claims of discrimination, as well as compliance issues under state and local laws that require notice of the use of AI in certain employment decisions and/or audits of AI before it is used in certain employment contexts.  Because of the risks of bias in AI, employers should include in their policies a general prohibition on the use of AI in connection with employment decisions absent approval from the legal department.

The other big concern for employers when thinking about how employees might use ChatGPT in connection with work is confidentiality and data privacy.  Employers are naturally concerned that employees will share proprietary, confidential and/or trade secret information when having “conversations” with ChatGPT.  Although ChatGPT represents that it does not retain information provided in conversations, it does “learn” from every conversation.  And of course, users are entering information into their conversations with ChatGPT over the Internet, and there is no guarantee of the security of such communications.  Thus, even though the details of how exactly confidential employer information could be impacted if revealed by an employee to ChatGPT remain to be seen, prudent employers will include in employee confidentiality agreements and policies prohibitions on employees referring to or entering confidential, proprietary or trade secret information into AI chatbots or language models, such as ChatGPT.  A good argument could be made that information is not being treated as a “trade secret” if it is given to a chatbot on the Internet.  On the flip side, given that ChatGPT was trained on wide swaths of information from the Internet, it is conceivable that employees could receive and use information from ChatGPT that is trademarked, copyrighted and/or the intellectual property of another person or entity, creating legal risk for the employer.

Other Employer Concerns

In addition to these legal concerns, employers should also consider to what extent they want to allow employees to use ChatGPT in connection with their jobs.  Employers are at an important crossroads in determining whether and to what extent to embrace or restrict the use of ChatGPT in their workplaces.  Employers will need to weigh the efficiency and economy that could be achieved by employees using ChatGPT to perform tasks such as writing routine letters and emails, generating simple reports, and creating presentations against the potential loss of developmental opportunities for employees in performing such tasks themselves.  ChatGPT is not going away, and in fact, a new and improved version should be out within the year.

Employers will ultimately need to address the issue of ChatGPT’s use in their workplaces, and the next iteration is going to be even better.  For all of the risks ChatGPT can present for employers, it can also be leveraged by them.  The discussion has just started.  Employers – like ChatGPT – will likely be learning and beta testing on this for a bit.