Employment Law Focus

Employment Law Focus: AI and employment law

Episode Summary

How would you react if you had won at an Employment Tribunal, only to be told that the Judge's decision had been written for him by AI? This thought-provoking scenario recently became a reality when a Judge used ChatGPT in part of a judgement. While the Judge emphasised the importance of using AI tools judiciously, the incident raises critical questions about what AI might mean for employment tribunals, employment law and employees.

Episode Notes

In this episode Jonathan Rennie, partner at TLT, is joined by Sarah Maddock, senior knowledge lawyer at TLT, and Emma Erskine-Fox, managing associate in the Technology and Intellectual Property team at TLT, to look at the impact of AI on employment law and discuss: what generative AI is and why it has attracted so much attention; the discrimination and data protection risks of AI-assisted decision making; the emerging regulatory picture in the UK and beyond; and practical steps for employers, from governance and policies to training and human oversight.

Episode Transcription

Jonathan Rennie (JR)

Hello and welcome to this episode of TLT's Employment Podcast. I'm Jonathan Rennie, a Partner and a member of TLT's UK-wide employment team, and today I'm joined by my colleagues Sarah Maddock, Senior Knowledge Lawyer, and Emma Erskine-Fox, a Managing Associate in our Tech, IP and Data team.

Today we're going to discuss AI and its impact on employment law. How would you react if you'd won at an employment tribunal only to be told that the judge's decision had actually been written using artificial intelligence? I'm assuming you'd be a little bit surprised and perplexed, and maybe thinking about an appeal, but that is actually something that happened recently when a judge used ChatGPT in part of their judgement. Seemingly the judge was very clear that he had used the AI tool to write a particular paragraph summarising an area of law that he already understood, and he emphasised the dangers of relying on ChatGPT for help with information the user knows nothing about. This shows the power of large language model AI tools, and it raises all sorts of questions about what this might mean for employment tribunals, for employment law and for the employment relationship itself, and how employees and employers deal with this technology. Today, we're going to talk through some of the more immediate concerns for employers as they navigate this new frontier.

Sarah Maddock (SM)

So, Emma, I've noticed that the term AI can be bandied about to cover a whole range of different types of technology, everything from driverless cars through to those annoying chat bots that pop up to offer help. Very briefly, could you cut through the hype and explain to us why generative AI such as ChatGPT has been getting so much attention lately?
Emma Erskine-Fox (EEF)

I actually did an experiment and asked ChatGPT "what is AI?" a few times before we recorded this episode, and even ChatGPT came up with several different definitions. So, there's definitely no universal agreement on a definition of AI. But the point that you've just got across, Sarah, which I think is really important to remember, is that it's an umbrella term for lots of different types of technology. The essential component is that it covers technologies that enable machines to act with human-like intelligence: to comprehend data, learn from that data and act on it in a way that traditionally would have required human input. And AI is not a new thing; it's been around for a long, long time. You mentioned chat bots; chat bots have been around for absolutely ages. Even at TLT, we've been using AI to assist in our provision of legal services for quite some time now. Generative AI and ChatGPT have captured the public imagination. One of the big differences with generative AI tools is that chat bots are very specific, responding to certain particular keywords or triggers, whereas ChatGPT is much more complex than that: it can respond to much more complex questions and come out with very comprehensive, very full answers on virtually any topic. The reason it can do that is because it essentially takes everything from the internet and runs that huge amount of data through very powerful neural networks, which are software loosely modelled on neurons in the human brain. It scrapes all that data, runs through all of it to push out an answer, and that's why I think it's different from the AI tools that we've seen previously.
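
For readers who want a feel for the "produce output from patterns learned in data" idea Emma describes, here is a deliberately toy sketch in Python: a bigram model that counts which word tends to follow which in a tiny corpus, then generates text by sampling. Real generative AI uses neural networks over vastly more data, but the core idea of predicting the next token from learned statistical patterns, rather than retrieving facts, is the same, which also hints at why hallucinations can happen.

```python
# Toy bigram "language model": learns which word tends to follow which,
# then generates text by sampling from those learned frequencies.
# The corpus and output are invented for illustration only.
import random
from collections import defaultdict

corpus = "the tribunal heard the claim and the tribunal upheld the claim".split()

# Count how often each word follows each other word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 6) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:  # dead end: no observed successor
            break
        word = random.choice(follows[word])  # sample the next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the claim and the tribunal upheld the"
```

The generated sentence is plausible-sounding but carries no guarantee of truth; it is only a statistically likely sequence, which is the seed of the "hallucination" problem discussed later in the episode.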

JR

And in terms of the speed of it, Emma, obviously that lends itself to judges writing judgments and people looking for shortcuts. I'm assuming it's incredibly rapid.

EEF

Yes, absolutely. It's immediate, and the scope of use for it, the different use cases, I think that's the thing that has really captured the public imagination. According to Reuters, it's the fastest-growing app in history: it hit 100 million monthly active users within two months of launch. For a bit of context, it took Instagram two and a half years to get to 100 million monthly active users, and TikTok nine months. So the scale at which this tool has grown is really quite astounding.

SM

So, those types of application that you're describing sound so much more sophisticated than the similar types of tech that we've seen to date. Big question for you, Emma: is this something that listeners should be concerned about or embracing wholeheartedly?

EEF

I don't think there's necessarily a binary, black and white answer. What I would say is there's a lot of fear around AI, but actually it can be an incredibly beneficial tool if it's used in the right way. There is a lot of research coming out that suggests very positive long-term impacts of AI, and certainly tests and studies have shown increased productivity when workers are combined with AI, when you have that combination of people and technology to make workers as productive as possible. So, it's not all about the dangers and the negative side of things that you often read about in the press. That said, there are risks, and we'll no doubt come on and talk about some of those in a moment. I think it can be very easy for organisations to get quite carried away and very excited about this rapidly improving and exploding technology and think, gosh, we need to implement as much AI as we possibly can within our business, and we need to do it now. Actually, we really need to take a step back and think through what the risks of implementing that technology are, from a discrimination and Equality Act point of view, from a data protection point of view, all things that we'll come on and talk about more in the episode. All of those need to be addressed and considered right at the outset of implementing that technology, because otherwise you'll find yourself in a position where you've put this new tool in place, but the right processes haven't been gone through, and you're potentially at risk of various breaches and potentially putting your employees and your business at risk as well.

JR

I think there were two broad areas of risk that Sarah and I had spoken about before. The first was general issues related to the technology itself, which I suppose is a bit futuristic, but the idea that it might infect, pollute, corrupt or alter your business model without you knowing about it. And then secondly, the wider employment law risks. When we talked about the technology itself, Sarah was telling me about the concept of hallucinations, which I broadly understand, of course, but in respect of AI, Sarah, what was that defined term, if you like?

SM

When we talk about hallucinations in respect of AI, what we mean is that a generative AI tool will, as Emma said, scrape information from the whole of the internet and then present that information as a very plausible-sounding answer. But what it won't do is tell the user when it has undertaken its scan of the internet, hasn't been able to find the right data, and has therefore produced a result which is completely false. It will present those findings as fact without any warning. Those presentations of false information are what we refer to as hallucinations.

JR

And obviously there are intellectual property concerns about who would own that intelligence or that reporting, and we can predict developments in areas of fraud and crime as well. I think in the employment context, the main concern would be discrimination risk: the question as to whether AI is actually fairer, more impartial and unbiased, given that, at its base, the coding and sourcing of some of this information will presumably rely upon humans. It seems common sense that AI can get things wrong, and a very good example of this was Amazon. With all their global resources and finances, in 2014 they set up a team in Scotland to develop their own automated CV screening algorithm, using a decade's worth of internal recruitment data, and they essentially found that the data-driven tool quite simply did not like women. Women were being rejected, and it was identified that the algorithm looked at certain traits and qualifications that tended to exclude women. The project was ultimately abandoned, but it shows what can happen, and it raises questions about the control measures and the degree of trust that can be placed in these tools. Taking that further, even where these tools appear to work properly and appropriately, there might still be a discriminatory decision to appoint a particular candidate, and those decisions could then be challenged. Sarah, in terms of that, is there something that concerns you about AI and generative decision making?
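
As an illustration of the kind of check that can surface the problem Jonathan describes, here is a minimal sketch of an adverse-impact audit using the "four-fifths" rule of thumb, comparing selection rates between groups. All figures and the threshold application here are invented for illustration; a real audit would be far more rigorous and would look at many more characteristics.

```python
# A minimal sketch of a bias audit for an automated CV screening tool:
# compare selection rates across groups using the "four-fifths" rule of
# thumb. All numbers below are hypothetical, for illustration only.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical screening outcomes from an automated CV filter.
rate_men = selection_rate(selected=120, applicants=400)    # 0.30
rate_women = selection_rate(selected=45, applicants=300)   # 0.15

ratio = rate_women / rate_men
print(f"Selection rate ratio: {ratio:.2f}")  # 0.50 in this example
if ratio < 0.8:  # ratios below four-fifths warrant investigation
    print("Potential adverse impact - investigate the screening tool")
```

A check like this would not explain why the tool disfavours a group, but it flags that something needs investigating before the tool makes, or informs, real decisions.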

SM

Yeah, I think when an AI tool is brought in to make management decisions, there are a number of risks. One of which is, like you mentioned, bias being baked into the system itself. Another risk is around what's described as the "black box" nature of decision making by generative AI tools. The technology is so complex, in the way it's developed, from initial coding right the way through to final use by an end-user manager, that it can often be very hard to establish any legal accountability for the decision making made within that technology, especially when a manager is applying the tool without any understanding at all of how the decision making happens inside it. That's what's referred to as a "black box" of decision making, which then collides with the equalities framework. One of the key notions underpinning that framework is accountability, and query how easy it is to be accountable for decisions when they're made by a piece of technology which is very little understood by the manager applying it.

EEF

Yeah, Sarah, I absolutely agree. And I think that "black box" decision making you've talked about is a key feature of AI, but one that from a data protection perspective carries its own specific risks. So, when you're processing personal data about your employees, if you're doing that to make a decision, you have to tell people what's going on with that processing, how their data is being processed, and you also have to tell people how those decisions have been arrived at. And obviously that is very difficult to do if the decision is taking place inside this "black box". You're going to be very heavily reliant on AI vendors for these tools. The likelihood is you're not going to be developing them in-house, so control over the development, testing and training of the model is all going to sit with the vendor, and they may be quite unwilling to give you information that they think could jeopardise their own intellectual property or their trade secrets. So, you've got a challenge in getting enough information out of your vendor to enable you to understand how a decision has been made and to explain that to individuals. Even if you can get over that hurdle, you've then got a second hurdle: these systems are very complex, and distilling that into something individuals can really understand, in terms of the impact on their data and the impact of the decision on them, can in itself be tricky. And there's a third potential challenge to think about as well, which is that in cases of true machine learning, if the algorithm goes off, learns by itself and finds different ways and different purposes for processing the data, even the vendor might not be able to tell you what's going on inside that "black box"; even they might not fully be able to keep track of what the algorithm is doing. So, if they don't know, and they're the ones who developed it, how can they feed that information through to you as the employer using this tool to make decisions that affect your employees? It's really important as the employer that you understand as much as possible about how the tool works, and that you get as much help as you can from the vendor in explaining that to the employees who might be affected by the decisions that are made.
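
One way an employer can probe a vendor's "black box" from the outside is to measure which inputs actually drive its outputs. Here is a minimal sketch using scikit-learn's permutation importance against a synthetic stand-in model; the feature names, data and model are all invented for illustration, not a real vendor tool or a complete explainability exercise.

```python
# A sketch of probing a "black box" model from the outside: permutation
# importance shuffles each input feature and measures how much the model's
# accuracy drops, hinting at which factors drive its decisions.
# The model and data here are synthetic stand-ins for a vendor-supplied tool.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Hypothetical candidate features: years_experience, qualification_score, cv_gap
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "shortlisted" label

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

features = ["years_experience", "qualification_score", "cv_gap"]
for name, score in zip(features, result.importances_mean):
    print(f"{name}: {score:.3f}")
# Unexpectedly high importance on a feature (or on a proxy for a protected
# characteristic) would be a prompt to go back to the vendor with questions.
```

This does not open the box, but it gives the employer evidence-based questions to put to the vendor, and material for the explanations owed to employees.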

JR

I think it's helpful to understand that data protection angle, and it's important to stress that the equalities law, the Equality Act 2010, absolutely applies regardless of AI tech. So, there's no alteration to the statutory rules around concepts such as less favourable treatment and reasonable adjustments. The decision will still be deemed to have been taken by the employer regardless of the tool they have used, and obviously that could put the employer in a difficult situation if, as Emma describes, the AI has somehow developed or progressed in a way that was unexpected. And I think, Emma, further in the data protection field, we've talked before about accuracy, which links in a little with what Sarah was saying around hallucinations, and you have some additional thinking on that.

EEF

Yeah, absolutely. And it ties in really well with what you've just been saying, Jonathan, about the Equality Act and the decisions that are being made by these tools. As the employer in that circumstance, you're the one who's responsible for making sure not only that the data about your employees going into the tool is accurate, but also that the decisions coming out are accurate as well; those accuracy obligations are a really key tenet of data protection law. So, when you're using these tools, you need to be really conscious of those obligations, and very conscious of the fact that, again, you don't have day-to-day control over what the tool is doing. Just as an interesting anecdote that's not directly employment or recruitment related: I read a story a while ago about an Australian mayor. The inhabitants of his town had run his name through ChatGPT. I can't remember his name; let's say it was John Smith. So, they'd asked ChatGPT "who is John Smith?", and ChatGPT had told them that he had been involved in a bribery scandal in a previous job, when in fact he had been the whistle-blower on that bribery scandal. That's really interesting in the context of what you were saying about hallucinations, Sarah, because that's clearly a circumstance where ChatGPT filled in a blank, but filled it in wrongly, and you can see, if these tools are being used as part of recruitment for example, how errors like that could have really significant impacts for people. So again, it's really important as an employer to be thinking about: what are the impacts if this tool does churn out an inaccurate decision? Have we got the resources to verify the output? And are we willing to take on responsibility for the impact of those potentially inaccurate decisions?

SM

Okay, and that makes a really interesting point, Emma, about the accuracy of the data that might be created within AI, and how that links into the reasonableness or fairness of the decisions made off the back of that data, because those concepts of reasonableness and fairness are ones we're used to grappling with on a daily basis as employment lawyers. But I wonder whether employers might find themselves in difficulties if they're having to justify the reasonableness or fairness of decisions based on data produced by generative AI. One of the things that occurs to me is that disability discrimination and the obligation to make reasonable adjustments might be triggered by an AI tool: the use of that tool might be considered a provision, criterion or practice, which means that employers are then required to make reasonable adjustments. I don't think there's been any case law on that point yet, but it occurs to me that this is something employers should be thinking about now, when they're considering what might amount to a provision, criterion or practice, and how they might tailor any information taken from an AI tool in order to remove any potential disadvantage to an employee.

JR

So, Sarah, when I think about that and the complexities it creates, one thing that occurs to me is that the AI tool might be used in conjunction with more human input or oversight in particularly difficult disability cases, and that might help to provide a balancing out or a different form of assessment. I'm not sure, Emma, if that creates data protection issues from your point of view.

EEF

Yeah, so there are definitely some data protection compliance points there as well. When you're using AI tools to make decisions on a wholly automated basis, there are actually very strict conditions for that: you have to be able to show that it's necessary for the purposes of a contract, or that you've got consent to do it. So the circumstances in which you can make decisions that have significant impacts for individuals purely using AI are actually pretty limited, and employers definitely need to be thinking about that in the context of the reasonable adjustments and fairness points that you and Sarah have just been talking about. And there are appeal rights as well. If decisions are made solely on an automated basis, then individuals, including employees, have rights to contest those decisions and to request human input. So you definitely need to be thinking, as you say, Jonathan, about the points in the process at which you can have a human assessment, and that has to be a meaningful human assessment as well; it's not just a rubber-stamping of an automated decision.
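
A minimal sketch of the human-in-the-loop gate Emma describes: decisions with a significant effect on an individual are never returned straight from the model, but queued for a named human reviewer who can overturn them. The structure, types and field names here are hypothetical, a design sketch rather than a prescribed compliance pattern.

```python
# Sketch of a human-in-the-loop gate: significant decisions are routed to a
# named human reviewer instead of being finalised automatically.
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    outcome: str       # e.g. "reject", "shortlist"
    significant: bool  # does this materially affect the individual?

review_queue: list[Decision] = []

def finalise(decision: Decision) -> "Decision | None":
    if decision.significant:
        review_queue.append(decision)  # route to a human reviewer
        return None                    # no automated final decision
    return decision

def human_review(decision: Decision, reviewer: str, agree: bool) -> Decision:
    # The reviewer must actively consider the case, not rubber-stamp it;
    # recording who reviewed it supports accountability.
    if not agree:
        decision.outcome = "escalate"
    print(f"Reviewed by {reviewer}: {decision}")
    return decision

# Usage: a significant decision is held back for review, not auto-finalised.
held = finalise(Decision(subject="candidate-42", outcome="reject", significant=True))
assert held is None and len(review_queue) == 1
human_review(review_queue.pop(), reviewer="HR manager", agree=False)
```

The key design point is that the automated path physically cannot emit a final significant decision; the review step is structural, not optional.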

SM

I'm really grateful to hear those points, Emma, because it's not something that we necessarily consider as part of our day-to-day work as employment lawyers, taking into account the data protection angle, and there are clearly several strands to this which need to be considered in relation to a workforce. But what seems to be a fairly consistent theme from today's discussion is that whilst AI has a lot of benefits in relation to productivity and being able to increase the turnover of work, there does seem to be a requirement to have a human in the loop at some point, working alongside the AI to try to offset those risks around accuracy, fairness and reasonableness of decision making. I wonder whether it would be useful for us to chat through some of the ways that listeners might think about limiting the risks around AI. One point that occurs to me is whether listeners should be thinking about their internal AI policy and what their governance framework looks like around AI. By which I mean thinking about who's going to be responsible for making decisions that are assisted by AI, what risk assessments might be needed, what you might need to do to adjust any existing processes which might be affected by AI technology, and all the data protection risks and compliance around that.
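
To make Sarah's governance point concrete, here is a hypothetical sketch of the kind of record an internal AI register might keep for each tool: a named owner, the assessments completed and a review date. The field names and example values are illustrative only, not a prescribed standard.

```python
# Sketch of one entry in an internal AI tool register, capturing ownership,
# completed risk assessments and review dates. Illustrative fields only.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRegisterEntry:
    tool_name: str
    business_owner: str               # accountable decision-maker
    use_case: str
    dpia_completed: bool              # data protection impact assessment
    equality_impact_reviewed: bool    # discrimination / Equality Act review
    human_review_point: str           # where a human checks the output
    next_review: date

entry = AIToolRegisterEntry(
    tool_name="CV screening assistant",
    business_owner="Head of Recruitment",
    use_case="Initial sift of applications",
    dpia_completed=True,
    equality_impact_reviewed=True,
    human_review_point="Shortlist sign-off by hiring manager",
    next_review=date(2024, 3, 1),  # hypothetical date
)
print(entry)
```

A register like this gives the governance framework Sarah mentions a concrete anchor: every tool has a named owner, evidence that assessments happened, and a date by which it will be looked at again.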

EEF

That's absolutely one of the number one things that I think organisations need to be thinking about: where this new technology, and I use "new" in a loose sense of the word because, as you said, AI isn't new, but where this rapidly expanding technology fits into their existing processes. And I think another thing that is really important is training and guidance. It's all well and good having things written down in your policies, but if you're not training those out to employees, then the documents themselves are not as useful as they could be. So, certainly look at what you need to be telling employees about the circumstances in which they can use AI. Where AI tools are intended to be used in a way that affects staff, then staff need to be told; as you said, transparency is really important. Also think about consulting them, getting views from your staff as to how they would feel about the use of AI in particular circumstances and, where you can, making sure that people have an informed choice about whether they engage with AI tools.

JR

Whilst I can see employers taking responsibility, and perhaps enlightened ones getting staff in a room, understanding the approaches of people at different grades, ages and seniorities, and consulting with them, of course the question then arises: well, what's the government doing about it? Is there any input, consultation or legislation to look out for in this regard? I think, Sarah, you've been keeping quite a keen eye on this over the last little while.

SM

The headline is that there is no overarching, specific AI legislation in the pipeline, and the government is not proposing to introduce a one-size-fits-all single piece of AI guidance or code of conduct for employers. What the government has said in a white paper, which it published back in 2022, is that there are risks around AI; it acknowledges those mainly in relation to privacy and the potential for discrimination to creep in. But whilst the government has rowed back from taking a legislative approach, it has said that it is planning to introduce a pro-innovation framework that will be principles-based, and that it will be working with individual regulators on curbs and guidance on AI.

EEF

Just following on from that, Rishi Sunak has announced that the UK is going to host a global AI safety summit in November of this year, so it will be really interesting to see whether that changes the UK's approach. You mentioned the quite light-touch approach that the UK is currently proposing; there have certainly been some MPs who have indicated that they think we possibly need a more robust approach. And I think the other thing we need to bear in mind in the UK is that there are lots of other proposals globally for AI regulation, some of which are very different from the approach the UK has currently proposed. So, for example, the EU has gone to the other end of the spectrum and proposed a very prescriptive, very detailed AI Act, which imposes some quite stringent obligations on certain types of AI. Canada and China have come out with regulatory and legislative proposals as well. The US is starting to come out with frameworks too, albeit again not so much on a legislative footing. There are proposals coming out everywhere, as you can imagine. So, for organisations that operate on a multinational level, or that might be implementing AI tools that affect people globally, if you have employees all over the world, then you potentially need to be aware of all of those regulatory frameworks, and to think about how your organisation's approach to AI ticks all the boxes and complies with all of those different frameworks, and that in itself is potentially a challenge as well.

JR

Yeah, it does generate an awful lot of wider and bigger questions. For example, we understand that some global organisations have signed up to open letters setting out their views on AI. Some of them might, for example, be protecting their competitive position and take a view on AI because they don't want new businesses coming into their marketplace. Definitely one to think on in terms of the different approaches to legislation across different jurisdictions. Thanks for that, Emma.

SM

So, we've talked quite a lot about the employer side of things, in terms of employers' use of AI to assist with management tools and perhaps increase productivity. But I wonder, Jonathan, if you could talk a bit about how employers deal with employees using generative AI, maybe to assist them at work: using ChatGPT to write code, or, like the judge you mentioned, summarising a complex topic, or even maybe writing the draft of a script for a podcast.

JR

Well, we've talked about this quite a bit, Sarah. I suppose it can be used as a productivity tool like any other, like an extreme version of a spell checker back in the day. There might be some form of carve-out where the AI tools are uniquely focused on things like tests, assessments, perhaps even promotion decisions. But it's going to be a business decision for individual employers, factoring in all of the risks we've identified, and possibly larger organisations having to take a jurisdiction-by-jurisdiction approach. I should put it out there that TLT employees aren't using ChatGPT, but you can see that organisations will be keeping this under review and possibly consulting with clients, customers, suppliers and even competitors to understand, for example within the legal marketplace, whether this is something that as a profession we need to look at and legislate for ourselves.

EEF

Yeah, that's absolutely right, Jonathan. As you say, it is an individual business decision, and I think organisations need to make that decision taking into account their own risk profile, their own sector, any regulation they might be subject to and their own risk appetite, really. And if you do make a decision to allow employees to use ChatGPT or other generative AI tools as part of their work, then that risk assessment piece is going to be really important, as is having a policy in place setting out the scope of use: how employees are allowed to use those tools, with guidelines around appropriate use as well. There was a tech organisation that landed in hot water in the news a while ago because some of their employees had put secret code into ChatGPT and had inadvertently given ChatGPT trade secrets that it then used as part of its wider training. So that appropriate use is really important: telling employees what they can and can't do and what they can and can't put into those tools, and thinking about transparency and record keeping as well, and what you're going to require employees to do in terms of tracking and documenting their use of AI for business purposes. And finally, if employees don't comply with all of those requirements, what sanctions are going to be in place? What does non-compliance lead to in terms of the impact for the employees who haven't complied?
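
A minimal sketch of the kind of guardrail Emma describes: a pre-submission check that blocks obviously confidential material from being sent to an external generative AI tool, and logs every prompt for audit. The patterns and messages are illustrative placeholders; real data-loss-prevention tooling is far more sophisticated.

```python
# Sketch of a pre-submission filter for prompts bound for an external
# generative AI tool: block obviously confidential material and log
# every attempt for audit. Patterns below are illustrative placeholders.
import re
import logging

logging.basicConfig(level=logging.INFO)

BLOCKED_PATTERNS = [
    r"(?i)confidential",
    r"(?i)trade\s+secret",
    r"\b\d{9}\b",  # e.g. a hypothetical internal employee ID format
]

def safe_to_submit(prompt: str, user: str) -> bool:
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, prompt):
            logging.warning("Blocked prompt from %s (matched %s)", user, pattern)
            return False
    logging.info("Prompt from %s logged for audit", user)
    return True

print(safe_to_submit("Summarise this CONFIDENTIAL source code", "jsmith"))  # False
```

Even a crude filter like this does two of the things Emma lists: it enforces what can and can't be put into the tool, and it creates the record keeping needed to track business use of AI.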

SM

Those are some really excellent points, Emma, and we appreciate that some people might be listening to this while they're driving their car and doing other things and not able to make a note.  So, what we'll do is send out a written briefing alongside the podcast, setting out some of those points and some more detail about what employers might want to think about if they're putting together a generative AI policy.

JR

 

All that's left for me to say is thanks very much from myself, Sarah and Emma for listening. If you're looking for tech or automation solutions, then please get in touch with our Future Law team, and if you're enjoying the podcast, please do rate us and follow us and tell your teams and networks. If there's a topic or a question you'd like us to cover, you can email us at EMPLawPodcast@TLT.com, and you can find us on Twitter at TLT_employment, or use the hashtag #TLTEmploymentPodcast.

The information in this podcast is for general guidance only and represents our understanding of the relevant law and practice at the time of recording.  We recommend you seek specific advice for specific cases.   Please visit our website for our full terms and conditions.