I, Robot

The use of AI is without doubt on the rise, with more people than ever turning to it for advice on everything from what to have for dinner to, perhaps unfortunately, “should I sue my employer?” While AI can be an incredibly useful tool, it is not infallible, as demonstrated in the now infamous tax case in which it hallucinated case law in support of an individual’s tax dispute that, after multiple lawyers had spent considerable time searching for the judgments, turned out not to exist! Its limitations may be clear, but that does not mean employees will necessarily appreciate them, especially when their pal ChatGPT is bolstering their suspicions that they are being hard done by at work. So, if an employer receives a complaint that has the hallmarks of having been drafted by AI, what should it do?

Open up a conversation 

Firstly, if an employee raises, for example, a grievance or a request for reasonable adjustments that has the whiff of automaton about it, it should not be dismissed out of hand. Regardless of who (or what) drafted the correspondence (and its legal accuracy, or lack thereof!), it should still be dealt with in accordance with the usual procedures. The best approach is likely to be to meet with the employee to discuss the contents of the correspondence and to raise with them any areas which appear to be factually or legally incorrect. This should be approached in a diplomatic and open manner, leaving room for constructive conversation rather than creating a stand-off in which both parties become entrenched in their own positions. Ideally, acknowledge that it is understandable that the employee may have sought advice from AI, while also helping them understand that it has limitations and may be giving them a false impression of their legal position.

Redirect staff to credible sources 

If part of the complaint/request is easily demonstrable as incorrect, it may be helpful to show the employee why that is, for example by referring them to a page on the Acas website. While no employer wants to be giving their employees advice on how better to structure a claim against them, it will save everyone time (and no doubt money) in the long run if legal inaccuracies are cleared up at the outset. It may be useful to adjourn the meeting so the employee can go away and do further research of their own on the points raised, then reconvene once they have had the opportunity to do so. This may help them to focus on the relevant, legally sound points they have made and remove those that are muddying the waters due to inaccuracies.

Be alert to AI’s potential 

Employees may, of course, go further than just asking for help with drafting correspondence, asking AI to draft claims for them or to advise on their prospects of success or likely compensation. This can be more troublesome: it can require employers to draft lengthy responses to claims which have no legal basis, or leave employees with completely unrealistic expectations as to the likely compensation were their claims to succeed, making sensible settlement discussions impossible. As frustrating as it may be, these types of claims will have to be dealt with in the usual manner, perhaps with additional consideration given to issuing a costs warning!

While AI is incredibly useful in many ways, blind reliance on it can cause headaches for employers. Being on the lookout for the telltale signs of AI involvement, and knowing how to address inaccuracies in a calm and measured manner, will hopefully nip any robot-related problems in the bud. If all else fails, and as frustrating as it may be, deal with any AI-constructed complaints in the same way as those drafted by us mere mortals, albeit possibly with a little help in the form of, “Alexa, play relaxing music!”