
By all means, use AI at work, but be mindful of the thorny legal issues that can arise

Aug 15, 2023

Depending on how the technology is harnessed at the workplace, AI could give rise to a myriad of legal pitfalls for the unsuspecting.

“AI will probably lead to the end of the world,” proclaimed Sam Altman. This is a bold and cataclysmic prediction, coming from the driving force behind OpenAI and its ground-breaking creation, ChatGPT.

His sentiments are broadly echoed by Geoffrey Hinton, widely considered to be the godfather of artificial intelligence (AI), who mentioned that he was concerned about the “existential risk” of what happens when AI becomes more intelligent than humans.

While the jury is still very much out on whether we are living in the end-times, it cannot be denied that AI is having a transformational impact on the way we work.

As AI continues to develop and – dare we say – become more intelligent, it will inevitably reshape the foundations of how companies conduct business and manage their workforce. Depending on how the technology is harnessed, this could give rise to a myriad of legal pitfalls for the unsuspecting. In this commentary, I highlight, non-exhaustively, three legal issues which employers and employees should be mindful of.

Legal dilemma 1: Who takes (legal) responsibility for the decisions AI makes?

One of the most immediate effects of AI in the workplace is its role in automating and optimising processes.

AI-powered tools like WebHR, Workable and Talkpush have already begun streamlining numerous Human Resource (HR) processes, from talent acquisition and onboarding to payroll management and performance evaluations.

This increased efficiency leads to reduced administrative burdens, enabling HR professionals to focus on more strategic tasks, such as workforce development, business needs and employee well-being.

However, this also raises important legal considerations. As processes become automated, the issue of ownership and responsibility for decision-making becomes more blurred.

In the context of talent acquisition, in particular, the use of software to automate CV screening and candidate matching drastically reduces the time recruiters need to spend sifting through resumes. However, AI algorithms could produce perverse results if they have the unintended effect of amplifying bias, such as by favouring persons with certain accents or facial features.

In case you think this is far-fetched, in October 2019, researchers found that an algorithm used on more than 200 million people in US hospitals to predict which patients would likely need extra medical care heavily favoured white patients over black patients. While race itself was not a variable used in the algorithm, another variable highly correlated to race was, which was healthcare cost history.

Who is responsible if an AI algorithm makes a discriminatory hiring decision?

It is not unreasonable to anticipate that hiring managers would point their fingers at HR professionals, who are likely to direct responsibility to their business services colleagues responsible for the software, who are in turn likely to deflect blame onto the hiring criteria given to them by the hiring manager in the first place.

Clear guidelines and regulations must therefore be established to address such ethical concerns and ensure compliance with anti-discrimination laws, such as Singapore’s upcoming Workplace Fairness Act.

Legal dilemma 2: At which point is it problematic for employers to harvest data for surveillance?

AI’s ability to process and analyse vast amounts of data is upturning existing workplace practices.

Predictive analytics can help organisations make informed decisions about workforce planning, identify potential areas of concern, and improve employee engagement. AI algorithms can flag trends related to employee turnover, workplace satisfaction and performance, allowing employers to take proactive measures to address issues before they escalate.

AI-equipped tools also enable sophisticated workplace monitoring and surveillance. From tracking employee activities and behaviour to monitoring digital communications, AI-powered systems can offer employers unprecedented insights into their workforce.

However, as Uncle Ben told Peter, with great power comes great responsibility. And in this regard, data privacy is of paramount concern – and for good reason. In Singapore, the Personal Data Protection Act 2012 places the responsibility of safeguarding personal data on the organisation which has custody of it, very often the company, and requires it to make reasonable security arrangements to prevent any unauthorised access or loss. The Personal Data Protection Commission has the power to impose penalties for data breaches.

To the extent the personal data is stored in servers located elsewhere, other regulations with even more stringent requirements may apply. For example, the European Union’s General Data Protection Regulation (GDPR) not only requires data processors to safeguard personal data, but also requires data controllers to implement appropriate measures to ensure that data is processed in accordance with the requirements of the GDPR.

And closer to home, China’s newly enacted Personal Information Protection Law has special rules around the processing of sensitive personal information, and further prohibits organisations from processing or disclosing personal information in any way that is contrary to national security or public interest.

These are potential minefields that organisations must be aware of to prevent inadvertent violations and breaches.

In respect of workplace surveillance, in particular, this also raises concerns about employee privacy and individual autonomy. It isn’t a stretch of the imagination to draw parallels to the dystopian future contemplated by George Orwell in his seminal novel 1984, where Big Brother uses advanced technology to monitor and control the thoughts and actions of the citizens of Oceania.

Clear boundaries are required to prevent the potential misuse of surveillance data, and any downstream discrimination, harassment or unfair treatment of any individual.

Legal dilemma 3: How to ensure AI is not breaching intellectual property (IP) rights?

Another area where AI is having a transformative effect is in the creative process. Of late, AI is playing an increasingly significant role as a creative tool in architecture, science, music and, in particular, the arts. Generative AI systems such as DALL-E 2 are able to create realistic images and art from simple natural-language descriptions. You can tell DALL-E 2 to create an impressionist oil painting of sunflowers in a purple vase in the style of Pissarro, and it will do so.

This inevitably raises questions around both the ownership and authorship of any output generated by AI-equipped systems, as well as the risk of IP misappropriation. It is well known that AI is largely data-driven: its machine learning algorithms scrape large quantities of data from numerous databases.

We’re already seeing this in the lawsuit brought by authors Mona Awad and Paul Tremblay, who allege that OpenAI breached copyright laws by training ChatGPT on their novels without their permission. They point to the fact that ChatGPT was able to generate very accurate summaries of their works.

There are also ethical and moral issues to consider. Using generative AI to replicate artistic styles or genres that mimic the works of the old masters raises questions around authenticity and the integrity of artists today. As Ed Sheeran will tell you following his copyright trial over Marvin Gaye’s Let’s Get It On – a suit brought by the heirs of the song’s co-writer, Ed Townsend – this is a very slippery slope, and one from which there is no guarantee of financial recompense even if you prevail at trial. Let’s get it on, then.

Within the corporate sphere, AI has the propensity to open up a world of possibilities, sometimes with adverse consequences. At one end of the spectrum, employees could use generative AI as a benign tool for developing eye-catching spreadsheets and pithy summaries of reports. At the other, indolent or unscrupulous employees might seek shortcuts by having AI draft, or even plagiarise, entire reports for them.

The pressing need for guardrails

If, like Altman and Hinton, you fear for the future of mankind and the impending war with the machines, then consider the New Zealand supermarket chain Pak ‘n’ Save. Its AI-powered meal planner recently recommended recipes to customers for, among other things, poison bread sandwiches, mosquito-repellent roast potatoes and an Oreo vegetable stir-fry.

Or consider the two hapless New York lawyers who landed themselves in hot water by relying on ChatGPT for their legal research. They submitted a legal brief that cited several cases, purportedly in support of their arguments. It later transpired that the cases were entirely fictional, having been made up by ChatGPT.

It is clear that, for all the power and potential AI has, this nascent technology is still prone to failure, sometimes with hilarious results.

That being said, more so than ever, guardrails are needed to prevent this technology from being abused or even weaponised as it develops.

The European Union has taken the lead by formulating the EU AI Act, the world’s first comprehensive legal framework for AI. The Act will establish obligations for providers and users depending on the AI risk level. It is likely only a matter of time before other leading jurisdictions follow suit, and organisations must be prepared for the ensuing regulation – after all, nobody wants to eat turpentine methanol French toast, surely?

Clarence Ding is a partner and the head of the Singapore employment practice at international law firm Simmons & Simmons. This commentary reflects his personal views and is not intended to constitute legal advice.

Source: Business Times © SPH Media Limited. Permission required for reproduction.
