Regulating the use of generative AI in the workplace

Does your workplace have an AI Policy? If not, it’s time to start thinking about implementing one. Below are some factors you should consider in developing guidelines for the responsible use of AI.

Andy Thompson

14 March 2024

6 minute read

Note: This article has been written to provide general guidance only. It does not constitute legal advice. Varying legal requirements exist in different jurisdictions and laws and regulations governing AI, privacy and cybersecurity are in a state of flux. It is therefore recommended that organisations seek specific legal advice tailored to their particular circumstances.

It’s no secret that AI has revolutionised the way we work. At the time of writing, the global AI market was worth nearly $200 billion, up from $95.6 billion in 2021. Generative AI tools like ChatGPT and Google Gemini can deliver significant benefits – including increased productivity, creativity and innovation. But they also come with serious risks.
Mishandling sensitive data or misuse of AI tools can have severe consequences, including:

  • Breaches of confidentiality: Inadvertent sharing of data could lead to reputational harm and legal ramifications.
  • Loss of intellectual property: Utilising AI to generate potentially derivative content may weaken your organisation’s legal standing regarding intellectual property.
  • Security risks: Exposing internal data to third-party models can increase cybersecurity vulnerabilities.

So it’s critical to provide guardrails to ensure that individuals within your organisation use these tools responsibly – not only to mitigate the risk to your own organisation, but also to protect the customers and clients you serve.

What is generative AI?

Generative AI refers to artificial intelligence systems that generate new data and content in response to prompts from an external source. This is distinct from conventional AI systems, which are designed to recognise patterns in existing data. Generative AI can be used to create a range of content types, including text, images, software code and music. Generative AI systems are trained on data scraped from sources such as books, articles, images, web pages and code repositories.

Key factors to consider 

The following is a non-exhaustive outline of factors that should be considered when formulating a policy around the use of generative AI in your workplace. 

Data sensitivity 

A recent report by cloud security company Menlo Security Inc. found that an alarming 55 percent of all generative AI inputs contain sensitive and personally identifiable information. Without solid anonymisation measures in place, data entered into an AI tool may become part of the tool’s training dataset, which means it can potentially be re-exposed in response to future user queries. And unlike conventional applications, AI tools process and store data in ways that leave no easy means of ‘deleting’ or removing sensitive information once it has been submitted.

Caution should therefore be exercised when interacting with any generative AI tool, and users should avoid submitting the following types of information (a simple pre-submission check is sketched after the list):

  • Personally identifiable information (PII), such as email addresses, phone numbers or birthdates
  • Confidential company information (e.g. sales figures, financial data, intellectual property, trade secrets)
  • Information protected by non-disclosure agreements (NDAs).
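
As a minimal sketch of how this guidance could be supported in practice, the snippet below shows a simple pre-submission filter that flags common PII patterns before a prompt leaves the organisation. The pattern names and regular expressions are illustrative assumptions only – a real safeguard would use a dedicated data loss prevention (DLP) or PII-detection tool rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- real PII detection needs a dedicated
# DLP or PII-detection library, not a handful of regular expressions.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "date of birth": re.compile(r"\b\d{1,2}[/-]\d{1,2}[/-]\d{2,4}\b"),
}

def flag_possible_pii(prompt: str) -> list[str]:
    """Return the PII categories that appear to be present in the prompt."""
    return [label for label, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

# Example: warn the user before the prompt is sent to an external AI tool.
prompt = "Draft a follow-up email to jane.doe@example.com about our Q3 sales figures."
findings = flag_possible_pii(prompt)
if findings:
    print("Warning: prompt may contain " + ", ".join(findings) + " -- review before submitting.")
```

A check like this only catches the obvious cases; it complements, rather than replaces, user training and clear policy rules.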

Privacy and security

AI and machine learning (ML) algorithms rely heavily on user data, raising concerns about privacy and surveillance. With the increasing volume of personal information being collected, there is a risk of misuse or unauthorised access. Moreover, the use of AI in surveillance technologies can infringe upon individuals’ right to privacy. Any data breach involving personal information could have serious and far-reaching consequences.

So wherever possible, you should prioritise tools offered by trusted vendors with robust privacy and security practices. Before anyone in your organisation engages with an AI vendor, they should research the vendor’s track record and reputation, avoiding any that have a history of security breaches or unethical practices. It’s also important to ensure the vendor complies with any laws or regulations applicable in your region.

It’s a good sign if the vendor proactively publishes information on its own use of AI and responsible AI practices. See, for example, the ‘Responsible AI’ tab on CMS vendor Kontent.ai’s page on Trust and Governance.

Intellectual property

Intellectual property (IP) rights legally protect ‘creations of the mind’, including text, images, software code and music, to name a few. Generative AI presents an exciting opportunity for creators, but it also opens up the potential for widespread infringement of IP rights. To mitigate this risk as a user, it’s essential to avoid tools that might infringe existing IP rights.

Before signing up to use a generative AI tool, the user should review the platform’s terms and conditions to confirm either that the vendor uses legally sourced materials, including datasets and pretrained models, or that the vendor indemnifies users against third-party infringement claims arising from use of the tool.

Fact-checking and critical thinking

The quality of the output from a generative AI tool is limited by the quality of its training data. Low-quality or outdated data can lead to inaccurate results and biases. In addition, imperfect algorithms can lead to what are known as AI ‘hallucinations’, where the tool unwittingly produces misinformation by recognising patterns and word associations without any real grasp of the meaning behind them. If incorrect information matches a pattern, the AI may present it as factual – often in an alarmingly convincing way. This can lead to the information being passed on by the user, further reinforcing the inaccuracy as ‘fact’.

For these reasons, AI output should never be treated as infallible. Avoiding the trap of AI hallucinations or biases again comes back to seeking out transparent, credible AI platforms with diverse and representative training data.

Users should also carefully review all AI-generated text and be mindful that AI models can produce errors or biases. Prompting the AI tool for further clarification and verification will also help to minimise hallucinations (a sketch of this kind of verification pass follows below). Most importantly, though, information should always be verified against credible sources before it is acted upon.
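
To make the verification step concrete, here is a minimal sketch of a two-pass workflow in which the model is asked to critique its own draft and the result is always routed to a human for checking. query_model is a hypothetical stand-in for whichever vendor client library your organisation uses; the prompts are illustrative assumptions, not a tested recipe.

```python
def query_model(prompt: str) -> str:
    """Hypothetical placeholder for a call to your organisation's approved AI tool."""
    raise NotImplementedError("Replace with your vendor's client library.")

def answer_with_verification(question: str) -> dict:
    # First pass: obtain an initial draft answer.
    draft = query_model(question)

    # Second pass: ask the model to identify claims in its own draft
    # that should be checked against credible sources before use.
    claims = query_model(
        "List any factual claims in the following answer that should be "
        "verified against credible sources before it is relied upon:\n\n" + draft
    )

    # The output is never treated as final: it is returned for human review.
    return {"draft": draft, "claims_to_verify": claims, "human_verified": False}
```

The human_verified flag is the point: nothing leaves this workflow until a person has checked the flagged claims against credible sources.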

Indirect use of AI via third parties

Responsible AI practices should also be considered when selecting third-party vendors or suppliers. Many of the tools and platforms that organisations use in their day-to-day operations also use AI in the background, potentially exposing your organisation to risk if those vendors don’t have sound AI policies and processes in place themselves. In the context of a digital agency, for example, this might include CMS providers, productivity tools, coding environments or design tools. To minimise this risk, apply the same principles when evaluating third-party providers as you would to direct providers of AI services.
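
As a rough illustration, those evaluation questions can be captured in a reusable checklist applied to every vendor, whether they provide AI directly or use it behind the scenes. The criteria below are drawn from the considerations discussed in this article; the structure and scoring are assumptions for illustration only.

```python
# Illustrative vendor-evaluation checklist based on the criteria in this article.
VENDOR_CHECKLIST = [
    "Clean track record (no history of security breaches or unethical practices)",
    "Complies with the laws and regulations applicable in your region",
    "Publishes information on its own use of AI and responsible AI practices",
    "Uses legally sourced training materials, or indemnifies users against IP claims",
    "Has sound AI policies and processes for any AI used in the background",
]

def evaluate_vendor(name: str, answers: list[bool]) -> None:
    """Print a simple summary of a vendor against the checklist."""
    for criterion, met in zip(VENDOR_CHECKLIST, answers):
        print(f"[{'x' if met else ' '}] {criterion}")
    print(f"{name}: {sum(answers)}/{len(VENDOR_CHECKLIST)} criteria met")

# Hypothetical example vendor and answers.
evaluate_vendor("ExampleVendor", [True, True, False, True, True])
```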

Schedule periodic reviews of your AI policy

An AI policy is not a ‘set and forget’ document. The world of generative AI is constantly changing, as are the laws and regulations that govern it. Treat your AI policy as a living, breathing document: schedule time to review it on a regular basis, taking into account any relevant legal or policy developments.

Appointing an AI ‘gatekeeper’

If you are fortunate enough to have someone in your organisation with the expertise and willingness to provide guidance on generative AI questions, appoint them as your AI ‘gatekeeper’. This person (or team, if you have the resources) should be responsible for keeping abreast of relevant legal and regulatory changes, should act as the ‘go-to’ for questions or uncertainty about specific AI use cases, and should serve as the reporting point for any suspected misuse of AI.
