Is Generative AI Putting My Company At Risk?

A couple of months ago, I posted a poll on LinkedIn. The question I posted was:

“Does your company currently have a policy in place that outlines the use of AI tools in the workplace?”

The results were quite interesting: 2,554 people voted, with the breakdown of responses shown below.

Having a background in data analysis, I’ve spent hours since the poll closed reviewing the results (if you work or have worked in data, you’ll understand). What stands out to me, looking at this data from a governance standpoint, is that 71% of respondents work for companies that have not addressed, and are not addressing, the use of generative AI. I believe this is a mistake, and in today’s article, I’m going to tell you why.

The increasing use of generative AI tools

Unless you work solely in a physical trade, you have seen or will see AI tools slowly becoming more readily available to assist you in the workplace. Companies that currently dominate the software markets are investing billions in AI technology and what we’ve seen so far is just the beginning.

I’ve spoken to a lot of people who have very valid concerns about AI taking over their jobs. While I don’t think AI will eliminate job categories completely, the way in which we do our jobs is going to change, and anyone who doesn’t stay up to date with the latest methods is going to find themselves falling behind. AI will disrupt existing business processes in much the same way that spreadsheets disrupted the finance industry.

Allowing the use of AI tools in the workplace is going to have to be a decision that every company makes strategically and intentionally. Unlike other software tools, there are significant risks in letting usage grow organically.

What are the risks of using generative AI tools?

Generative AI is one of the most powerful tools you will encounter in your career. It has the potential to elevate your work to new heights or to destroy your career. It all depends on how you use it.

If that last statement sounds ominous, it was meant to be. Generative AI learns from and uses data that is provided to it. That includes any information it finds on the internet. If it’s available publicly, AI will be consuming it and using it to improve. While there is no real risk in learning from all available information, at some point it will be required to use that information when performing a task and that is where you can run into trouble.

While this list is by no means complete, here are three of the biggest risks you will encounter unless you put strict guidelines in place:

Information security and privacy.

In order for AI tools to analyze data, or use data in any way to perform a task or create a result, they need access to that data. That data might then be available and used when performing similar tasks for your competition, your clients, or the latest scam. You have an obligation to protect client and company data, and you need guidelines in place to do this; without them, your company is at risk of legal complications.

Accuracy.

Remember when I said AI consumes and uses the information it finds on the internet? That includes any misinformation and disinformation that exists there. Just like people, if AI reads something enough times, it will start to believe it’s the truth. Companies have an obligation to their clients and customers to put out accurate and truthful information, and employees have that same obligation to the company. If AI tools have the potential to generate inaccurate information, it’s the company’s responsibility to ensure that all of its employees know how to produce accurate and truthful results. You need guidelines in place to do this.

Plagiarism and Copyright Violation.

This, in my opinion, is the biggest risk you are going to face with the use of AI tools. AI only does what it’s told to do. If you ask it to generate a social post, write a blog post, generate an image, or do hundreds of other tasks it’s capable of doing, it will use what it has learnt to do that. What it won’t do is run a check to make sure that no copyright laws have been violated. While you can certainly include instructions in your prompt telling your AI tool not to violate any copyright and to cite any quoted text, you cannot and should not completely trust what is generated. Why, you ask? Refer to the previous point on accuracy. In order to check the output for plagiarism and copyright, employees need to know what tools to use and how to use them. You need guidelines in place to do this.

What guidelines need to be in place?

Now that you’re sufficiently concerned that AI is going to destroy your life, let’s look at how you can implement policies around the use of generative AI tools so that you don’t encounter the issues we just talked about.

Update your privacy documents.

The first thing you need to do is update your existing privacy documents. Anything that references the data your clients and customers provide needs to include a statement on AI usage. Whether you use AI tools or not, you owe it to your customers to tell them what your policy is. They have a right to know what, if anything, you are doing with the information they provide.

Create an AI Acceptable Use Policy for all employees and contractors.

This is where you will have to do a bit of work, and I suspect this is the real reason behind the 71% from the original poll. Legal documents are expensive, and the lack of samples and specific legal expertise in this particular area makes it difficult to even know where to start. You need to start somewhere, however, and my recommendation would be to create a contract that employees and contractors are bound to.

While you will still need to consult a legal expert to make sure you are properly protected, I decided to try an experiment and asked ChatGPT to generate an AI Acceptable Use Policy. I think it did a pretty good job. I’m happy to make this document available to anyone who would like to use it as a starting point for creating their own.

Here is a sample of what it produced:

Policy Statement

The company recognizes the benefits of using AI tools to enhance productivity, creativity, and innovation. However, the company also acknowledges the challenges and risks that AI tools may pose to data privacy, information security, intellectual property rights, ethical standards, and regulatory compliance. Therefore, the company expects all employees to use AI tools in accordance with the following principles and requirements:

  • Evaluation of AI tools:
    - Employees must evaluate the quality, reliability, and security of any AI tool before using it for work-related purposes.
    - This includes reviewing the tool’s features, functionality, accuracy, limitations, terms of service, privacy policy, and data protection practices.
    - Employees must also check the reputation and credibility of the tool provider and any third-party services integrated with the tool.
  • Authorization of AI tools:
    - Employees must obtain prior approval from their manager or the relevant department before using any AI tool that involves sharing or processing company or customer data.
    - Employees must also comply with any internal processes or guidelines for requesting and using AI tools.
  • Protection of confidential data:
    - Employees must not upload or share any data that is confidential, proprietary, sensitive, or regulated without proper authorization and encryption.
    - This includes data related to customers, employees, partners, suppliers, products, services, contracts, finances, strategies, or plans.
    - Employees must also ensure that any data shared or processed by an AI tool is deleted or anonymized after use.
  • Transparency of use:
    - Employees must inform their manager or the relevant department of the intended use and purpose of any AI tool.
    - Employees must also disclose the use of any AI tool to any internal or external stakeholders who may be affected by or interested in the output or outcome of the tool.
    - Employees must not misrepresent or conceal the use of any AI tool or its results.
  • Respect for intellectual property rights:
    - Employees must respect the intellectual property rights of the company and third parties when using AI tools.
    - Employees must not use any AI tool to copy, reproduce, modify, distribute, or publish any content or material that is protected by copyright, trademark, patent, or trade secret laws without proper permission or attribution.
    - Employees must also ensure that any content or material generated by an AI tool does not infringe on the intellectual property rights of others.
  • Adherence to ethical standards:
    - Employees must adhere to the company’s code of conduct and ethical standards when using AI tools.
    - Employees must not use any AI tool to create or disseminate any content or material that is false, misleading, offensive, discriminatory, harassing, defamatory, or illegal.
    - Employees must also ensure that any content or material generated by an AI tool is consistent with the company’s values and goals.

Final Thoughts.

Implementing policies and procedures around new technology like generative AI can be expensive and very time-consuming. Hopefully, this article has shed some light on why it’s necessary regardless of the time and expense involved, and has given you at least a starting point on what needs to be done.

AI tools are going to change the way you work and will have a dramatic impact on your business. Ensuring that impact is a positive one starts with having proper guidelines in place.

If you would like the full AI Acceptable Use Policy document generated by ChatGPT or need any assistance in creating and implementing AI governance, feel free to contact me.
