
How to deal with the risks of using AI in your company

The speed at which we are adopting new AI tools in the workplace is incredibly high. The adoption curve of generative AI in particular is one of the steepest we have ever seen for a new technology. Because new AI products become available so quickly, as a decision-maker you might get over-excited or experience FOMO and overlook the risks for your company, your team and your customers.


For example, a great use of new AI tools is automated transcription of your meetings. You can let AI take notes, record action items and create summaries, even when you are not attending the meeting yourself. All your meetings can have full transcripts, so you can always search through them to find out what was agreed upon in a particular meeting. Great. Now why not use those transcripts and let AI do fraud detection? All digital meetings can be recorded and analyzed. If employees discuss something questionable, you get a notification and can check exactly what they said. This is not a fictional example: Microsoft already offers this service as part of its Copilot for Security. The example Microsoft uses is about potential stock fraud. And yes, you want to prevent that, but what else could be monitored this way?
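To make the transcription part of that workflow concrete, here is a minimal sketch. It assumes the open-source openai-whisper package and locally stored recordings; the file names and the simple keyword search are illustrative placeholders, not a specific product.

    # Transcribe meeting recordings and search the resulting transcripts.
    # Assumes the open-source "openai-whisper" package; file names are examples.
    import whisper

    model = whisper.load_model("base")  # small, general-purpose speech model

    def transcribe_meetings(audio_files):
        """Return a dict mapping each recording to its full transcript text."""
        return {path: model.transcribe(path)["text"] for path in audio_files}

    def search_transcripts(transcripts, phrase):
        """List the meetings in which a given word or phrase was mentioned."""
        phrase = phrase.lower()
        return [path for path, text in transcripts.items() if phrase in text.lower()]

    if __name__ == "__main__":
        transcripts = transcribe_meetings(["weekly_meeting.mp3"])
        print(search_transcripts(transcripts, "budget"))

Commercial tools add speaker identification, summaries and action items on top of this, but the underlying principle is the same: everything said in a meeting ends up as searchable text.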

Unexpected side effects of using AI in your company

The example above might at first sound interesting from a business or legal perspective, but how will it affect your team in the long run? If your team members know that everything they say is being recorded and analyzed, they might change the way they interact. Will they still feel free to make a joke or use sarcasm during a meeting? Can they still build good relationships, or will they avoid personal topics so that their private lives don't end up in the transcripts? Will they become afraid to criticize a new (bad) company policy? If personal relationships between team members disappear and people no longer feel free to speak their minds, how will the team perform?

This is an extreme example, but a realistic one: the product is already here. And there are plenty of other risks that can affect your company. AI-generated content is often still inferior to content created by experts or creative minds, and factual mistakes or racial and gender stereotypes in generated content might harm your reputation. Using AI to replace customer service might save costs, but does it really increase the speed and quality of support for your customers, or does it just throw up another barrier that discourages contact? Using AI productivity tools often means sharing (confidential) data with those tools. Some tools can use your data to train their own AI models, which creates a (theoretical) chance that other users, or even competitors, gain access to your data.

Basic rules for using AI tools

At SST we, of course, use AI tools too. Our software engineers get help from an AI-powered programming copilot, we use generative AI to create content, and we are actively looking into ways to improve our processes with the help of AI. To reduce the risks that come with these AI tools, we have implemented three basic rules and measures:

  1. As soon as customer data, code or other sensitive information is involved, we only allow pre-approved tools that guarantee that our data is not used for training purposes and can never be shared. We buy licenses for our employees so we can manage data-usage settings at the company level.
  2. We never share personal data or highly sensitive business data with external AI tools.
  3. When a team member uses generative AI, they must check everything that is generated, and they remain responsible for the resulting content or code, just as they would be if they had created it themselves.

Useful and safe applications of AI

At SST we also help companies build meaningful and safe AI solutions, for example a tool that makes multiple separate knowledge bases within a company easily accessible, a machine-learning-powered estimation program, and a chatbot that helps a legal department search through past cases to find similar situations and solutions. While we often use existing AI models for these solutions, we pay extra attention to how the data is used and to which users are able to access which information; a simplified sketch of that kind of access control is shown below.
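To illustrate what we mean by controlling which users can access which information, here is a minimal, hypothetical sketch. The roles, documents and keyword matching are placeholders rather than an actual implementation, but the key idea is real: documents are filtered on the user's permissions before any text is handed to an AI model.

    # Hypothetical example of per-user access control in front of a
    # knowledge-base assistant; roles and documents are placeholders.
    from dataclasses import dataclass

    @dataclass
    class Document:
        title: str
        text: str
        allowed_roles: set  # which roles may see this document

    KNOWLEDGE_BASE = [
        Document("Onboarding guide", "How we onboard new colleagues...", {"hr", "management"}),
        Document("Pricing sheet", "Internal rates and margins...", {"sales", "management"}),
        Document("Public FAQ", "Answers to common customer questions...", {"hr", "sales", "support"}),
    ]

    def retrieve(query, user_roles):
        """Return only documents this user may see that also match the query."""
        visible = [d for d in KNOWLEDGE_BASE if d.allowed_roles & user_roles]
        return [d for d in visible if query.lower() in (d.title + " " + d.text).lower()]

    # Only the permitted documents are then passed as context to the model.
    print([d.title for d in retrieve("rates", {"support"})])  # -> []
    print([d.title for d in retrieve("rates", {"sales"})])    # -> ['Pricing sheet']

In a real solution the keyword matching would typically be replaced by semantic search, but the permission check stays in front of the model either way.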

Of course, AI has many more possibilities and applications, such as using image recognition on production lines for quality control, applying optimization models for more efficient use of resources, or analyzing data to make more accurate forecasts. All these solutions come with their own risks, but also with many benefits; especially when it comes to large amounts of data, AI will be much more efficient than humans. Just keep data security, human happiness and environmental sustainability in mind when assessing the risks of your new AI solution.

Menno van der Werff

CTO | Business & IT Consultant | Tech Enthusiast

