In collaboration with the police, the Digital Trust Center (DTC) warns about the risks of using generative artificial intelligence (AI) in the workplace. They also share tips on how both employees and employers can use AI safely. Human contact is key here.
While AI tools such as text generators and image-creation apps offer entrepreneurs significant efficiency benefits, these technologies also have a dark side: cybercriminals can use the same tools for fraudulent practices.
1. Identity fraud, such as CEO fraud. AI can, for example, be used to clone a voice or to generate realistic texts.
2. Spreading disinformation. A language model such as ChatGPT produces authentic-looking text at scale and at great speed, which can help criminals with propaganda and disinformation.
3. Malware. ChatGPT can produce code in a number of programming languages. For a would-be criminal with little technical knowledge, this is an invaluable resource for producing malicious code such as malware.
Manon den Dunnen, Strategic Digital Specialist at the police, emphasizes the importance of being vigilant when using AI yourself: “If you wouldn’t put it on LinkedIn, you shouldn’t put it on ChatGPT either. Because that system trains itself with the information you enter and before you know it, your information appears in texts generated for others. That’s why companies like Samsung have banned their employees from using it.”
Tips for dealing with artificial intelligence and cybercriminals who use it:
• It is best to have confidential conversations in person.
• Never enter confidential data into ChatGPT or similar language models, and no names of people either. Be aware that these systems are designed to generate plausible-sounding text. A language model is not a search engine and has no database behind it, so do not use it when factual accuracy matters.
• If you have any doubts about the identity of the person on the phone, suggest calling them back. Another option is to ask a question based on shared experience that only the real person could answer, for example: “How did your meeting go yesterday?”
• Make agreements, for example, to process invoices only when there is an opportunity to verify their source.
• Investigate, together with partners in your supply chain, what solutions you can implement to verify the authenticity of the sender of invoices or other important communications. Existing advice on, for example, phishing or CEO fraud still applies: these forms of cyber incident remain essentially the same, even when AI is used as a tool.
• Know what questions to ask when purchasing software, for example: how does this software use artificial intelligence, how is it trained, what happens to the data, and what security issues are involved?
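One way to act on the tip about verifying the sender of invoices is to agree a shared secret with a chain partner out of band and attach a message authentication code to each invoice. The sketch below is a minimal illustration, not an endorsed scheme from the DTC or the police; the key, function names, and invoice format are all hypothetical, and in practice the key would be exchanged securely and rotated.

```python
import hashlib
import hmac

# Hypothetical pre-shared key, agreed out of band (e.g. in person) with a chain partner.
SHARED_KEY = b"example-key-agreed-offline"

def sign_invoice(invoice_bytes: bytes, key: bytes = SHARED_KEY) -> str:
    """Return a hex HMAC-SHA256 tag the sender attaches to the invoice."""
    return hmac.new(key, invoice_bytes, hashlib.sha256).hexdigest()

def verify_invoice(invoice_bytes: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    """Recompute the tag and compare in constant time to detect forgery or tampering."""
    expected = sign_invoice(invoice_bytes, key)
    return hmac.compare_digest(expected, tag)

invoice = b"Invoice 2024-001: EUR 1,250.00 payable to NL00BANK0123456789"
tag = sign_invoice(invoice)
print(verify_invoice(invoice, tag))          # True: tag matches, sender holds the key
print(verify_invoice(invoice + b"0", tag))   # False: invoice was altered in transit
```

A fraudster who clones a voice or spoofs an email address still cannot produce a valid tag without the shared key, which is the point of agreeing the check with partners in advance.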