As technology and artificial intelligence advance, more people are relying on neural network-based language models such as ChatGPT in their jobs.
A Kaspersky survey in Russia found that 11% of respondents have used chatbots, with nearly 30% believing they could replace jobs in the future. Other surveys indicate that 50% of office workers in Belgium and 65% in the UK rely on ChatGPT.
While there is a growing trend towards integrating chatbots into the workplace, this raises the question of whether they can be entrusted with sensitive corporate data.
Kaspersky researchers have identified four key risks associated with employing ChatGPT for business purposes.
Data leak or hack on the provider’s side
Although LLM-based chatbots are operated by major technology companies, they are not immune to hacking or accidental data leaks.
Data leak through chatbots
Theoretically, chats with chatbots might be used to train future models.
LLMs are also prone to "unintended memorisation", retaining unique sequences from their training data. These memorised sequences do not improve model quality, but they do create privacy risks: any data that ends up in the training set may be exposed, intentionally or not, to other users of the model.
Malicious client
This is a particular concern in regions where official services such as ChatGPT are blocked. Users may turn to unofficial alternatives, such as programs, websites, or messenger bots, and end up downloading malware disguised as a non-existent client or app.
Account hacking
Attackers can break into employee accounts, and access the data they hold, through phishing attacks or credential stuffing. More broadly, data loss is a major privacy concern for both users and businesses using chatbots, which is why responsible developers explain in their privacy policies how data is used for model training.
Kaspersky’s analysis of chatbots shows that security and privacy standards are higher in the B2B sector, given the greater risk of corporate information exposure.
As a result, the terms and conditions governing data usage, collection, storage, and processing place more emphasis on safeguards than those in the B2C sector.
Anna Larkina, security and privacy expert at Kaspersky, said: “After examining the potential risks tied to using LLM-based chatbots for work purposes, we have found that the risk of sensitive data leakage is highest when employees use personal accounts at work.”
Larkina said this makes it important for companies to raise awareness among their staff about the risks of using chatbots.
“On the one hand, employees need to understand what data is confidential, personal, or constitutes a trade secret and why it must not be fed to a chatbot. On the other hand, the company must spell out clear rules for using such services, if they are allowed at all.”
Kaspersky experts share the following tips for staying safe when using chatbots in your business.
Use strong passwords
Create complex, unique passwords for all your accounts, and avoid using information that is easy to guess, such as birthdays or names.
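For readers who want to see what this looks like in practice, here is a minimal Python sketch (the function name and length are illustrative choices, not from Kaspersky) that generates a random password using the standard library's secrets module; in day-to-day use, a reputable password manager does this for you:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets uses a cryptographically secure random source, unlike random
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'r#7Qz$mK2!pW9&bL' (output varies)
```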
Beware of phishing
Be cautious of unsolicited emails, messages, or calls asking for personal information such as your ID number, banking details, or address. Verify the sender's identity before sharing any sensitive data.
Educate your employees
Keep employees informed about the latest online threats and best practices for staying safe online.
Keep software updated
Regularly update your operating system, apps, and anti-virus programs. These updates often contain security patches.
Limit corporate information sharing
Be careful about sharing personal information and sensitive corporate data on social media or in public forums. Only provide such information when absolutely necessary.
Verify URLs and websites
Double-check the URL of websites you visit, especially before entering login credentials or making purchases.
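To make the idea concrete, the Python sketch below (the expected domain and function name are hypothetical examples) checks whether a link's hostname actually belongs to the site you intend to visit, catching lookalike domains that merely contain the real name:

```python
from urllib.parse import urlparse

EXPECTED_DOMAIN = "example.com"  # hypothetical: the site you intend to visit

def is_expected_domain(url: str) -> bool:
    """Return True only if the URL's host is the expected domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return host == EXPECTED_DOMAIN or host.endswith("." + EXPECTED_DOMAIN)

print(is_expected_domain("https://login.example.com/account"))  # True
print(is_expected_domain("https://example.com.evil.io/login"))  # False: lookalike domain
```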
Use a corporate security solution
To prevent employees from independently consulting untrusted chatbots for work purposes, you can use a security solution with cloud service analytics.
IOL Business