MADRID, 22 Dic. (Portaltic/EP) -
The use of generative Artificial Intelligence (AI) systems such as ChatGPT worries senior company executives because of the risks it entails, especially the possible loss of confidential information and of control over the business. This concern persists despite implementation plans and despite the employees who already use these tools, in a context where rules on the matter are not always in place.
Generative AI has become a useful tool for companies, allowing them to automate processes and carry out a wide range of tasks, but senior management remains wary of the security risks it may pose to the business.
This is reflected in a Kaspersky report conducted among Spanish managers, which indicates that only 19 percent of respondents have discussed imposing rules to control the use of generative AI.
96 percent believe their employees regularly use these types of systems, which makes it necessary, in the view of 95 percent, to know how employees use generative AI in order to protect against critical security risks or data leaks. 64 percent even fear financial losses for their organizations.
Another Kaspersky study, among business users of generative AI in Spain, reveals that 25 percent of those who use ChatGPT at work, the most popular 'chatbot' based on this technology, do not know what happens to the data they enter into the tool.
This finding reflects the importance of awareness and regulation by companies to keep their information safe since, as the cybersecurity firm recalls, ChatGPT can store information such as IP address, browser type and user settings, as well as data on the most frequently used functions.
However, according to the employees surveyed, nearly half of companies (45.5%) have no internal rules on the use of ChatGPT. Among companies that do have rules, 19 percent of employees say they are not clear enough, 7 percent say the rules are clear but not followed, and only 27 percent say they are both clear and followed.
These data contrast with the intentions of Spanish managers: according to the study, half plan to use generative AI to automate tasks in the future. 46 percent indicated their intention to integrate this technology into their own routines to improve productivity, and 43 percent into those of their employees.
Despite all this, 16 percent of Spaniards who use ChatGPT at work consider it unimportant to maintain privacy in the questions they ask the chatbot, and 31 percent consider it important not to share private data but do so anyway.
ChatGPT's developer states that information provided by users is not shared with third parties: it is retained only to improve the platform and provide the most accurate answers possible, with the collected data used to refine the language model and fine-tune the user experience.
Meanwhile, cyber fraudsters already use the tool to generate malicious code or 'phishing' scams that are as credible as possible. For this reason, Kaspersky experts advise against entering sensitive or confidential information, whether personal or corporate, that could fall into the hands of cybercriminals.
For companies, the experts consider it important to create internal rules regulating the use of ChatGPT and to raise employees' awareness of the importance of not sharing certain data, educating them in cybersecurity.
They also highlight the need to be cautious with links from unknown websites, as these may contain malicious programs or redirect victims to 'phishing' sites.
Kaspersky also insists on using a trusted security program that offers antivirus protection and monitors data leaks in real time, as its Kaspersky Premium solution does, and on managing keys and passwords with a dedicated tool such as Kaspersky Password Manager.