Managing Data Security Risks of AI Technology
In recent months, artificial intelligence has generated widespread interest and conversation as tools like ChatGPT demonstrate a wide range of personal and professional applications. Perhaps equally important to this discussion, however, is an understanding of the risks posed by AI technology, including the significant data security threats that are already having unintended consequences for companies.
With current AI technology, users can input more data than ever before and use that information to learn patterns of behavior; uncover and predict future trends; and create and emulate works, sounds and images quickly and efficiently. While this can have many beneficial applications for organizations, experts warn that it also sharply increases data exposure, loss of intellectual property and other data security risks.
These threats have already begun to materialize. In March, Samsung learned the hard way how easily employees acting in good faith can inadvertently breach confidentiality and compromise a company’s intellectual property by using third-party AI tools. In three separate incidents within a month, employees unwittingly leaked sensitive company information while trying to use ChatGPT to solve work-related problems. One employee asked ChatGPT to optimize test sequences for identifying faults in chips, which is a confidential process. Looking for help writing a presentation, another employee entered meeting notes into ChatGPT, exposing information intended for internal use to an outside party. Employees also pasted sensitive, bug-ridden source code from the company’s semiconductor database into ChatGPT in an attempt to improve it.
The problem with asking ChatGPT or other public AI-based platforms to help fix such issues is that any information entered becomes training data for the platform’s large language model (LLM). Because ChatGPT retains users’ inputs to further train itself, Samsung’s trade secrets and intellectual property were effectively handed to the company behind the platform, OpenAI.
Although OpenAI later acknowledged that it was possible for an organization to retrieve such information, the key takeaway from this self-inflicted breach is that proprietary information should never be pasted into ChatGPT or any other LLM-based service. Companies should also ask any third-party AI provider what happens to the data inputs and outputs from the queries or prompts their employees enter.
Developing Policies for AI Use
Such mistakes have forced companies to reconsider who in the organization should have access to AI tools and for what purpose. Samsung’s response has been to restrict employees’ use of the tool to data inputs so small that another security blunder of similar magnitude is unlikely. Several other large companies, including Amazon, Apple and Verizon, have banned employees from using ChatGPT, while Wall Street giants JPMorgan Chase, Bank of America and Citigroup have also curtailed its use.
But banning or restricting the use of such prevalent and easy-to-use solutions can lead to other problems. “The issue with this response is that some employees are going to use LLMs in the workplace regardless of a company policy that bans them,” said Greg Hatcher, co-founder of cybersecurity consultancy White Knight Labs. “LLMs make employees exponentially more productive, and productivity is directly correlated to compensation in the workplace. Companies have been battling shadow IT for 20 years—we do not want a ‘rinse and repeat’ situation with LLMs becoming shadow AI.”
The best way forward is for companies to explicitly tell employees what constitutes acceptable and unacceptable AI use in the workplace. “Although we are relatively early in AI/LLM adoption, eventually there will be compliance and regulatory requirements around AI usage in sensitive environments where privacy is critical,” he said. “We are just not there yet.”
Moving forward, experts believe it is essential to raise employee risk awareness through training. According to Kevin Curran, professor of cybersecurity at Ulster University, senior management needs to prioritize security training for each level of employee. “All training should provide real-world examples and case studies that employees can relate to, showcasing the impact of security practices on their work,” he said. “It is important to encourage employees’ continuous education and for organizations to provide regular opportunities for staff to stay updated on emerging security threats and countermeasures. There should also be far more active participation in security initiatives, such as reporting suspicious activities and contributing to discussions or even offering suggestions for improving security measures.”
It is also vital that companies develop and circulate an enterprise-wide policy on the use of AI technologies that prevents people from using such tools until they have been fully trained and made aware of the associated risks, said Dr. Clare Walsh, director of education at the Institute of Analytics.
She recommended companies establish clear rules on what can and cannot be entered into the tool, such as no personal data, nothing with commercial value to the company, and no systems code. All staff should also be required to state when any material has been produced by AI.
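As a purely illustrative sketch of how such rules might be enforced, the Python example below screens a prompt against a few hypothetical blocklist patterns (personal data markers, source code fragments, internal classification labels) before it is allowed to reach an external AI tool. The specific patterns and categories are assumptions for illustration, not part of Walsh’s guidance, and a production deployment would lean on dedicated data loss prevention tooling rather than hand-written rules.

```python
import re

# Illustrative patterns only; a real deployment would use a dedicated
# data-classification or DLP service rather than hand-written regexes.
BLOCKED_PATTERNS = {
    "email address (possible personal data)": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "possible source code": re.compile(r"\bdef |\bclass |#include|\bimport |\bSELECT "),
    "internal classification label": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the policy violations found in a prompt before it is sent to an external AI tool."""
    return [reason for reason, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    issues = screen_prompt("Please optimize this: def check_chip(seq): ...")
    print("Prompt blocked:" if issues else "Prompt allowed", ", ".join(issues))
```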
Another sensible precaution would be to approve only “low-stakes” data requests, and to “advocate the use of these technologies only where the human requesting the output has the training and knowledge to supervise and check that the machine has produced something accurate,” Walsh said. To that end, employees should be trained to look for simple anomalies in presentations, marketing material and other documents, such as outputs that do not make sense, are irrelevant or are factually inaccurate.
Both employers and employees have a duty to question how they use AI and whether they are using it securely, agreed Ed Williams, EMEA regional vice president of pentesting at cybersecurity firm Trustwave. They should consider key issues such as: whether the AI model (including its infrastructure) is secure by design; what vulnerabilities it might have that could lead to possible data exposure or harmful outputs; and what measures can be put in place to ensure correct authentication and authorization, as well as appropriate logging and monitoring.
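As one hedged illustration of what those authorization, logging and monitoring measures might look like in code, the sketch below wraps calls to a generative AI model in a thin internal gateway that checks the caller’s role and records every prompt and response for later review. The role names and the send_to_model callable are placeholders for whatever model client an organization actually uses; this is a sketch of the control pattern, not a prescribed implementation.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_gateway")

# Roles permitted to use external AI tools; an assumption for illustration.
APPROVED_ROLES = {"analyst", "engineer"}

def call_model(user: str, role: str, prompt: str,
               send_to_model: Callable[[str], str]) -> str:
    """Authorize, log and forward a prompt to whatever model client is in use."""
    if role not in APPROVED_ROLES:
        log.warning("blocked: user=%s role=%s is not authorized for AI use", user, role)
        raise PermissionError("AI use is not authorized for this role")
    log.info("ai_request: user=%s role=%s prompt_chars=%d", user, role, len(prompt))
    response = send_to_model(prompt)
    log.info("ai_response: user=%s response_chars=%d", user, len(response))
    return response
```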
“Once both the business and employee have adequately answered these questions, it then becomes a question of risk acceptance or mitigation where possible, and consistent evaluation of employees’ skills, internal cybersecurity capabilities and threat detection going forward,” he said.
The Role of Risk Management
Risk managers have a key role to play in protecting companies from increasing cybersecurity and data risks introduced by AI. First, companies must conduct thorough risk assessments. “Begin by evaluating the potential risks associated with AI technologies within the organization,” said Rom Hendler, CEO and co-founder of cybersecurity firm Trustifi. “Identify the AI systems in use, the data they process, and the potential threat vectors. Assess the existing security measures and identify gaps that need to be addressed.”
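One lightweight way to capture that inventory, offered here only as a hypothetical starting point, is a simple risk register that records each AI system in use, the data it touches, plausible threat vectors and any known control gaps; the field names below are illustrative assumptions rather than a standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIRiskEntry:
    """One row of a hypothetical AI risk register."""
    system: str                # the AI system or tool in use
    data_processed: list[str]  # categories of data it touches
    threat_vectors: list[str]  # plausible ways it could be abused or leak data
    existing_controls: list[str] = field(default_factory=list)
    gaps: list[str] = field(default_factory=list)  # measures still missing

register = [
    AIRiskEntry(
        system="public chatbot used for drafting documents",
        data_processed=["meeting notes", "marketing copy"],
        threat_vectors=["prompts retained by the provider", "inaccurate output"],
        existing_controls=["usage policy", "prompt screening"],
        gaps=["no logging of prompts and responses"],
    )
]
```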
Another important step is to implement robust data governance by developing comprehensive policies and procedures to ensure the secure collection, storage and processing of data. Companies should encrypt sensitive data, implement access controls and regularly audit data handling practices. They should also promote a culture of security awareness, emphasizing best practices for data handling, recognizing social engineering techniques and reporting potential vulnerabilities. Data minimization strategies are also critical to reduce the potential impact of breaches. The more data a company has, the more data that can be stolen, potentially resulting in bigger ransomware demands and fines from data regulators around the world.
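As a minimal sketch of the encryption step alone, the snippet below uses the widely available cryptography package’s Fernet recipe to encrypt a record before it is stored. It illustrates the idea only; in practice the key would live in a managed secrets store or hardware security module, with access controls and audits around it, rather than being generated alongside the data.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()        # in production, load this from a secrets manager
cipher = Fernet(key)

record = b"customer_id=4821; notes=contract renewal terms"
encrypted = cipher.encrypt(record)     # store only the ciphertext at rest
decrypted = cipher.decrypt(encrypted)  # readable only by services holding the key

assert decrypted == record
```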
Establishing an AI data governance framework may not be as difficult as it sounds. Many companies probably already have the governance and control infrastructure in place to address several key emerging AI risks—they just may not be aware of it. “Some key AI risks look very similar to already known cybersecurity risks and companies can calibrate their technical and organizational measures to account for variations on the theme,” said Brock Dahl, partner and head of U.S. fintech at law firm Freshfields.
He advised companies to build on current cybersecurity risk governance frameworks while ensuring they remain flexible and adaptable. Organizations should question whether the use of new technology is integral to their assets and activities, and whether any features of that technology present familiar governance challenges or introduce new ones.
“In the age of rapid innovation, the key is not simply to keep pace with each new development, but to take a step back and ensure the organization’s risk management architecture is geared toward absorbing constant flux,” he said. “There will be surprises, but the goal of the risk management enterprise is to create a robust mitigation capability for when those surprises emerge, while also limiting surprise to the greatest degree possible.”
However, risk managers need to be aware of other risks that are more specific to AI. For example, in inversion attacks, hackers try to deduce personal information about a data subject by analyzing the outputs of a machine learning model. In data poisoning attacks, malicious actors inject corrupted or misleading data into training sets to skew a model’s results. Even if the necessary controls look similar to existing governance measures, these risks will require specialized mitigation approaches.
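By way of illustration only, the short sketch below shows one very basic data-poisoning countermeasure: screening incoming training values for implausible outliers (here with a median-based check) before they are accepted into a training set. Real defenses are considerably broader, combining provenance checks, robust training methods and ongoing monitoring of model outputs, so this should be read as a toy example of the mitigation mindset rather than a complete control.

```python
import statistics

def flag_outliers(values: list[float], threshold: float = 3.5) -> list[int]:
    """Return the indexes of values whose modified z-score (based on the
    median absolute deviation) exceeds the threshold."""
    median = statistics.median(values)
    mad = statistics.median(abs(v - median) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if 0.6745 * abs(v - median) / mad > threshold]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 250.0, 10.2]  # one injected, implausible value
print(flag_outliers(readings))  # -> [5]
```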
It is also critical to monitor the development of AI, data and cybersecurity regulation. Since the use of chatbots in business is still relatively new, current rules can be vague.
“We are seeing steps toward AI-specific legislation in various jurisdictions around the world,” said Sarah Pearce, partner at law firm Hunton Andrews Kurth. “By far the most advanced of these is the European Union’s AI Act, which is going through its final phases before coming into force. Certain aspects of the proposed legislation will undoubtedly require clarification in due course. The definition of AI itself, for example, will likely pose issues as to interpretation and, ultimately, in identifying which technologies are subject to the act’s requirements.”
Risk managers should make a dedicated effort to foster collaboration across the organization, engaging cybersecurity experts, AI specialists, and legal and compliance teams so there is a shared understanding of AI-related risks and appropriate safeguards. According to David L. Schwed, cybersecurity professor and practitioner-in-residence at Yeshiva University’s Katz School of Science and Health, risk managers should align themselves with cybersecurity professionals who understand these unique attack vectors to establish strong controls. “Controls that were good enough last week may not be good enough this week,” he said. “Given the advancement of AI-related and broader cyberrisks, the ‘rinse-and-repeat’ mindset will not work in this new world.”
Focusing on Better Data Security
Some experts believe the increased cyberrisk is not the fault of AI but of poor risk management. According to Richard Bird, chief security officer at AI security firm Traceable, the increased exposure is due “in no small part” to the fact that companies have been mishandling data and IT security for years. Trying to embrace AI technologies at enterprise-wide scale has only exposed those weaknesses further.
“It is not time to integrate AI into every aspect of any enterprise’s workflow for one very simple—yet very obviously overlooked—reason: No one has taken the time to figure out the operational, functional and corporate changes necessary to consume, leverage and optimize their uses of AI,” he said.
“Our operational workflows have been conditioned to direct human interaction for centuries,” Bird explained. “A simple ‘rip and replace’ approach to implementing AI is going to lead to a massive outbreak of unintentional consequences. Large and medium-sized enterprises are rigid, inflexible institutions when it comes to change, compounded by the problems of corporate politics, budget restrictions and shareholder liabilities. Simply dropping AI into the mix is going to result in a lot of avoidable pain and failure.”
Bird added, “When it comes to security, it is clear that most companies are in much worse condition to mitigate the risks of their corporate and customer data being stolen or leaked than they were just six months ago.” This is because most companies were already struggling to keep their data safe before the rise of generative AI. The ease with which employees can take advantage of the technology and the lack of adequate security controls around it have further increased the risk to companies.
AI technology is evolving quickly. Organizations and risk professionals need to act fast to understand AI risks and ensure that they not only have the appropriate controls in place, but that their risk governance frameworks are flexible enough to adapt as new threats emerge. Anything less may put valuable company data at risk.
Reprinted with permission from Risk Management Magazine. Copyright Risk and Insurance Management Society, Inc. All rights reserved.
Written by Neil Hodge, 2023
Source: https://www.rmmagazine.com/articles/article/2023/08/01/managing-data-security-risks-of-ai-technology