Can AI Transform Compliance?
Corporate compliance departments are being squeezed. Accenture’s 2019 Compliance Risk Study found that nearly three-quarters (71%) of compliance departments at financial institutions face a cost reduction target, with nearly two-thirds targeting budget reductions of 10% to 20% over the next three years. They are also suffering employee attrition, with reports of compliance officers being overworked and exhausted.
“Compliance departments have to do more with less,” said Samantha Regan, global co-lead for the regulatory remediation and compliance transformation group within Accenture’s finance and risk practice and co-author of the study.
Thus, there is an urgency to embrace new technologies like natural language processing (NLP) and artificial intelligence (AI) to improve compliance productivity. The study calls for “a new generation of compliance talent that is digitally fluent, well-versed in analytics, and capable of delivering proactive risk insights.”
Current compliance professionals will not become redundant overnight, Regan said, as departments need to maintain a balance of skills. It is more about adding professionals with new skills, such as those trained in NLP and machine learning, who can create scoreboards, visualizations, predictive models and the like. It is about doing the same job in a different, more efficient way.
In the coming years, artificial intelligence may radically improve compliance, aiding the critical shift from check-the-box toward a risk-prevention outlook—“not so much because of the cost issue, but because it will allow companies to re-imagine their processes,” said Dilip Krishna, chief technology officer for Deloitte Risk and Financial Advisory.
To be sure, the speed with which NLP and other AI processes can work through a pile of documents is impressive. For example, regulatory technology firm ComplyAdvantage estimates that it can process 150 million articles a month—6.5 million articles a day—looking for the adverse media reports used in anti-money laundering (AML) compliance. By comparison, 50 traditional bank researchers working a full day without breaks can cover 24,000 articles, according to Livia Benisty, head of the firm’s financial crime unit. AI processing is often more accurate, too.
But most noteworthy are the potential qualitative changes that AI could bring to compliance, Krishna said. Are loans being booked accurately into the bank’s internal loan system and in compliance with the applicable loan regulations, for example? With traditional compliance processes, human beings might look at 10% of a bank’s loans to ensure things are being done correctly, while AI processes can review 90% of the data, improving accuracy. Even more impactful, AI systems might eventually do the actual booking, eliminating the need for this kind of first-line compliance altogether.
In the meantime, the compliance costs and manpower requirements in heavily regulated industries like financial services can be daunting. The Bank Secrecy Act, for one, requires financial institutions to detect and report customers engaged in money laundering, fraud, terrorist financing and sanctions violations. In a large bank, 200 to 500 analysts can be occupied with know your customer (KYC) and AML compliance alone, scouring news articles and other public reports to avoid onboarding clients with sketchy pasts, Benisty said.
The false-positive rate in these media searches is very high, however—on the order of 95%, she said—which means following up on most of the hits red-flagged by a traditional rules-based search is “a complete waste of time and money.” Only about 2% ever lead to a Suspicious Activity Report (SAR), she estimated.
NLP can do better because it looks for context. Traditional adverse media searches might use Google and other tools to scan the internet for hot words like “harassed,” “indicted” or “charged.” But words can be ambiguous. “Charged,” for instance, can mean to accuse someone of an offense under law, but it can also mean to entrust someone with a task. Consider a headline like “Elizabeth Warren Charged with Reinvigorating the Democratic Caucus.” A traditional search might flag that article, but an NLP process probably would not because it would also look at adjacent words in the headline like “reinvigorating,” which is more often associated with the benign meaning of “charged.”
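To make the distinction concrete, here is a minimal, hypothetical sketch contrasting a naive hot-word flag with a crude context check. The word lists and logic are invented for illustration only; real NLP models score context statistically rather than with hand-written lists, and this bears no relation to ComplyAdvantage’s actual system.

```python
# Illustrative sketch only: hot-word and context lists are invented.

HOT_WORDS = {"charged", "indicted", "harassed"}
BENIGN_CONTEXT = {"reinvigorating", "leading", "energizing", "task"}

def keyword_flag(headline: str) -> bool:
    """Traditional rules-based search: flag on any hot word."""
    words = headline.lower().split()
    return any(w in HOT_WORDS for w in words)

def context_aware_flag(headline: str) -> bool:
    """Crude stand-in for NLP disambiguation: suppress the flag when
    neighboring words point to the benign sense of an ambiguous term."""
    words = headline.lower().split()
    if not any(w in HOT_WORDS for w in words):
        return False
    return not any(w in BENIGN_CONTEXT for w in words)

headline = "Elizabeth Warren Charged with Reinvigorating the Democratic Caucus"
print(keyword_flag(headline))        # True  -> an analyst would have to clear this hit
print(context_aware_flag(headline))  # False -> suppressed by the surrounding context
```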
Using NLP and other AI algorithms, ComplyAdvantage can reduce false positives by 70%, Benisty claimed. The financial institution can then get by with fewer analysts and reduced costs, or, alternatively, the bank’s analysts can be freed up to do more meaningful work.
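The arithmetic behind that claim is worth spelling out. The sketch below applies the figures quoted above (a roughly 95% false-positive rate and a 70% reduction in false positives) to a hypothetical monthly alert volume; the volume itself is a made-up example.

```python
# Back-of-the-envelope arithmetic using the figures quoted above.
alerts = 10_000                                    # hypothetical adverse-media hits per month
false_positives = int(alerts * 0.95)               # ~95% false-positive rate (Benisty)
surviving_fp = int(false_positives * (1 - 0.70))   # 70% of false positives screened out
alerts_to_review = (alerts - false_positives) + surviving_fp
print(alerts_to_review)                            # 3350 -- roughly a two-thirds cut in review work
```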
Regan noted AI and NLP can also be applied beyond AML, in areas like trade surveillance, anti-corruption compliance (e.g., picking up violations of the Foreign Corrupt Practices Act from time and expense reports), and privacy compliance (e.g., identifying mishandling of personally identifiable information).
Natural Language Generation
NLP is still in the early adoption stage in organizational compliance departments, but an even newer AI technology is now emerging: natural language generation (NLG). NLG analyzes structured data and summarizes its findings using natural language. In other words, the software writes compliance narratives automatically.
“Natural language generation is great for SARs,” said Anthony S. Dell, chief compliance officer at venture capital firm General Catalyst. Writing SARs is time-consuming and repetitive, and many bank compliance officers could use that time for more strategic tasks.
According to Keelin McDonell, senior vice president at NLG technology provider Narrative Science, NLG can reduce the time spent preparing SARs by as much as 75%. The NLG report is generated almost immediately, but often a compliance officer will review the AI-generated document and may add anecdotal evidence. As institutions often write thousands of SARs a month, the annual savings to big banks from automating these processes could be in the “seven figures” range, McDonell estimated.
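At its simplest, this kind of generation can be thought of as filling a narrative template from structured case data. The sketch below is a deliberately minimal, hypothetical illustration: the field names, figures and wording are invented, and commercial NLG platforms such as Narrative Science go well beyond string formatting.

```python
# Minimal, hypothetical template sketch; all field names and values are invented.

alert = {
    "customer": "Acme Trading Ltd.",
    "period": "June 2019",
    "wire_count": 14,
    "total_amount": 482_000,
    "pattern": "structuring just below the $10,000 reporting threshold",
}

TEMPLATE = (
    "During {period}, {customer} initiated {wire_count} wire transfers "
    "totaling ${total_amount:,}. The activity is consistent with {pattern} "
    "and is being reported as suspicious."
)

draft_narrative = TEMPLATE.format(**alert)
print(draft_narrative)  # a compliance officer would still review and enrich this draft
```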
NLG is not yet a silver bullet, however. One common misconception is that users can input data in any form and the software will spew out fully formed prose; in fact, it requires clean, structured data. Some companies attempting to implement the technology have failed to anticipate the amount of preparation required and have struggled to get NLG programs up and running.
NLG has been useful in some areas, like reporting on sports events using the structured data in a baseball box score, said John Lucker, principal at Deloitte Advisory. It might also be effective with fact-based reports, like SAR or EDGAR filings. But language is complex. Understanding sarcasm, double negatives, or idiomatic expressions is still problematic for computers. It is not clear how far NLG can be taken before its output sounds like computerese (which it is, of course). The software’s lexical rules engine also often works only in English.
That said, the technology allows companies to deploy analytics to a wider audience, not just data analysts, which means organizations may not be so beholden to “expensive and hard-to-find talent like data analysts, data engineers and data scientists,” according to research firm Forrester.
Overall, NLG and other machine learning technologies have a role to play because the compliance burdens faced by organizations are “not trivial,” Lucker said. For instance, pharmacies are increasingly required to track prescriptions amid growing public concern about issues like opioid addiction. Most prescriptions are still paper-based, so some companies have sought to streamline the tracking process using handwriting-reading software. The technology remains a work in progress, however. Software can read about 80% of the prescriptions accurately, but human beings still have to deal with the other 20%, meaning the technology is not eliminating the burden, just reducing the number of people involved in the task.
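In practice, that split is usually handled with a confidence threshold: high-confidence reads are filed automatically and the rest are queued for a person. The sketch below illustrates that routing logic only; the OcrResult type, the confidence values and the 0.8 threshold are assumptions for illustration, not a description of any particular product.

```python
# Illustrative triage logic: route low-confidence handwriting reads to a human.

from dataclasses import dataclass

@dataclass
class OcrResult:
    text: str
    confidence: float  # 0.0 to 1.0, as reported by the OCR engine

def route_prescription(result: OcrResult, threshold: float = 0.8):
    """Auto-file high-confidence reads; queue the rest for human review."""
    if result.confidence >= threshold:
        return ("auto_filed", result.text)
    return ("human_review", result.text)

print(route_prescription(OcrResult("amoxicillin 500 mg", 0.93)))  # auto_filed
print(route_prescription(OcrResult("??oxi?illin 5?0 mg", 0.41)))  # human_review
```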
Compliance has been somewhat neglected in recent years, he noted, with few companies heavily investing in it lately. The Trump administration has been deregulating, but regulations are not necessarily being rolled back at the state level, and many of these can be quite onerous, such as the forthcoming California Consumer Privacy Act.
The Risks of AI
AI comes with its own risks, of course. As the Accenture report noted, “The financial services ecosystem also continues to experience a surge of newer types of risks anchored in technology and data, such as cyber and privacy. Such risks are further compounded by the growing adoption of artificial intelligence in business processes, which presents additional ethical issues. Compliance officers may find themselves having to navigate these without the ability to foresee unintended consequences.”
Last year, Amazon had to scrap an AI-based hiring tool after it demonstrated a bias against women, for instance. The company used resumes from the previous 10 years to “train” the model to select successful applicants, but as in the tech industry overall, the resumes of candidates who were hired came overwhelmingly from men, so the model “learned” to prefer resumes from men.
There will always be risks with black-box processes like AI, Benisty said, but there are also risks with human beings doing compliance searches. If compliance departments hire the right people who understand the technological tools and their limitations, some of those risks can be mitigated.
That said, if a suspect character tries to open a bank account and a SAR is filed with regulators, then compliance departments have to be able to explain exactly why this person’s activity was deemed suspicious. At some level, the technology must be interpretable.
In five or 10 years, Dell foresees more intuitive AI-type tools. Doing advanced analytics will be like an internet search today. Dell believes “we’re heading in a great direction” with respect to AI and compliance, but noted that, especially in larger organizations, “what will make the difference [in the future] is still the human element, building human relationships.”
In any event, the status quo with regard to compliance is not acceptable. “We can’t continue as we are going,” Benisty said. “Compliance is failing.” Compliance departments cannot keep up with the skyrocketing amount of financial crime, and many companies face higher-than-expected levels of unmanaged employee attrition.
“The time for compliance departments to maintain the status quo or take incremental steps in the face of disruptive forces has passed,” Regan said. “Financial services has changed, and compliance has to change with it.”
Reprinted with permission from Risk Management Magazine. Copyright Risk and Insurance Management Society, Inc. All rights reserved.
Written by Andrew W. Singer, 2019
Source: https://www.rmmagazine.com/articles/article//2019/09/03/-Can-AI-Transform-Compliance-