Intelligent Segmentation as the Attack Point for AML

Financial crimes are not victimless. They fund terrorism, human trafficking, and other crimes against humanity. At the center of the financial crimes portfolio sits money laundering: each year, laundered transactions account for an estimated $2-3 trillion, or roughly 2-5% of global GDP.

Nonetheless, only about 1% of global illicit financial flows are ever seized by authorities.

Given the technology at our disposal, this shouldn't be the case. Money laundering should be a losing proposition – the riskiest and least likely to succeed of financial crimes. Regulators have made it a priority, fining banks at an unprecedented rate and in the process creating an entire sub-industry of fintech (although they could be doing more – something we will address in a future post).

Why is the financial services industry failing to combat this issue when everyone’s incentives are essentially aligned?

We believe the answer lies in how the industry attacks AML. Today we attack it from the diligence or transaction level. The lever point, however, is with segmentation.

Segmentation is the fulcrum for better AML outcomes. By better, we mean fewer false positives and fewer false negatives – in short, greater efficiency.

Some banks (the more sophisticated ones) practice segmentation today. Most just hand-code rules. The result is coarse segmentation, poor scenarios, and correspondingly high thresholds.

To generate better outcomes, Financial Institutions (FIs) need to apply intelligent segmentation.

  • Intelligent segmentation delivers order-of-magnitude improvements in false positives and false negatives.
  • Intelligent segmentation has minimal impact on existing systems, delivering disproportionate benefits relative to competing approaches.
  • Intelligent segmentation continues to learn over time – meaning evolving laundering methods are quickly identified.
  • Intelligent segmentation is immensely scalable – meaning that it can look across more transactions to deliver better outcomes.
  • Finally, intelligent segmentation is transparent – meaning that everyone from the internal model review board to the regulator knows what is driving the segmentation, the scenarios, and the threshold setting.

Let’s dig in.

Why Segmentation is the Attack Point for Advancing Today’s AML Programs

Most FIs have adopted a variety of mitigation efforts in their AML programs and have reaped material benefits from that work. The following diagram lists the key components of an advanced AML program.

Of these steps, segmentation has the greatest opportunity for impact and is the key attack point for improved detection.

Here’s why:

Governance and customer due diligence procedures have reached a high level of sophistication, and additional rules now have only a marginal impact on detection. These procedures require a considerable amount of human time and subject matter expertise, and have become standardized by regulators and the industry.

As the next step in an AML program, segmentation has tremendous downstream benefits. By appropriately segmenting customer and transaction data, important behavioral patterns can be observed. These behaviors are essential for setting the effective rules and thresholds used to flag transactions in the transaction monitoring systems.
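As a toy illustration of that downstream effect, consider setting a flagging threshold per segment instead of globally. The column names, segments, and the top-1%-of-amounts rule below are all hypothetical:

```python
import pandas as pd

# Hypothetical data: each row is a transaction tagged with its customer's
# segment. Column names, segments, and amounts are illustrative only.
txns = pd.DataFrame({
    "segment": ["retail", "retail", "retail", "small_biz", "import_export"],
    "amount":  [120.0, 340.0, 95.0, 12_000.0, 95_000.0],
})

# One global threshold treats an importer's routine wire and a retail
# customer's outlier identically.
global_threshold = txns["amount"].quantile(0.99)
flagged_globally = txns[txns["amount"] > global_threshold]

# Per-segment thresholds adapt the same rule to each group's normal
# behavior, so it fires far more precisely.
segment_thresholds = txns.groupby("segment")["amount"].quantile(0.99)
flagged_by_segment = txns[txns["amount"] > txns["segment"].map(segment_thresholds)]
```

The rule is identical in both cases; the segments are what make it precise.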

While the segmentation process is open to change and improvement, the next step, transaction monitoring, is essentially locked down from an AML process perspective. All transaction monitoring system (TMS) providers must be vetted by government and independent agencies, and many TMS vendors provide important data to government and law enforcement agencies. For these reasons, regulators are very resistant to changes to these systems, to the point of discouraging new competitors from entering the field.

Nonetheless, the effectiveness of a TMS is largely a function of what happens upstream of it. The customer segments, rules, thresholds, and watch lists provided to the TMS are essential for detecting suspicious transactions. Without strong segments, rules, and thresholds, the TMS step will produce a high number of false positives and may miss genuine money laundering attempts.

High false positive rates ultimately will negatively impact investigation of suspicious activities, which is predominantly a human effort.

In effect, intelligent segmentation offers a key way to improve the overall impact of an AML program. With better segments come better rules and thresholds, better detection by the TMS, and less human investigative effort wasted. Just as importantly, this is the area of least disruption in an AML process, thereby delivering the greatest return.

The key word above is intelligent. While segmentation is a great addition to the AML process, the traditional statistical and machine learning methods used for segmentation are deeply flawed.

The Baseline

Before we get into the nuts and bolts of intelligent segmentation for AML and why it works, we need to establish a baseline of what happens today.

For most FIs the world looks like this:

[Diagram: the typical AML process at most FIs]

It is worth noting that this process doesn’t even include standard segmentation.

Current AML processes typically rely on hand-coded money laundering rule scenarios that evaluate each transaction for each geography or type of business. Subject matter experts encode established patterns such as a high number of repeated small transactions (structuring), money flows in and out of high-risk countries, and so on.
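To make that concrete, here is a minimal sketch of one such hand-coded rule. Every constant is an illustrative placeholder, not any institution's or regulator's actual threshold:

```python
from datetime import datetime, timedelta

# Illustrative placeholders only.
REPORTING_LIMIT = 10_000     # cash-reporting threshold being evaded
NEAR_LIMIT_FLOOR = 9_000     # "just under the limit"
MIN_DEPOSITS = 3             # near-limit deposits needed to raise a flag
WINDOW = timedelta(days=7)   # within this rolling window

def is_structuring(deposits):
    """deposits: list of (timestamp, amount) tuples, sorted by timestamp."""
    near_limit = [(t, a) for t, a in deposits
                  if NEAR_LIMIT_FLOOR <= a < REPORTING_LIMIT]
    for i, (start, _) in enumerate(near_limit):
        in_window = [d for d in near_limit[i:] if d[0] - start <= WINDOW]
        if len(in_window) >= MIN_DEPOSITS:
            return True
    return False

# Three $9,500 deposits in five days trip the rule.
assert is_structuring([(datetime(2017, 3, d), 9_500.0) for d in (1, 3, 5)])
```

Each constant is a judgment call, and every new laundering typology demands another rule – which is why hand-coded rule libraries grow brittle.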

Hand-coded rules are hopeless in the face of the torrent of data FIs experience. From the volume and velocity of financial transaction data to its inherent complexity and unlabeled nature, banks cannot effectively distinguish signal from noise.

This is why the industry is awash in false positives.

A recent industry survey of large FIs by PwC revealed that 90-95% of the transaction alerts generated by their current anti-money laundering detection efforts (which rely on traditional data mining and statistical techniques) were false positives. This is consistent with our work with major financial institutions and may, in fact, under-report the problem.

The cost of these false positives to an FI is tremendous.

To complete due diligence reviews of the hundreds of thousands of alerts produced monthly under current AML detection programs, FIs must employ thousands of personnel (both internal and external) at a cost of hundreds of millions of dollars a year. Even a small decrease in the number of false positives generated can save a firm millions of dollars while enhancing the overall efficacy of the program (meaning catching more bad guys).
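A back-of-envelope calculation shows the stakes. All figures below are hypothetical, chosen only to sit within the ranges above:

```python
# Hypothetical figures, consistent with the ranges cited above.
alerts_per_month = 200_000
false_positive_rate = 0.93   # within the 90-95% range reported by PwC
cost_per_review = 30         # dollars of analyst time per alert

wasted_per_year = alerts_per_month * 12 * false_positive_rate * cost_per_review
print(f"${wasted_per_year:,.0f} per year reviewing alerts that lead nowhere")
# -> $66,960,000; even a 10% cut in false positives returns ~$6.7M annually.
```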

Furthermore, there is well-documented research on the concept of alert fatigue. Delivering better cases will result in better investigations, creating a virtuous cycle for the investigative teams.

A select number of FIs have moved toward machine-driven segmentation. This is a significant step forward, but it is not quite intelligent segmentation.

Current segmentation practices are weak for a variety of reasons including:

  • use of traditional statistical methods such as K-Means Clustering
  • segmenting client and transaction data separately
  • slow segment uptake into real-time transaction monitoring.

Segmentation Through K-Means Clustering

Most institutions using machine-powered segmentation use K-Means clustering. K-Means is powerful, and we are definitely fans; in this context, however, it has some shortcomings. While the following is a little technically dense, it describes why a more nuanced approach is necessary.

  • K-Means relies on Euclidean distances – Euclidean distance is the metric of fit, and variance is used to measure cluster scatter. Evaluating fit across all pairs of observations requires n² distance computations, where n is the number of observations. For a dataset with 10 million customers, that translates to 100 trillion pairwise distances, or roughly 800TB of memory. Data scientists therefore often down-sample larger datasets to a few thousand observations so the data fits in memory and processes in a reasonable amount of time. Sampling can produce ill-fitting segments that fail to represent the dominant characteristics of the population.
  • K-Means requires k to be specified – The algorithm requires that the number of clusters be fixed up front, injecting human judgment into an otherwise unsupervised process. An inappropriate choice of k can lead to poor results. While there are heuristics to help identify k, such as the elbow and silhouette methods (see the sketch after this list), each has its drawbacks.
  • K-Means degrades as dimensionality grows – K-Means works well on low-dimensional datasets. Each additional dimension, or variable, adds to the cost of every distance computation, and distances become less discriminating in high-dimensional space. For this reason, data scientists must select a subset of variables for the clustering algorithm to be efficient with time and computing resources. Yet increasing the number of dimensions is exactly what helps in our context, a major drawback for a standard K-Means implementation.
  • Clusters are often expected to be of similar membership size – The K-Means clustering method is based on the idea of spherical clusters, so that the mean value converges toward the cluster’s center. The method expects clusters to be of similar size in membership, which often means the assignment to the nearest cluster center is taken as the correct assignment. This may not be the optimal assignment and can result in coarsely defined segments.
  • K-Means relies on convergence to a minimum – When minimizing distance to the centroids, K-Means can mistake a local minimum for a global one, leading to incorrect cluster assignments. Random restarts mitigate, but do not eliminate, the problem.
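The sketch below, using scikit-learn on synthetic data, shows several of these compromises in one place: down-sampling to make the computation tractable, specifying k up front, and restarting to dodge local minima. All sizes and values are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Stand-in for a customer feature matrix, scaled down from the 10 million
# customers discussed above so the example runs on a laptop.
population = rng.normal(size=(1_000_000, 20))

# Down-sampling is the usual compromise: pairwise-distance diagnostics on n
# observations need ~n² computations (10M customers means ~100 trillion
# distances, roughly 800TB), so only a sample is tractable.
sample = population[rng.choice(len(population), size=5_000, replace=False)]

# k must be specified up front; the elbow (inertia) and silhouette
# heuristics help choose it, but each has drawbacks.
for k in (3, 5, 8, 12):
    # n_init restarts mitigate, but do not eliminate, convergence to a
    # local rather than global minimum.
    model = KMeans(n_clusters=k, n_init=10, random_state=0)
    labels = model.fit_predict(sample)
    print(k, round(model.inertia_), silhouette_score(sample, labels))
```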

Segmenting Client and Transaction Data Separately

The limitations of establishing segments by subject matter expertise or by statistical methods such as K-Means clustering often mean that client data and transaction data must be segmented separately. Combining client data with multiple types of transaction data builds a high-dimensional dataset that is not well served by the techniques discussed above. By not incorporating the entire dataset, many patterns and anomalies can go undetected.
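For illustration (the schema below is hypothetical), combining the two views is conceptually a roll-up and a join:

```python
import pandas as pd

# Hypothetical schema: one table per customer profile, one per transaction.
customers = pd.DataFrame({
    "customer_id": [1, 2],
    "risk_rating": ["low", "high"],
    "industry": ["retail", "import_export"],
})
txns = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "amount": [100.0, 250.0, 9_500.0, 9_800.0, 9_900.0],
    "is_cross_border": [False, False, True, True, False],
})

# Roll transactions up to per-customer behavioral features, then join them
# onto the due-diligence profile: one wide table instead of two silos.
behavior = txns.groupby("customer_id").agg(
    txn_count=("amount", "size"),
    avg_amount=("amount", "mean"),
    cross_border_share=("is_cross_border", "mean"),
)
combined = customers.merge(behavior, on="customer_id")
# 'combined' is the high-dimensional view a segmentation model should see;
# segmenting the two tables separately would miss, for example, the run of
# near-limit cross-border transfers sitting under customer 2.
```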

Slow Uptake of New Segments into Real-Time Monitoring Process  

Even after segments have been defined, it may take months for internal model review teams to evaluate their correctness. The subjective nature of segment assignment often creates lengthy discussions about the best fit for customers or transactions that are new or not well understood. After that, it takes significant lead time for IT teams to update the appropriate TMSs with the new segments, rules, and thresholds. Given these difficulties, even monthly or quarterly segment updates can be a significant burden for FIs, and keeping up with the changing behavior of criminals becomes slow and difficult.

The Next Evolution: Intelligent Segmentation

Intelligent Segmentation is a proven method for greatly reducing the number of false positives generated and for detecting previously overlooked suspicious transactions. Properly implemented, it can help any FI improve operational efficiency, reduce risk, and focus employee hours on a smaller, higher-risk caseload.

Intelligent Segmentation combines Ayasdi’s Machine Intelligence Platform with a purpose-built application designed to deliver that intelligence broadly across the organization. Powering this experience is a combination of Topological Data Analysis (TDA) and dozens of machine learning algorithms that categorize customer data into segments/groups with similar characteristics, so that appropriate rules and thresholds can be determined to flag suspicious transactions.
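Ayasdi's platform is proprietary, and the sketch below makes no claim about its internals. For a flavor of how a Mapper-style TDA workflow is structured, though, here is a minimal example using the open-source kepler-mapper library on synthetic data:

```python
import numpy as np
import kmapper as km                      # pip install kmapper
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

X = np.random.rand(5_000, 40)             # stand-in for customer/txn features

mapper = km.KeplerMapper()
# Project the high-dimensional data through a "lens", cover the lens with
# overlapping intervals, and cluster within each interval. The resulting
# graph exposes local structure that one global clustering can miss.
lens = mapper.fit_transform(X, projection=PCA(n_components=2))
graph = mapper.map(lens, X,
                   cover=km.Cover(n_cubes=15, perc_overlap=0.3),
                   clusterer=DBSCAN(eps=0.5, min_samples=5))
mapper.visualize(graph, path_html="aml_segments.html")
```

The overlapping cover is the key property: small, unusual groups survive as their own nodes instead of being absorbed into the nearest large cluster.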

Intelligent Segmentation even works on unlabeled data, allowing financial institutions to attack know your customer’s customer (KYCC) problems.

Ayasdi’s approach to Intelligent Segmentation is a principled process designed to augment a subject matter expert’s knowledge. It follows our Discover, Predict, Justify, Act and Learn framework, detailed below (with an illustrative sketch after the list):

  • Discover – Ayasdi’s software uses unsupervised learning to automatically discover customer segments. These groups are then put through a tuning process with additional algorithms to identify optimal groupings. A subject matter expert can adjust the segmentation process per their specifications.
  • Predict – The next step uses supervised learning to predict future behaviors that allow one to create new rules and thresholds that accurately detect potential launderers.
  • Justify – A complete model, decision tree, audit trail, and documentation are provided to enable internal review teams to discuss and justify segments, with the ability to adjust segments as needed. These tools support compliance efforts and regulatory review.
  • Act – Ayasdi’s application provides clients with the ability to apply segments to scenario analysis to create appropriate rules and thresholds and to then seamlessly push this information to relevant transaction monitoring systems.
  • Learn – Finally, the segmentation model learns from new and updated data sources and produces results rapidly to keep up with the changing behavior of criminals.
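To make the Discover and Justify steps concrete, here is a minimal open-source analogue on synthetic data. K-Means and a decision tree stand in purely for illustration; they are not Ayasdi's algorithms:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.normal(size=(2_000, 6))           # stand-in behavioral features
feature_names = [f"f{i}" for i in range(6)]

# Discover: unsupervised segmentation (illustrative stand-in for the
# TDA-driven discovery described above).
segments = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)

# Justify: a shallow decision tree re-derives the segment boundaries as
# human-readable rules a model review board or regulator can audit.
tree = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, segments)
print(export_text(tree, feature_names=feature_names))
```

The printed tree is the kind of artifact an internal review team can actually read, challenge, and sign off on.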

The customer data used in Ayasdi’s process is ideally a superset of customer risk profile data and correspondent risk profile data collected via the FI’s customer due diligence process, plus historical transaction data from all banking services offered by the FI. This allows the TDA process to examine the high-dimensional dataset for both global and local patterns and anomalies.

Ayasdi’s ability to analyze high-dimensional data using TDA and ML combined sets it apart from other ML solutions. This additional dimensionality (generally orders of magnitude more) provides far greater resolution.

The result of Intelligent Segmentation is a set of unbiased, optimally selected segments, a supporting model, decision tree, recommendations for the watch list, and full documentation that can then be reviewed by the FI’s subject matter experts.

Inclusion of these optimally selected segments in scenario analysis yields superior rules and thresholds for use in transaction monitoring.

This, as noted elsewhere, can deliver spectacular results in reducing false positives and detecting previously overlooked suspicious transactions.  

A Demonstrative Use Case

A recent use case of Ayasdi’s Intelligent AML Solution involved a large, geographically diverse financial institution. By applying Ayasdi’s approach to a set of unlabeled SWIFT transaction data (more than 50 million observations), our solution reduced investigative volume by more than 20% while capturing all existing SARs plus an additional set of newly identified SARs. To put that in context, the client had a multi-year engagement with an expected improvement of 3-5%.

In terms of data dimensionality, Ayasdi had access to the same data features as the client, whose analytical and computing resources had limited segmentation analysis to a handful of key features. Ayasdi’s software discovered close to 1,000 features relevant to the dataset, and ultimately 120 features were selected for the final segmentation model – a 12x increase over what was previously used.

The initial project took twelve weeks and involved two Ayasdi data scientists, an Ayasdi project manager, and two domain experts from the bank. Once data access and data transforms were completed, the initial “run” took eight hours to go from raw data to segments and predictive segment models – accomplishing what the FI had previously completed in six months of manual segmentation effort.

Some Closing Thoughts

Getting better at detecting money laundering is in everyone’s best interest and we have the technology to tackle the problem.

The attack point to achieve these ends is intelligent segmentation. Intelligent segmentation is optimal for a number of reasons:

  • It has minimal impact on existing systems
  • It can deal with labeled and unlabeled data
  • It is highly scalable
  • It is transparent and justifiable

To find out how to leverage AI for your AML challenges reach out to us at sales@ayasdi.com to arrange for a demonstration.