The future of lead scoring is prescriptive

Gerben Oostra
7 min read · Oct 3, 2022


For over a decade, marketers have been using lead scoring to prioritize their engagement activities and drive revenue. Now it’s time to move on and use the right tool for the job. We’ll show why a prescriptive approach does a much better job.

Lead scores

The primary use of lead scores is to decide how to prioritize engagement efforts. Your resources are limited, time is precious, and you probably have a mix of high- and low-quality leads. With too many possible daily actions, you must prioritize where to spend your effort. If you know which leads are lost causes, you can ignore them and put your effort into those you can still convince.

Lead scores aim to increase conversions by changing engagement efforts.

The rise of predictive lead scores

Several methods can construct lead scores, including rule-based models and profile similarities. Rule-based models are built by domain experts and typically assign points to your leads. The profile similarity approach compares leads to an ideal customer profile: the more attributes match, the higher the ranking.

A more recent approach uses machine learning to generate predictive lead scores. The model is trained to predict which leads will become successful (for example, reach “Closed Won”). It considers all available historical and current information and typically provides better lead scores.
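To make this concrete, here is a minimal sketch of such a predictive lead score model using scikit-learn. The file names and feature columns are hypothetical stand-ins for whatever your CRM exports; the point is only that a classifier is trained on historical outcomes and then scores open leads.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Historical leads: characteristics of each lead plus the observed outcome.
leads = pd.read_csv("historical_leads.csv")  # hypothetical CRM export
features = ["company_size", "n_emails_opened", "n_website_visits", "days_since_first_contact"]
X, y = leads[features], leads["closed_won"]  # closed_won: 1 = "Closed Won", 0 = lost

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# The predictive lead score: the estimated probability that an open lead converts.
open_leads = pd.read_csv("open_leads.csv")  # hypothetical CRM export
open_leads["lead_score"] = model.predict_proba(open_leads[features])[:, 1]
```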

Now that you know the scores of all your leads, you can put more effort into the high-quality ones. These are the most likely to convert, so you might expect them to be the ones to focus on. However, these scores aren’t sufficient to optimize your engagement efforts.

Why predictive lead scores fail

Lead scores, even when created by advanced data-driven approaches, aren’t ideal for optimizing sales effort decisions. There are two misalignments between the information the score represents and the action you take based on it.

Firstly, data-driven approaches like machine learning assume the organization keeps behaving as it did while the data was collected. The model extracts patterns from this data, which can be arbitrarily complex. Based on those patterns, it can predict whether a specific new lead will succeed or fail. However, this assumes that the future follows the same patterns. Since we aim to improve (and thus change) how we engage with our leads, we want to replace those existing patterns with better ones. Predictive lead scores assume you maintain the status quo.

Lead score predictions assume future engagement stays the same.

We can now introduce the main underlying issue:

The biggest issue is that predictive lead scores answer the wrong question. A predictive lead score model provides you with “the likelihood someone will convert.” The model attributes that success rate to the combination of lead characteristics and the effort you put in. But it isn’t designed to predict what happens under different actions.

Let’s take an extreme example. Suppose you never chase foreign leads; the model could attribute the failures both to you not spending any effort and to the lead being foreign. The predicted conversions assume you still won’t engage with foreign leads in the future, and will therefore remain correct. However, to recommend a particular engagement, we need to know what would happen if you engaged differently. We need to know whether the lead failed because it was foreign, or because you historically didn’t chase such leads.
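A small simulation with entirely made-up numbers makes this visible. Below, being foreign has no causal effect at all, yet a predictive score learns to rank foreign leads low simply because they were never engaged with; this is a toy illustration, not any particular vendor’s model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000
foreign = rng.integers(0, 2, size=n)                              # lead characteristic
engaged = np.where(foreign == 1, 0, rng.integers(0, 2, size=n))   # never chase foreign leads

# Ground truth: only engagement matters; being foreign has no causal effect.
converted = rng.random(n) < (0.05 + 0.30 * engaged)

df = pd.DataFrame({"foreign": foreign, "converted": converted.astype(int)})
score_model = LogisticRegression().fit(df[["foreign"]], df["converted"])

new_leads = pd.DataFrame({"foreign": [0, 1]})
print(score_model.predict_proba(new_leads)[:, 1])  # roughly [0.20, 0.05]
# The foreign lead gets a low score because it was never engaged with,
# not because engaging with it wouldn't work.
```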

To analyze the effect of your effort, we need to separate correlation from causation. However, before we show how to do this, let’s tackle a self-fulfilling prophecy.

But the metrics show it works!

Your dashboards show that your lead scores work and have good predictive qualities. Unfortunately, this is probably due to a self-fulfilling prophecy.

Let us continue the previous example. Again, foreign leads will get a low score. As a result, you will engage even less with foreign leads, causing those deals to fail. It works the same way in the other direction: if you believe you can close high-scored deals, you’ll spend more effort on them and improve their success rate.

Thus, if the organization uses lead scores to optimize engagement efforts, the predicted lead scores will often align with the actual results. In that case, you may have good predictive power, but you won’t have improved your engagement efforts!

If you diligently (and blindly) follow lead scores, you may have good predictive power, but you won’t have improved your engagement efforts.

Was the effort spent wisely? Didn’t we over-engage with leads that would have succeeded anyway? Didn’t we under-engage with leads that could have been converted? Unfortunately, predictive lead scores don’t answer these questions. In short, predictive lead scores keep you in the status quo.

The prescriptive approach as the solution

To optimize engagement correctly, we need to answer the right question and separate causation from correlation.

Regular lead scores answer “how likely are we to close this deal?”, which implicitly includes “given the current engagement decisions.” But we want to answer, “how much does extra engagement help us succeed with this lead?” Or, similarly, “what would happen if I did or didn’t engage with this lead?”

The former frames the problem as predictive, while the latter is a prescriptive framing. A predictive approach aims to predict what would happen if the organization keeps working as it did. A prescriptive approach seeks recommendations on how to act, as shown in the following picture:

Prescriptive framing of engagement optimization

The Causal Inference domain provides techniques that separate causation from correlation and determine the actual effect of engagement efforts on your objective. It can then give meaningful recommendations on how to act.
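One simple technique from this domain is uplift modeling. The sketch below uses a basic two-model (“T-learner”) setup with scikit-learn: one model for leads you engaged with, one for leads you didn’t, and the prescriptive score is the difference between their predictions. Column and file names are hypothetical, and this only yields valid causal estimates when the features capture what drove the historical engagement decisions (no unobserved confounding); dedicated libraries such as EconML or CausalML offer more robust estimators.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

leads = pd.read_csv("historical_leads.csv")  # hypothetical export, incl. an 'engaged' flag
features = ["company_size", "n_emails_opened", "n_website_visits", "days_since_first_contact"]

treated = leads[leads["engaged"] == 1]   # leads you did engage with
control = leads[leads["engaged"] == 0]   # leads you left alone

m_treated = GradientBoostingClassifier().fit(treated[features], treated["closed_won"])
m_control = GradientBoostingClassifier().fit(control[features], control["closed_won"])

# Prescriptive score: the estimated *effect of engaging*, not the conversion probability.
open_leads = pd.read_csv("open_leads.csv")
open_leads["uplift"] = (m_treated.predict_proba(open_leads[features])[:, 1]
                        - m_control.predict_proba(open_leads[features])[:, 1])
print(open_leads.sort_values("uplift", ascending=False).head())
```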

Prerequisites for successful Causal Inference

To apply Causal Inference, you must set up the problem definition correctly. Only then is it possible to determine the effect. A correct problem definition has three prerequisites:

Firstly, you need to define and track the engagements you want to optimize. These are the actions the model will recommend. You can track outgoing calls, meeting invitations, newsletters, and any other action you take. These must be things you do, not things that happen to you. Thus, include “invited lead for a demo” but not “lead accepted meeting invite.”

To get actionable recommendations, you also need to define how specific the recommended actions should be. For example, is the choice simply “engage or not,” or do you want detailed instructions that distinguish between calling, emailing, and other activities?

Secondly, you need to determine which decisions you want to optimize. The decision definition consists of the subject of the decision combined with its moment. The subject defines the granularity of the decision: is your choice of action related to a lead, a company, or a contact? The moment specifies the frequency of decisions: do you make daily, weekly, or one-off decisions?

Thirdly, you need a clear definition of your objective. When would you say a decision was successful, and when not? The Causal Inference model needs information about successes and failures to determine the best decisions. Each historical decision thus needs to be assigned a success metric. The metric can be binary, like “associated lead closed within 30 days,” or numeric, like “total signed contract size within the next 30 days.” These metrics allow you to train the Causal Inference algorithm, evaluate the resulting policy offline, and track the performance of your prescriptive lead score model after deployment.

Three prerequisites to enable successful causal inference.
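Concretely, the three definitions can be captured in a single training table: one row per decision (here, hypothetically, per lead per week), the action taken at that moment, and the success metric observed afterwards. A minimal sketch, with made-up file names, columns, and date range:

```python
import pandas as pd

# Action definition: things *you* did, with a timestamp and the lead they relate to.
actions = pd.read_csv("actions.csv", parse_dates=["timestamp"])  # lead_id, action, timestamp
deals = pd.read_csv("deals.csv", parse_dates=["close_date"])     # lead_id, close_date

# Decision definition: one decision per lead per week.
lead_ids = actions["lead_id"].unique()
weeks = pd.date_range("2022-01-03", "2022-06-27", freq="W-MON")  # decision moments
decisions = (pd.MultiIndex.from_product([lead_ids, weeks], names=["lead_id", "week"])
               .to_frame(index=False))

# Which action (if any) was taken for each decision?
actions["week"] = actions["timestamp"].dt.to_period("W").dt.start_time
taken = actions.groupby(["lead_id", "week"]).size().rename("n_actions").reset_index()
decisions = decisions.merge(taken, on=["lead_id", "week"], how="left")
decisions["engaged"] = decisions["n_actions"].fillna(0) > 0

# Objective definition: did the lead close within 30 days of the decision moment?
decisions = decisions.merge(deals[["lead_id", "close_date"]], on="lead_id", how="left")
decisions["success"] = ((decisions["close_date"] >= decisions["week"]) &
                        (decisions["close_date"] <= decisions["week"] + pd.Timedelta(days=30)))
```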

With these three definitions and sufficient historical data, Causal Inference techniques can recommend the best action for your objective. If necessary, you can even include business constraints, like limiting the total number of activities or weighing the effect of an action against its cost.
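As an illustration of such constraints, here is a tiny, hypothetical continuation of the uplift sketch above: engage only when the expected gain exceeds an assumed action cost, and cap the number of daily engagements. All numbers and columns are made up.

```python
import pandas as pd

# Continuing the uplift sketch above, with a few made-up rows.
open_leads = pd.DataFrame({
    "lead_id": [101, 102, 103, 104],
    "uplift": [0.20, 0.02, 0.35, -0.05],              # estimated effect of engaging
    "expected_deal_size": [1_000, 5_000, 200, 3_000],
})

ACTION_COST = 25.0   # assumed cost of one engagement
DAILY_BUDGET = 2     # assumed maximum number of engagements per day

# Engage only where the expected gain outweighs the cost, capped by the daily budget.
open_leads["expected_gain"] = open_leads["uplift"] * open_leads["expected_deal_size"]
candidates = open_leads[open_leads["expected_gain"] > ACTION_COST]
print(candidates.nlargest(DAILY_BUDGET, "expected_gain"))
```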

But I always engage with my leads!

Before we conclude, you might think prescribing actions is useless because you need to act to progress a lead: without any engagement, opportunities will never close.

However, there are many moments during a lead’s lifetime where you decide whether or not to take additional action. You don’t engage with every contact every day in every possible manner. Some you contacted yesterday, so emailing them again today might actually hurt the process. Some are lost causes, so why would you spend effort on them? For others, engaging is unnecessary, as they are already on the right track.

Given everything we currently know about a lead, and taking into account all actions we’ve previously taken, the prescriptive policy’s goal is to determine which additional action (if any) is beneficial.

Conclusion

Do not predict which leads you will win with your current process. Instead, determine which actions improve your conversion rate. This change from predictive to prescriptive modeling will enhance your bottom line!

If you need any help collecting all your data, adding clear insights, and even leveraging it in a prescriptive model based on causal inference, you might find Dealtale’s Revenue Science platform helpful. It works out of the box, requires no code, and is easy to implement. Causal AI is ready and waiting for you; schedule a chat with one of our experts to learn more.

Don’t predict the status quo; use causality to improve.
