I’m a big believer in using data-driven performance models to drive both business and personal change. I also believe you need to take care in choosing the metrics you use in your model. A poor choice may give you a distorted view of what is happening, and may even drive outcomes contrary to what you really want.
This past month, while watching Ken Burns and Lynn Novick’s documentary The Vietnam War on PBS, I was reminded of the cautionary tale contained in the US military’s use of data analytics to quantify success during the Vietnam War.
The effort put into data collection and number crunching, as well as the early use of computer modelling, was impressive. I just think they used the wrong metric. In past military campaigns, success was often measured in terms of ground taken. As this strange war was more akin to a civil war, with no real ground changing hands, they needed a new measurement. In terms of analytics, they focused on kills as a data point and kill ratio as a metric (enemy deaths relative to their own military’s deaths) – a metric first used towards the end of the Korean War in the previous decade.
Using this kill ratio metric resulted in various negative behaviours and outcomes:
- The military became overly focused on killing people.
- Civilians were caught up in the kill numbers. The enemy, the Viet Cong, weren’t easily recognizable in the local population. The drive for kill numbers led to a justification of killing civilians and calling them the enemy after the fact.
- Killing civilians swelled the ranks of the Viet Cong, as an emotive response of relatives and neighbours who experienced these personal losses.
- The kill ratio metric didn’t consider the American public’s reaction to the increasing number of American military deaths. On its own, the metric assumed there would be American deaths without putting a tolerance limit on how many. As the Vietnam War dragged on, this growing number of American deaths fed into the anger, anguish and disillusionment about the war at home.
We don’t even have to go back to the ‘60s to see how the wrong incentive model can promote negative behaviours and outcomes. It’s a widely held opinion that banking performance incentive bonuses helped contribute to the 2008 financial crisis. These banking incentives were based on short-term gains rather than the long-term quality of loan assets, and helped to promote the riskier decision making that fueled the lending bubble.
I’m not going to make suggestions on what should have been considered as metrics instead in either of these cases. It’s just too large a quagmire of emotions and opinions for those affected most. I would rather view them as what I call cautionary tales of what not to do. Instead, I’d like to talk about what you should consider when planning and deciding on your own metrics for data-driven change.
In deciding on performance metrics to use, I think you need to keep a few key things in mind:
- Decide on your top-level business objectives.
- Identify and understand your goals in terms of what you want to achieve or change.
- Find a balance between the metric being driven by the data that is already available to you and seeking out new data points to collect. I think you need to do both.
- Decide on key metrics that represent the above.
For me, the secret sauce is to identify at least two goals that look at your objectives from contrasting, or just different, points of view. Then choose at least two metrics that measure contrasting things or that look at the data from different angles. This gives a multi-dimensional view of what you are seeing and will result in better insights.
As an example, let’s look at the SaaS business model. If the top-level objective is to be profitable, and the business goals include reducing churn, a popular and necessary metric to look at is Customer Lifetime Value. We want to make sure that the revenue we receive from an ongoing customer exceeds the cost to acquire them and the operational costs to keep them. A negative value can represent churn – a customer leaving before we recover these costs. But another reason we might be experiencing churn is poor product/market fit for the type of customers we are acquiring. So we might want to consider a metric that tells us something about the market segment of customers in relation to when they end their subscriptions, or one that measures customers’ utilization rates in relation to when they end their subscriptions. Knowing who is staying on the platform and how they are using it, and who is not, can inform decisions about which customers to target as well as how we on-board and educate them on using the platform.
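To make the two contrasting SaaS metrics concrete, here is a minimal sketch in Python. The formulas and field names are illustrative assumptions on my part (a simple churn-based lifetime value model, and a hypothetical `segment` label on each customer record), not a prescribed implementation:

```python
def lifetime_value(monthly_revenue, gross_margin, monthly_churn_rate):
    """Simple churn-based LTV model (an assumption for illustration):
    expected customer lifetime is 1 / churn rate, so
    LTV = monthly revenue * gross margin / monthly churn rate."""
    return monthly_revenue * gross_margin / monthly_churn_rate


def net_customer_value(ltv, acquisition_cost):
    """Positive means the customer more than repays what it cost to acquire them."""
    return ltv - acquisition_cost


def churn_rate_by_segment(customers):
    """The contrasting angle: churn broken down by market segment, to surface
    product/market-fit problems that an aggregate LTV figure can hide.
    `customers` is a list of dicts with 'segment' and 'churned' (bool) keys."""
    totals, churned = {}, {}
    for c in customers:
        seg = c["segment"]
        totals[seg] = totals.get(seg, 0) + 1
        churned[seg] = churned.get(seg, 0) + (1 if c["churned"] else 0)
    return {seg: churned[seg] / totals[seg] for seg in totals}
```

For example, a customer paying $100 per month at 80% gross margin with 5% monthly churn yields an estimated LTV of $1,600 under this model; against a $1,200 acquisition cost, the net value is positive. Running `churn_rate_by_segment` alongside it shows whether that churn is spread evenly or concentrated in one type of customer.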
Using a data-driven approach is key to achieving our business objectives. We just need to take the time to make sure we understand our goals and which metrics can help drive them. There are plenty of real-world examples where the objectives weren’t achieved and unintended side effects emerged instead. So we have to make sure our data-driven approach is robust and will move us in the right direction.