Organizations have key business processes that they are constantly trying to re-engineer. These key business processes – loan approvals, college applications, mortgage underwriting, product and component testing, credit applications, medical reviews, employee hiring, environmental testing, requests for proposals, contract bidding, etc. – go through multiple steps, usually involving multiple people with different skill sets, with a business outcome at the end (accept/reject, bid/no bid, pass/fail, retest, reapply, etc.). And while these processes typically include “analytics” that report on how well the process worked (process effectiveness), those analytics only provide an “after the fact” view of what happened. Instead of using analytics to measure how well the process worked, how about using predictive and prescriptive analytics to actually direct the process at the beginning? Instead of analytics that tell you what happened, how about creating analytics at the front of the process that predict which steps in the process are necessary and in what order? Sometimes the most effective process is the one you don’t need to execute, or only have to execute in part.

High-Tech Manufacturing Testing Example

We had an engagement with a high-tech manufacturer to help them leverage analytics and the Internet of Things to optimize their 22-step product and component testing process. Not only was a significant amount of capital tied up in their in-process inventory, but the lengthy testing process also created concerns about excess and obsolete inventory in an industry where product changes happen constantly. The manufacturer had lots of data coming off the testing processes, but the data was being used after the fact to tell them where testing was or was not successful. Instead, our approach was to use that data to predict which tests needed to be run for which components coming from which of the suppliers’ manufacturing facilities. Rather than measuring what happened and identifying waste and inefficiencies after the fact, the manufacturer wanted to predict the likely quality of a component (given the extensive amount of data they could be capturing, but which was currently hitting the manufacturing and testing floors) and identify which tests were needed in that particular situation. Think dynamic, or even smart, testing.

We worked with the client to identify all the data coming out of the different testing processes. We discovered that nearly 90% of the potential data was just “hitting the floor” because the organization did not have a method for capturing and subsequently analyzing it (most of it was either very detailed log files, or comments and notes generated by the testers, engineers and technicians during the testing process). At a conceptual level, their processing looked like Figure 1: traditional dashboards and reports that answered the basic operational questions about how the process was working.
We used a technique called the “By Analysis” to brainstorm the “variables and metrics that might be better predictors of testing performance”. For example, when examining components with a high failure rate, we started the brainstorming process with the following question: “Show me the percentage of component failures by…” We asked the workshop participants to brainstorm the “by” variables and metrics that we might want to test. Here are some of the results:
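To make the mechanics of the “By Analysis” concrete, here is a minimal sketch in Python (pandas) of how candidate “by” variables can be checked against historical test results. The DataFrame and column names (supplier, machine_id, failed, etc.) are hypothetical placeholders, not the client’s actual schema.

```python
import pandas as pd

def failure_rate_by(df, by_columns):
    """Break out the component failure rate by each candidate 'by' variable."""
    results = {}
    for col in by_columns:
        # The mean of a 0/1 'failed' flag is the failure rate for each group.
        results[col] = df.groupby(col)["failed"].mean().sort_values(ascending=False)
    return results

# Hypothetical test-history data; the real data came from detailed logs and tester notes.
test_results = pd.DataFrame({
    "supplier":   ["A", "A", "B", "B", "C", "C"],
    "machine_id": ["M1", "M2", "M1", "M2", "M1", "M2"],
    "failed":     [1, 0, 0, 1, 0, 0],
})

for var, rates in failure_rate_by(test_results, ["supplier", "machine_id"]).items():
    print(f"\nPercentage of component failures by {var}:\n{rates * 100}")
```

The “by” variables that show the widest spread in failure rates are natural candidates to feed into the predictive score described below.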
The data science team started building a score that could be used to predict the quality and reliability of a component from a particular supplier, made on a particular manufacturing machine at a particular time of day/week/year, under particular weather conditions, tested by a particular technician, and so on. You can quickly see that the more detailed the data you have, the more accurate the score. We were able to create this “Component Quality & Reliability” score and use it prior to the testing process to tell us, at a reasonable level of risk, which tests we needed to conduct and in what order (see Figure 2). The result was a dramatic improvement in the speed and cost of testing, with a minor but manageable increase in component performance risk.

Baseball Analogy

I love sports, particularly baseball. Baseball has always been a game played with statistics, averages and probabilities, and there are lots of analytics best practices that we can learn from it. One of the ways analytics is used in baseball is to determine the likelihood of your opponent doing something. For example, the best baseball fielders understand each individual batter’s tendencies, propensities, averages, statistics and preferences (e.g., where the batter is likely to hit the ball, which pitches he prefers to hit) and use that information to position themselves on the field where the ball is most likely to be hit. Then the fielder will make in-game, pitch-by-pitch adjustments based upon:
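Returning to the manufacturing example, the sketch below shows one plausible way a “Component Quality & Reliability” score could be trained and then mapped to a tiered test plan. The features, thresholds and test names are illustrative assumptions, not the manufacturer’s actual model.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical feature set; the real score drew on supplier, machine, weather,
# technician, time-of-day and log-file data.
FEATURES = ["supplier_defect_rate", "machine_age_months",
            "ambient_humidity", "technician_experience_yrs"]

def train_quality_model(history):
    """Fit a classifier that predicts whether a component will pass all tests."""
    model = GradientBoostingClassifier()
    model.fit(history[FEATURES], history["passed_all_tests"])
    return model

def recommend_test_plan(model, component):
    """Map the predicted quality score to an (illustrative) tiered test plan."""
    score = model.predict_proba(component[FEATURES])[0, 1]  # P(passes all tests)
    if score >= 0.95:
        return ["visual_inspection"]                          # skip most tests
    if score >= 0.80:
        return ["visual_inspection", "electrical_test"]       # abbreviated plan
    return ["visual_inspection", "electrical_test",
            "thermal_cycle", "burn_in"]                       # full plan

if __name__ == "__main__":
    # Tiny synthetic history purely to make the sketch runnable end to end.
    history = pd.DataFrame({
        "supplier_defect_rate":      [0.01, 0.02, 0.08, 0.10, 0.01, 0.09, 0.03, 0.12],
        "machine_age_months":        [6, 12, 60, 72, 3, 66, 18, 80],
        "ambient_humidity":          [40, 45, 70, 75, 42, 68, 50, 80],
        "technician_experience_yrs": [10, 8, 1, 2, 12, 1, 6, 1],
        "passed_all_tests":          [1, 1, 0, 0, 1, 0, 1, 0],
    })
    model = train_quality_model(history)
    new_component = history.iloc[[0]]  # pretend this is an incoming component
    print(recommend_test_plan(model, new_component))
```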
The same approach – predicting ahead of time what is likely to happen – works for many business processes, such as underwriting a loan or mortgage. We would want to learn as much as possible about the players involved in the underwriting process – borrower, property, lenders, appraisers, underwriters – so that we could build a “score” that predicts which steps in the process are important and necessary, and which ones could be skipped without significantly increasing the risk. For example:
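As a toy illustration, the sketch below maps a few attributes of the borrower, property and appraiser to the underwriting steps an application would actually need. The field names, thresholds and step names are assumptions for illustration only, not a real underwriting policy.

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    borrower_credit_score: int
    loan_to_value: float       # requested loan amount / appraised property value
    appraiser_variance: float  # appraiser's historical appraisal-vs-sale-price variance

def underwriting_steps(app):
    """Choose which underwriting steps to run for this application."""
    steps = ["automated_credit_check"]          # always run
    if app.borrower_credit_score < 700:
        steps.append("manual_credit_review")
    if app.loan_to_value > 0.80:
        steps.append("full_onsite_appraisal")   # low LTV might get by with a desktop appraisal
    if app.appraiser_variance > 0.05:
        steps.append("second_appraisal_review")
    return steps

# A strong application skips most of the manual steps...
print(underwriting_steps(LoanApplication(760, 0.72, 0.02)))  # ['automated_credit_check']
# ...while a riskier one gets the full treatment.
print(underwriting_steps(LoanApplication(640, 0.92, 0.08)))
```

In practice the thresholds themselves would come from a predictive score, like the Component Quality & Reliability score above, trained on historical underwriting outcomes.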
With this information in hand, we’d be better prepared to know which types of credit applications need what level of scrutiny, and what level of risk we would be willing to accept for what level of underwriting return. Just like the best baseball shortstops and center fielders!

Summary

A few things to remember about using analytics to predict which process steps need to be executed and in what order, and which steps can be reduced or skipped given a reasonable increase in risk:
Using analytics to predict which components or applications need to be tested, versus using analytics to measure process effectiveness, can provide an order-of-magnitude improvement in your key business processes. In the long term, it’s the analytics emitted from your key business processes (yielding superior customer, product, operational and market insights) that will differentiate your business.

The post Do Your Big Data Analytics Measure Or Predict? appeared first on InFocus.
