This is another topic that has taken me a long time to write, but several conversations with Peter Burris (@plburris) from Wikibon finally helped me to pull this together. Thanks Peter! I’ve struggled to understand and define the Intellectual Capital (IC) components – or dimensions – of the new, Big Data organization; that is, what are the new Big Data assets that an organization needs to collect, enrich and apply to drive business differentiation and competitive advantage? These assets form the basis of the modern “collaborative value creation” process and are instrumental in helping organizations optimize key business processes, uncover new monetization opportunities and create a more compelling, more profitable customer engagement.

To start this discussion, let’s first establish an understanding of the economic value of data as outlined in the article “Determining the Economic Value of Data”:

“Data is an unusual currency. Most currencies exhibit a one-to-one transactional relationship. For example, the quantifiable value of a dollar is considered to be finite – it can only be used to buy one item or service at a time, or a person can only do one paid job at a time. But measuring the value of data is not constrained by transactional limitations. In fact, data currency exhibits a network effect, where data can be used at the same time across multiple use cases thereby increasing its value to the organization. This makes data a powerful currency in which to invest.”

This network effect phenomenon – the more you share and use data, the more valuable it becomes – holds for analytics as well: the more you share, use and provide feedback into the analytics, the more valuable the analytics become.

Modern Sources of Intellectual Capital

There are three sources of intellectual capital produced by the modern organization.
Data, analytics and use cases are the three new asset – or intellectual capital (IC) – types that form the foundation of the modern Big Data organization. To see how an organization can align these three sources of Big Data intellectual capital, let’s go to my favorite guinea pig, er, uh, case study – Chipotle – and discuss how Chipotle could integrate them into a “Big Data value creation framework.”

Step 1: Begin With An End In Mind

In order to drive focus and prioritization across the organization (see the “Big Data Success: Prioritize Important Over Urgent” blog), let’s start by identifying the organization’s key business initiatives. For our Chipotle example, let’s focus on Chipotle’s key business initiative of “Increase Same Store Sales” (especially in light of Chipotle’s recent E. coli problems). Performing some rudimentary financial analysis based upon Chipotle’s 2012 annual report (the annual report that I use as the basis for my University of San Francisco School of Management class), we can calculate that a 7% increase in same store sales is worth about $191M annually for Chipotle (see Table 1).

Let’s build upon this $191M opportunity by brainstorming the different use cases (i.e., clusters of decisions) that the key business stakeholders need to make in support of the “Increase Same Store Sales” business initiative (see the “Updated Big Data Strategy Document” blog for the process of identifying the supporting use cases). The use cases that could support Chipotle’s “Increase Same Store Sales” initiative include:
Let’s now consider the potential value of each of those use cases vis-à-vis the value of the Chipotle “Increase Same Store Sales” initiative, using the techniques discussed in the “Determining the Economic Value of Data” blog. The brainstormed financial impact of each use case is displayed in Table 2.

Note: We have outlined the use case identification, brainstorming and valuation estimation process in the classroom reference materials (sorry, some assembly required).
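To make the Step 1 arithmetic concrete, here is a minimal sketch of the valuation exercise, assuming roughly $2.73B in total revenue from Chipotle’s 2012 annual report. The use case names and attribution shares are illustrative placeholders, not the actual Table 2 figures.

```python
# Rough sketch of the Step 1 valuation arithmetic. The ~$2.73B revenue figure
# comes from Chipotle's 2012 annual report; the use case names and attribution
# shares below are illustrative placeholders, not the actual Table 2 values.

ANNUAL_REVENUE = 2.73e9          # Chipotle FY2012 total revenue (approx.)
SAME_STORE_SALES_LIFT = 0.07     # targeted 7% increase in same store sales

initiative_value = ANNUAL_REVENUE * SAME_STORE_SALES_LIFT
print(f"'Increase Same Store Sales' initiative value: ${initiative_value / 1e6:.0f}M")
# -> roughly $191M per year

# Hypothetical use cases with a rough share of the initiative value each could
# influence (shares can overlap because the use cases are not independent).
use_case_share = {
    "Increase customer visit frequency": 0.40,
    "Improve promotion effectiveness":   0.25,
    "Improve new product introductions": 0.20,
    "Local/store-level marketing":       0.15,
}

for use_case, share in use_case_share.items():
    print(f"{use_case:38s} ~${initiative_value * share / 1e6:.0f}M")
```

The point of the exercise is not precision; it is a rough ordering of which use cases deserve the organization’s attention (and data and analytics investments) first.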
Step 2: Build Analytic Profiles

The next step in the Big Data value creation framework is to build analytic profiles around the organization’s key business entities. Analytic Profiles are structures (models) that standardize the collection and application of the analytic insights for the key business entities. Analytic Profiles force an organizational discipline in the capture and application of the organization’s analytic efforts, minimizing the risk of creating “orphaned analytics” – one-off analytics built to address a specific need but lacking an overarching model that ensures the resulting analytics can be captured and re-used across multiple use cases.

Key business entities are the physical entities (e.g., people, products, things) around which we seek to uncover or quantify analytic insights such as behaviors, propensities, tendencies, affinities, usage trends and patterns, interests, passions, affiliations and associations. For our Chipotle example, we want to create analytic profiles for the following business entities: customers, stores and managers.
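To picture what an Analytic Profile might look like as a concrete data structure, here is a minimal sketch for an individual customer. The class name and metrics are hypothetical illustrations, not the actual fields of the Chipotle Customer Analytic Profile shown in Figure 2.

```python
from dataclasses import dataclass, field

# Minimal sketch (Python 3.10+) of an Analytic Profile as a standardized
# structure for one business entity (here, an individual customer). The
# specific metrics are illustrative placeholders, not Chipotle's actual schema.

@dataclass
class CustomerAnalyticProfile:
    customer_id: str
    # Behaviors and usage patterns (descriptive)
    visit_frequency_per_month: float = 0.0
    avg_ticket_size: float = 0.0
    favorite_menu_items: list[str] = field(default_factory=list)
    # Propensities and tendencies (predictive scores, typically 0-1)
    propensity_to_respond_to_promotion: float = 0.0
    likelihood_to_churn: float = 0.0
    # Prescriptive outputs (recommendations)
    recommended_next_offer: str | None = None

# The same structure is populated by data science models and then re-used
# across use cases, rather than rebuilding one-off "orphaned analytics".
profile = CustomerAnalyticProfile(
    customer_id="C-1024",
    visit_frequency_per_month=3.5,
    avg_ticket_size=11.40,
    propensity_to_respond_to_promotion=0.72,
)
```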
The insights that are captured in the Analytic Profiles are created by applying data science (predictive and prescriptive analytics) against the organization’s data sources to uncover behaviors, propensities, tendencies, affinities, usage trends and patterns, interests, passions, affiliations and associations at the level of the individual entity (individual humans, individual products, individual devices). See Figure 2 for an example of a Chipotle Customer Analytic Profile.

Next, we map the Analytic Profiles to the different Business Use Cases to determine which analytic profiles are most applicable to which use cases. The analytic profiles (Customers, Stores, Managers) created for an initial use case can be used – and subsequently strengthened or improved – as those analytic profiles are applied across multiple Business Use Cases (see Figure 3).
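One simple way to capture the Figure 3 mapping exercise is a table of which profiles each use case requires. The sketch below uses hypothetical use case names and shows how to spot the profiles that will be re-used (and therefore strengthened) the most.

```python
from collections import Counter

# Sketch of a Figure 3 style mapping of Analytic Profiles to Business Use
# Cases. The use case names are hypothetical placeholders.
use_case_profiles = {
    "Increase customer visit frequency": {"Customer", "Store"},
    "Improve promotion effectiveness":   {"Customer", "Store", "Manager"},
    "Improve new product introductions": {"Customer", "Store"},
}

# Profiles that support the most use cases get strengthened the most, since
# each additional use case feeds new insights back into the shared profile.
profile_reuse = Counter(
    profile for profiles in use_case_profiles.values() for profile in profiles
)
for profile, count in profile_reuse.most_common():
    print(f"{profile:10s} supports {count} use case(s)")
```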
Step 3: Identify and Prioritize Data Sources

The final step in the Big Data value creation framework is to identify, prioritize and aggregate the supporting data in a data lake. Since not all data is of equal value, a prioritization process needs to be used to ensure that the most important data sources – where importance is defined by how strongly a data source supports the top priority use cases – are loaded into the data lake first. Figure 4 shows the results of an envisioning process to ascertain which data sources are most valuable to which Chipotle use cases.

Finally, we leverage the use case value estimation work from Table 2 to determine the potential value of each data source in support of Chipotle’s “Increase Same Store Sales” business initiative (see Figure 5). As we can see in Figure 5, we can create a rough-order value estimate for each data source in support of each individual use case. We outlined the estimation process in the “Determining the Economic Value of Data” blog.
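Here is a rough sketch of a Figure 5 style estimate, under the assumption (per the “Determining the Economic Value of Data” approach) that a data source’s value accumulates across every use case it supports – the network effect again. The use case values and attribution weights are illustrative placeholders, not the actual Table 2 or Figure 5 numbers.

```python
# Rough-order sketch: each data source's value is accumulated across every use
# case it supports. Use case values and attribution weights are illustrative
# placeholders, not the actual Table 2 / Figure 5 numbers.

use_case_value = {                       # dollars, from the Step 1 exercise
    "Increase customer visit frequency": 76e6,
    "Improve promotion effectiveness":   48e6,
    "Improve new product introductions": 38e6,
}

# Fraction of each use case's value attributed to each supporting data source.
data_source_weight = {
    "POS transactions":     {"Increase customer visit frequency": 0.35,
                             "Improve promotion effectiveness":   0.30,
                             "Improve new product introductions": 0.25},
    "Loyalty/mobile app":   {"Increase customer visit frequency": 0.30,
                             "Improve promotion effectiveness":   0.25},
    "Local events/weather": {"Improve promotion effectiveness":   0.10},
}

for source, weights in data_source_weight.items():
    value = sum(use_case_value[uc] * w for uc, w in weights.items())
    print(f"{source:22s} ~${value / 1e6:.1f}M across {len(weights)} use case(s)")
```

Because a single data source (e.g., POS transactions) can feed several use cases at once, its accumulated value quickly justifies loading it into the data lake ahead of lower-impact sources.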
Summary

Finally, let’s pull all three of these Big Data IC assets – data, analytic profiles and business use cases – together in a single graphic that highlights the critical role that data science plays in coupling data with analytic algorithms to create the analytic profiles that support the organization’s top priority use cases (see Figure 6). While this process may not be perfect, it does force the business and IT stakeholders to collaborate on where and how data and analytics can create business value.

In the end, we want to align the data, analytic profiles and business use cases in a manner that supports the organization’s key business initiatives. Hopefully, this process of aligning the three Big Data intellectual capital components is easier than actually trying to align the colored cubes of a Rubik’s Cube (and it’s no fair peeling off the colored stickers and pasting them onto the cubes in the right order!).
