Please be aware that this is a long chapter; you may want to save the link for later reading.
Chapter 14: Transformational Measurement Action Plans
This chapter is composed of Transformational Measurement Action Plans (TMAPs) of emergent and transformational measures that are intended to stimulate your thinking, challenge your mental models, and, in some cases, provide examples and guidance for measurement. Review those that cover problematic areas in your organization, use them as ongoing references, and add your own emergent and transformational measures to the list.
1. Customer Experience
Meeting the high standards of doing business successfully today requires more than the standard customer satisfaction measurements. Customer experiences include not only the ‘‘core transaction’’ (the purchase) itself, but also everything that precedes and follows it—all of the points of interaction between the organization and its customers. For example, for an airline ‘‘transaction,’’ there may be as many as twenty individual experiences involved in making reservations, check-in, travel, and arrival that cannot be reduced to a single satisfaction rating.
Organizations are beginning to realize that providing a product or service is only the ‘‘tip of the iceberg’’ in terms of the customer’s total experience of the organization. A Bain and Company study reveals just how badly companies tend to misread their customers’ perceptions: Of 362 firms surveyed, 80 percent believed they delivered a ‘‘superior experience’’ to their customers, but customers rated only 8 percent of the companies as delivering a truly superior experience.1
Transactions are easy to measure, but experiences aren’t. Most customer satisfaction measures average across customer experiences, but customers don’t see the average; they see every ‘‘defect’’ as an issue.
When used effectively, Customer Experience measurement can be a very powerful tool for transforming the way that organizations view themselves and their customers. For each incident made visible through a customer comment or complaint there might literally be hundreds of incidents that are never reported.
Changing from measuring and managing ‘‘customer service’’ (what you give the customer) to measuring and managing Customer Experience (what the customer gets) represents a major paradigm shift. This approach is transformationally different from what is traditionally measured and managed, and it tends to make organizations and their employees much more aware of the total cross-functional customer experience. Because it focuses on well-defined interactions, Customer Experience measurement is also more actionable than customer satisfaction measurement. A great example of this is at the Inn at Little Washington in Virginia, where restaurant staff assign a ‘‘mood rating’’ (from 1 to 10) to each customer party when they enter the establishment and throughout the meal. The goal is to raise the mood rating, with the standard that no one should leave the restaurant with a mood rating below a 9.2 Although subjective, this is an innovative tool that helps the staff keep focused on Customer Experience and obtain feedback on how well they are orchestrating the experience.
One of the best ways to measure Customer Experience in a less centralized environment is through the use of ‘‘event-driven surveys’’—surveys that are automatically deployed when a customer has completed a particular interaction, such as a reservation, a purchase, a service inquiry, or a refund.3 However, one of the biggest challenges with measuring complex customer interactions is how to do so relatively nonintrusively. Retail stores and banks use trained observers, video surveillance, and mystery shoppers to monitor Customer Experience. With some creativity and a good sampling strategy (of both experiences and respondents), organizations can still collect considerable data without offending customers or creating data overload.
2. Customer Engagement
One of the oldest measures to have dominated the business measurement landscape is ‘‘customer satisfaction.’’ However, increasingly, customer satisfaction has become acknowledged as a measure that tends to focus on and reinforce low expectations (the extent to which customers’ minimal expectations have been met). Furthermore, it tends to be based on the myth of the entirely ‘‘rational customer’’ who makes rational decisions. However, research indicates that buying decisions are also emotional. Organizations need to tap something deeper than mere ‘‘satisfaction.’’
New measures are beginning to acknowledge the customer in a more holistic way. That’s why the concept of Customer Engagement is a potentially transformational one. An ‘‘engaged’’ customer is very different from a merely ‘‘satisfied’’ one.
The Gallup Organization has developed and is distributing a Customer Engagement measurement instrument4 that endeavors to measure the strength of the emotional bond between a customer and a company or brand. The Gallup survey shown below includes eight statements to which customers are asked to respond on a 5-point scale (from ‘‘Strongly Agree’’ to ‘‘Strongly Disagree’’). The blanks should be filled in with the company or brand name being assessed.
- [ ______ ] is a name I can always trust.
- [ ______ ] always delivers on what it promises.
- [ ______ ] always treats me fairly.
- If a problem arises, I can always count on [ ______ ] to reach a fair and satisfactory resolution.
- I feel proud to be a [ ______ ] customer.
- [ ______ ] always treats me with respect.
- [ ______ ] is the perfect company for people like me.
- I can’t imagine a world without [ ______ ].
A key word in the survey is ‘‘always,’’ which appears in five of the eight statements. Gallup’s research has shown that trust, respect, confidence, fair treatment, and the other practices have to be present all the time. According to the underlying theory, every time there is an interaction with a customer, the company is either building Engagement or eroding it. Gallup researchers have found, in the numerous industries they studied, that the proportion of ‘‘fully engaged’’ customers has ranged from around 6 percent to as high as 40 percent. In contrast, they found that 80 percent of customers reported ‘‘satisfaction’’ in these same industries.
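As a rough illustration of how such survey data can be summarized, the sketch below classifies respondents by their top-box answers. Gallup’s actual scoring method is proprietary; the ‘‘all Strongly Agree’’ rule and the sample data here are assumptions for illustration only.

```python
# Illustrative sketch only: Gallup's actual scoring rules are proprietary.
# Here we assume a respondent counts as "fully engaged" only when they answer
# "Strongly Agree" (5) to every one of the eight statements -- an assumption,
# not Gallup's published method.

def percent_fully_engaged(responses):
    """responses: list of 8-item answer lists, each item scored 1..5."""
    engaged = sum(1 for r in responses if all(score == 5 for score in r))
    return 100.0 * engaged / len(responses)

customers = [
    [5, 5, 5, 5, 5, 5, 5, 5],  # fully engaged (top-box on every item)
    [5, 4, 5, 5, 4, 5, 4, 5],  # positive, but not fully engaged
    [3, 3, 2, 3, 3, 4, 3, 2],  # neutral or disengaged
    [5, 5, 5, 5, 5, 5, 5, 5],
]
print(percent_fully_engaged(customers))  # 50.0
```

The strict all-items rule mirrors the role of ‘‘always’’ in the statements: one weak answer is enough to drop a customer out of the fully engaged group.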
3. Customer Delight
What is a ‘‘satisfied customer’’? How does customer satisfaction relate to other customer measures (such as customer retention, loyalty, and profitability)? The research says, ‘‘Not much!’’ Furthermore, what do different satisfaction scores mean? For example, what is the difference between a 3.7 and a 4.1 on a five-point scale? Let’s say that your organization finds that 80 percent of its customers are satisfied. What does that really mean? The reality is that 20 percent of your customers are dissatisfied, and you probably don’t know which ones or why. Even more serious, every one of those 80 percent merely satisfied customers might be a candidate for attrition.
Recent thinking has led many advocates of traditional customer satisfaction measurement to realize that its value is quite limited and that it might even distract organizations from focusing on what is most important: ‘‘delighting’’ (not just minimally satisfying) their most important customers. There is now a very strong body of research indicating that, if you implement a customer satisfaction survey with a five-point response format (very dissatisfied to very satisfied), customers who report being ‘‘very satisfied’’ are much different from merely ‘‘satisfied’’ customers. They aren’t just more satisfied, they are actually delighted. So, identifying the customers who are ‘‘very satisfied’’ or ‘‘delighted,’’ and discovering why (through interviews or focus groups with a sample of the segment), can provide transformational insight. Interviewing ‘‘very dissatisfied’’ customers can also be extremely valuable, if your organization has the stomach for it!
A Customer Delight Index has been developed by Dr. Darrel Edwards using a 5-point Customer Delight scale (Failure, Unsatisfactory, Satisfactory, Excellent, and Delightful).5 Only the final rating point is considered to represent true ‘‘delight.’’ According to Edwards, ‘‘When you Delight your customer, you create a strong emotional response that commits the customer to the product, brand or manufacturer. Commitment leads to loyalty.’’ Clearly this scale has raised the bar from the traditional satisfaction survey, with three positive choices, rather than just two. But what is more significant is that only one of them really counts.
The Net Promoter Score6 is a rating of Customer Delight by another name. Customer loyalty guru Frederick Reichheld has introduced a similar concept, but calls these delighted customers ‘‘Promoters’’—customers who are willing to recommend your product or service to friends (defined as those rating your organization a ‘‘9’’ or ‘‘10’’ on an 11-point, 0-to-10 scale). But that’s not all: there are also very likely to be ‘‘Detractors’’—customers who are unlikely to recommend your organization (defined as those rating your organization ‘‘0’’ through ‘‘6’’ on the scale). ‘‘Passive’’ customers are those who rate your organization ‘‘7’’ or ‘‘8.’’ The Net Promoter Score (NPS) is calculated by subtracting the percentage of Detractors from the percentage of Promoters. What is particularly intriguing about the Net Promoter measure (and there is a substantial body of recent research to back it up) is that it can be obtained with one simple question: ‘‘How likely is it that you would recommend this organization to a friend or colleague?’’
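The NPS arithmetic described above is simple enough to sketch directly; the ratings data below are invented for illustration:

```python
def net_promoter_score(ratings):
    """ratings: 0-10 answers to 'How likely are you to recommend us?'"""
    promoters = sum(1 for r in ratings if r >= 9)   # 9s and 10s
    detractors = sum(1 for r in ratings if r <= 6)  # 0 through 6
    # NPS = % promoters - % detractors.
    # Passives (7s and 8s) count only in the total, diluting both percentages.
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 9, 8, 7, 6, 5, 10, 3, 9]
print(net_promoter_score(ratings))  # 5 promoters, 3 detractors -> 20.0
```

Note that NPS can range from -100 (all Detractors) to +100 (all Promoters), so a score of 20 on this sample means Promoters outnumber Detractors by 20 percentage points.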
4. Customer Loyalty
The importance of customer retention started being acknowledged when it became clear that even a small increase in retention rates can raise profits considerably. Many years ago, organizations developed the idea of ‘‘loyalty programs’’ as a way to increase customer retention. Airlines, hotels, supermarkets, and credit card companies invested aggressively in such plans. As we now know, these programs did not create truly loyal customers. Customers who buy according to incentives tend to follow the incentives and are attracted to the next good deal, no matter who is offering it. In addition, many customers who appear loyal on the surface are only staying because of the unavailability of acceptable substitutes. Clearly a new approach for understanding ‘‘loyal customer behavior’’ is needed.
A new transformational concept of Customer Loyalty has evolved over recent years, and is progressively being better understood and articulated. Much of this understanding has come from efforts at measuring it. When one starts measuring a construct, its meaning and significance become much clearer—and so it is with Customer Loyalty.
What caused particular interest in the loyalty construct is how consistently more profitable truly loyal customers tend to be because acquisition costs have already been amortized, there is less emphasis on discounting, and loyal customers typically provide recommendations, referrals, and other sources of indirect profit. Marketing expert George Day has said that ‘‘real profitability comes from keeping valuable customers by building deep loyalty that is rooted in mutual trust, bilateral commitments, and intense communication.’’7
Loyalty is really about the depth, not just the length of the relationship. Measuring the drivers of these relationships is the key to being able to create them. There is no universal agreement about what the drivers of Customer Loyalty are. However, a number of other transformational measures included in this chapter can be used to predict loyalty, such as Customer Delight, Customer Experience, Voice of the Customer, and Customer Relationship. And, of course, Customer Profitability and Customer Lifetime Value can determine the potential value of a particular customer’s loyalty.
5. Customer Relationship
Most customers have been traditionally viewed as short-term ‘‘transactions’’ rather than as long-term ‘‘relationships.’’ Transactions are rather easy to measure and manage, and the traditional assumption has been that positive transactions could be simply aggregated into positive relationships. However, the whole concept of a customer relationship is changing with the increase in long-term services relationships, and with more organizations wanting to capture larger shares of customers’ spending and their ‘‘lifetime value.’’
As Dow and Cook say, ‘‘The most fertile ground to grow your business lies in existing customers.’’8 Furthermore, as the cost of acquiring profitable customers goes up, the value of retaining and expanding existing customer relationships increases. Customer relationship is very much in vogue today as Customer Relationship Management (CRM) gains traction. The change from ‘‘transactional thinking’’ to ‘‘relationship thinking’’ is clearly potentially transformational. But realizing the transformation will require much more progress in measuring and managing relationships.9
Here is a sampling of the indicators that can be used to measure and manage the health and value of a customer relationship:
- Revenue: The ongoing flow of revenue through increased sales, cross-sales, and up-sales is an indicator of a healthy relationship.
- Profits: A healthy relationship is a profitable relationship, because the customer appreciates the value being received, and doesn’t nickel and dime the other partner.
- Retention: The length of the relationship is an indicator of the quality of the relationship.
- Loyalty: The loyalty of the customer can be measured through longevity, frequency of purchases, and expressed loyalty.
- Communication: The frequency and positive nature of two-way communication between relationship partners is key to customer relationships. How positive is the communication?
- Commitment: Unfailing commitment to the relationship, even despite negative experiences, is a good indicator of relationship strength.
- Trust: Nothing better indicates the depth and quality of a relationship than trust, which can be self-reported or demonstrated by trusting behavior, such as sharing confidential information.
- Input: Willingness to make proactive suggestions and contribute to new product development, refinement, and trial can be of great value.
- Referrals: The referring of others to the relationship partner is a very strong relationship indicator.
- Community: Many companies—like eBay and to a lesser extent Starbucks—are building more than relationships; they are building ‘‘communities’’ of customers.
Clearly, there are multiple indicators that can serve as barometers of relationship quality. These indicators can be used in the form of a checklist (checking off the frequency with which each indicator occurs), an inventory (listing examples of behavioral indicators), and/or a scale that quantitatively rates the strength of each indicator.
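As one illustration of the ‘‘scale’’ option, the sketch below rates each indicator from 1 (weak) to 5 (strong) and computes a weighted average. The weighting scheme and the sample customer are assumptions for illustration, not part of the original framework:

```python
# A minimal sketch of the 'scale' approach: each relationship indicator is
# rated 1 (weak) to 5 (strong). Equal weights are the default; any custom
# weights here would be an analyst's assumption.
INDICATORS = ["revenue", "profits", "retention", "loyalty", "communication",
              "commitment", "trust", "input", "referrals", "community"]

def relationship_health(ratings, weights=None):
    """Weighted average of indicator ratings, on the same 1-5 scale."""
    weights = weights or {name: 1.0 for name in INDICATORS}
    total_weight = sum(weights[name] for name in INDICATORS)
    return sum(ratings[name] * weights[name] for name in INDICATORS) / total_weight

# Hypothetical customer: strong on retention and trust, weak on community.
acme = {"revenue": 4, "profits": 3, "retention": 5, "loyalty": 4,
        "communication": 3, "commitment": 4, "trust": 5, "input": 2,
        "referrals": 3, "community": 1}
print(round(relationship_health(acme), 1))  # 3.4
```

The single summary number matters less than the profile of individual ratings, which shows where the relationship needs attention (here, customer input and community).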
6. Voice of the Customer
The real purpose of customer measurement should be to learn as much as possible about customers, and translate that knowledge into better and deeper relationships with the most valuable, and potentially valuable, customers. This stands in stark contrast to superficial quantitative customer satisfaction ratings that have been described as ‘‘Tell Us How Much You Love Us’’ exercises. They might make organizations feel good but tell them almost nothing valuable about what customers are really thinking.
Voice of the Customer measurement is a way to gain a more holistic (360-degree) understanding of what customers are really saying, supplementing (not necessarily replacing) the quantitative data that most employees have difficulty interpreting (such as understanding the difference between a customer rating of 2.6 and 2.8). It is transformational because it attempts to move organizations from superficial ‘‘satisfaction scores’’ to ‘‘profound knowledge’’ of the customer.
The Voice of the Customer can be captured through some combination of interviews, surveys, telephone calls, focus groups and panels, observation, customer visits, warranty data, field reports, complaint logs, exit interviews with departing customers, ongoing employee interactions with customers, feedback options on websites, and so on.
Smart companies proactively gather customer feedback continuously and from multiple sources (sometimes referred to as ‘‘listening posts’’). Here are some other practices that are being used successfully to tune into the Voice of the Customer:
- In-depth interviews, or ‘‘personal dialogues,’’ are being used to explore customer experiences, attitudes, beliefs, and feelings; to create a rich ‘‘picture’’ of the whole customer; and to fill in the gaps in the quantitative data.
- Customer advisory councils are being established, representing all major customer sectors.
- Ethnography (the branch of anthropology that seeks to scientifically describe human cultures, societies, and organizations) is being used to observe customers using products in their natural setting, at work, at home, or while shopping.
- Psycho-physiological response measurement is being used to capture the emotions and unconscious thoughts of customers, using biofeedback devices to assess thoughts that customers might not be consciously aware of.
The real shift in perspective is away from atomism (reducing complex phenomena to simplistic elements) to holism (viewing things organically, as unified wholes that are greater than the simple sum of their parts). Those who embrace this kind of ‘‘Voice of the Customer’’ measurement welcome input from multiple sources, realizing that all data can be valuable, that there are multiple viewpoints, and that diversity is good.
When companies measure what’s important for the customer, rather than just for them, a transformation occurs. Chris Carey, the CEO of Datatec Industries, explained that they used to ask customers things that were important to Datatec. ‘‘But once we began talking to customers to understand what they cared about, everyone in our organization learned exactly what to concentrate on. Our measures took on a whole new look.’’10
7. Customer Profitability
If you asked an average person on the street how much profit a typical company makes, most would say around 25 to 50 percent. However, it has been found that ‘‘the real aggregate profit margins of companies in most developed industrialized countries lie dangerously close to zero.’’11
When Customer Profitability finally started to be measured, it surprised almost everyone that between 30 and 80 percent of all customers were unprofitable. Most businesses don’t realize that some of the customers they thought were ‘‘good’’ customers were actually ‘‘bad’’ (unprofitable), and many were profoundly unprofitable. Simply doing business with these customers was actually costing a lot of money! Retaining more customers doesn’t increase profitability, unless they are the right customers.
Profits come from customers, not from products, and yet profits have rarely been linked to customers. While companies have long known how much revenue individual customers provide, it has been impossible to link specific costs to customers. Traditional cost accounting doesn’t capture costs that can be assigned to particular customers, because most costs are simply amortized across the entire customer base. That is, until companies learned their ABCs! Customer Profitability measurement was originally enabled by the creation of Activity-Based Costing (ABC), through which it is possible to identify the costs of particular activities and assign them to specific customers. (See Activity-Based Costing in Section 23 of this chapter.)
When some pioneering companies started measuring Customer Profitability, executives were truly startled by all the activities that contribute cost to sales in addition to the actual product or service itself (such as promotions, discounting, rebates, salaries, commissions, bonuses, order processing, financing, credit checks, delivery fulfillment, installation, invoicing, collections, nonpayment, late payment, warranties, post-sales service, returns, and rework). One company found that just being too casual about expediting orders had made many potentially profitable customers unprofitable. Hudson’s Bay Company in Canada found that at one of its stores 30 percent of customers accounted for 325 percent of profits!
What is so transformational about the Customer Profitability measure is the extent to which it can improve decision making. A profitability analysis with just a few well-selected customers will almost always shock company executives and motivate a change to how the company does business. This also allows companies to do customer profiling by targeting the ‘‘best’’ customers and prospects, and to migrate customers from unprofitable to profitable and from low profit to high profit.
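To make the ABC logic concrete, here is a minimal sketch of assigning activity costs to an individual customer. The activities, cost-driver rates, and customer figures are hypothetical, invented only to illustrate how a high-revenue customer can turn out to be unprofitable:

```python
# Hedged sketch of ABC-style customer profitability. The activities and
# per-unit rates below are hypothetical assumptions, not from the chapter.
ACTIVITY_RATES = {
    "order_processing": 25.0,    # cost per order processed
    "expedited_shipping": 60.0,  # cost per expedited order
    "post_sales_service": 40.0,  # cost per service call
}

def customer_profit(revenue, product_cost, activity_counts):
    """Revenue minus product cost minus the customer's share of activity costs."""
    activity_cost = sum(ACTIVITY_RATES[a] * n for a, n in activity_counts.items())
    return revenue - product_cost - activity_cost

# A 'good' customer by revenue alone can turn out unprofitable once the
# activities they consume (many small, expedited orders) are costed:
profit = customer_profit(
    revenue=2000.0, product_cost=1400.0,
    activity_counts={"order_processing": 8,
                     "expedited_shipping": 6,
                     "post_sales_service": 2},
)
print(profit)  # 2000 - 1400 - (200 + 360 + 80) = -40.0
```

Under traditional cost accounting, this customer's $640 of activity costs would have been spread across the entire customer base, hiding the loss.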
8. Customer Lifetime Value
Customer Lifetime Value (CLV), a relatively new measure with great transformational potential, helps companies to view customers as potential streams of current and future value. Like most emergent and transformational measures, it is not a measure that you’re going to find on a balance sheet, an income statement, or anywhere else in the ‘‘official’’ corporate accounts. But it might actually have more value than most officially reported measures.
Customer Lifetime Value is essentially a forecast of the ‘‘potential value of the customer relationship,’’ based on past history and assumptions about the future. For example, according to Gupta and Lehmann, the lifetime value of a customer is equal to 1 to 4.5 times the annual margin of the customer.12 Some companies are starting to use CLV, both for customer selection and to target marketing investments.
More specifically, CLV is the ‘‘present value’’ of the future income stream generated by customers or customer segments. Customer Lifetime Value can actually be defined in terms of both total revenue and profit expected from customers over their lifetime. For example, an appliance store chain develops CLV profiles for each of their regular customers based on past purchases, cross-selling opportunities, the likely replacement intervals, and repairs over a ten-year period.
One common way to calculate CLV is by estimating the customer’s anticipated purchases (including post-sales service and replacement parts) multiplied by profit margin on all sales and service, multiplied by purchase likelihood (expressed as a percentage), multiplied by the anticipated longevity of the customer relationship, adjusted for present value of money. Sometimes, less tangible sources of customer value (such as trends in the relationship, advocacy, or referrals) can be factored in.
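The calculation described above can be sketched in a few lines. The discount rate and the customer figures below are illustrative assumptions:

```python
# A minimal sketch of the CLV calculation described above. All inputs here
# are illustrative assumptions, not figures from the chapter.
def customer_lifetime_value(annual_purchases, margin, purchase_likelihood,
                            years, discount_rate):
    """Discounted sum of expected annual profit over the expected relationship."""
    clv = 0.0
    for t in range(1, years + 1):
        expected_profit = annual_purchases * margin * purchase_likelihood
        clv += expected_profit / (1 + discount_rate) ** t  # present value of year t
    return clv

clv = customer_lifetime_value(
    annual_purchases=1200.0,   # expected spend per year, incl. service and parts
    margin=0.30,               # profit margin on all sales and service
    purchase_likelihood=0.80,  # probability the customer buys in a given year
    years=10,                  # anticipated longevity of the relationship
    discount_rate=0.08,        # adjusts future profits for present value
)
print(round(clv, 2))
```

Because every input is an estimate, the resulting figure is best treated as a planning number for ranking customers and segments, not a precise forecast.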
Although CLV isn’t a precise quantity, it can be of great value for planning purposes. As such, the major purpose of CLV is not so much to determine an accurate estimate of future profits, but to prioritize customers and customer segments, and decide how best to maximize opportunities. Of course, the danger of a measure like CLV is that it can become a wild guess based on questionable assumptions, such as the length of the relationship.13
9. Service Quality
As the service component of world economies grows in size (it now comprises about 80 percent of American jobs) and importance, the challenge of measuring and managing Service Quality looms ever larger. But this has proven to be no small challenge, since Service Quality is quite different from product quality.
Product quality has been the focus of most quality research and measurement. For a long time, there was little interest in developing standard measurement tools for services. Most service organizations (like hotels, restaurants, and airlines) have tended to use their own customized customer service assessments, making it virtually impossible to benchmark with other organizations.
Researchers Zeithaml, Parasuraman, and Berry developed an innovative and transformational approach to service quality measurement.14 First, they developed a model they called SERVQUAL.15 Then they developed a measurement instrument based on that model, with multiple items for each dimension. Their definition of service quality was not about ‘‘defects’’ (as it is in the manufacturing world), because in services ‘‘defects’’ are quite subjective. The quality of services depends on the perceptions of those being served, rather than on any absolute standard. Thus, the focus of SERVQUAL is on ‘‘perceived’’ rather than ‘‘objective’’ quality. It essentially measures the gaps between the level of customer ‘‘expectations’’ and the level of customer ‘‘perceptions.’’
The ten service quality dimensions originally measured by SERVQUAL (reliability, responsiveness, competence, access, courtesy, communication, credibility, security, understanding the customer, and tangibles) were eventually reduced to five, with the handy acronym RATER:
- Reliability (the consistency of service quality, lack of service defects)
- Assurance (the provisions for maintaining service quality and addressing service quality problems)
- Tangibles (the physical environment)
- Empathy (sensitivity to the customer)
- Responsiveness (speed and effectiveness of the response to customer needs)
Although SERVQUAL has its detractors, it is widely used by progressive service organizations. Recent research in many different services settings indicates that the SERVQUAL instrument indeed represents accurate views of customer perception.16
Certainly, there will be other instruments developed for measuring service quality, and maybe even better ones. But for now, SERVQUAL is the standard, and it has already made a major contribution to service quality improvement.
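The gap logic of SERVQUAL can be sketched in a few lines. The dimension-level ratings below are hypothetical, and a real SERVQUAL administration scores multiple survey items per dimension before averaging:

```python
# A sketch of SERVQUAL-style gap scoring: gap = perception - expectation for
# each RATER dimension. The ratings here are hypothetical, and real SERVQUAL
# averages multiple survey items per dimension before computing the gap.
RATER = ["reliability", "assurance", "tangibles", "empathy", "responsiveness"]

def servqual_gaps(expectations, perceptions):
    """Both args map each dimension to a mean 1-7 rating across its items."""
    return {d: perceptions[d] - expectations[d] for d in RATER}

expectations = {"reliability": 6.5, "assurance": 6.0, "tangibles": 5.0,
                "empathy": 6.2, "responsiveness": 6.4}
perceptions = {"reliability": 5.9, "assurance": 6.1, "tangibles": 5.5,
               "empathy": 5.0, "responsiveness": 5.2}

gaps = servqual_gaps(expectations, perceptions)
# Negative gaps flag dimensions where perceived service falls short:
shortfalls = {d: round(g, 1) for d, g in gaps.items() if g < 0}
print(shortfalls)
```

A dimension can score well in absolute terms yet still show a large negative gap if expectations are high, which is exactly the insight the gap framing is designed to surface.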
10. Brand Equity
Brands have long been viewed as a major off-balance-sheet asset. The value of brands is powerfully exemplified by the large price premium and much greater demand that the Toyota Corolla commanded compared with the Chevrolet Prizm (which was essentially the same car) and by the success of the ‘‘Intel Inside’’ program (even though there wasn’t much difference between Intel’s and competitors’ processors). When many companies are acquired, a substantial amount of the price is ‘‘goodwill’’ (much of that being the estimated value of any brands). For example, when Philip Morris acquired Kraft for $12.9 billion, $11.6 billion of it was for goodwill. According to Almquist and Winter, ‘‘The corporate brand is one of the last great underleveraged business assets.’’17
The traditional view of a brand was about an ‘‘image’’ created primarily through advertising. Hundreds of billions of dollars have been spent creating and promoting brand images. The traditional advertising-based approach to brand management was more about logos, taglines, and advertising copy than anything approaching real brand management. In fact, some have referred to it as ‘‘marketing narcissism’’ (‘‘Let me tell you how good we are.’’). But a new paradigm of branding is emerging that involves a lot more than customer messaging.
This new paradigm requires a high degree of alignment across the organization around its brand(s). To manage brands in this new paradigm, Brand Equity has become the transformational measure of choice and might soon be the only differentiator of most products and services. Brand Equity is actually what separates a product or service as a commodity from a premium product or service. Even nonprofits and government agencies can have brand equity.
The logic of the Brand Equity construct is essentially that perceptions (or beliefs) lead to attitudes, which reflect an ‘‘emotional connection’’ with the brand and behavioral intentions toward purchasing it.
However one analyzes it, Brand Equity is essentially a function of the brand’s image, the brand’s performance, and the brand’s added value. The following are some of the factors that are typically measured:
- Distinctiveness (the brand’s differentiation from competitors)
- Quality (the reputation of the brand and how well it actually performs)
- Value (the strength of preference for the brand)
- Image (the extent to which the brand conveys the intended image)
- Loyalty (the degree of commitment to the brand)
Another factor (‘‘love’’) has even been suggested, and so-called lovemarks18 have been proposed to differentiate brands for which customers evidence passionate affection (as for Southwest Airlines and Starbucks).
Most of these factors (even love) can be fairly easily quantified. Of course, one of the keys is to determine the strength of the attitudes. One way to do this is through a technique called ‘‘conjoint analysis,’’ whereby choice situations (involving various trade-offs) are presented, requiring respondents to make fairly realistic choices, rather than just respond to standard questions.
11. Intellectual Capital
Intellectual Capital (IC) can be defined as all of the intangible resources that contribute to the creation of value to an organization that are not included on the balance sheet. It includes such sources of value as knowledge (both tacit and codified in the form of documents), intellectual property (patents), competence and skills of people, and working methods, processes, and systems. It can also include the culture that supports the people, the image in the market place, and relationships with customers, alliance partners, and suppliers.
The traditional approach is to ignore these sources of value, or to value only those that are easy to place a value on, like patents. Some companies have tried to communicate their intangible value by calculating the difference between their market value (based on share prices) and book value, and attributing the difference to Intellectual Capital. Almost everything that counts as an IC ‘‘asset’’ is traditionally paid for, and written off, as an overhead expense and charged against current profits.
It is critical to find credible ways to measure the Intellectual Capital that underlies so much of the value of today’s organizations. One method is to perform an inventory of the intellectual capital assets that exist in your organization, something that many organizations have never even done! Once the intellectual capital is inventoried, subjective ratings should be given to each major category of IC by knowledgeable internal or external ‘‘experts’’ (of course, you will first have to determine what constitutes an expert and develop guidelines for consistent ratings). It is then also possible, although admittedly difficult, to place a financial value on each major component of the intellectual capital inventory. The individual components can be assessed on either a value or a cost basis, or the overall intangible value of the corporation can be distributed among the IC assets. Clearly these measurement activities are primitive and time-consuming.
Another approach is a methodology for performing this inventory and rating process more systematically. The IC Rating methodology uses a standardized Intellectual Capital language and framework to help increase the consistency of the ratings. IC Rating was developed by experts in the field and has been validated through field work with over 270 ratings at more than 200 companies. The ratings are based on interviews with key stakeholders (both internal and external). Ratings are performed for current efficiency, renewal efforts underway, and the risk of each Intellectual Capital component.19
12. Strategic Readiness of Intangibles
One of the greatest challenges facing organizations today is the effective management of the multitude of intangible assets. Unfortunately, most of those assets are currently either managed tactically, or not managed at all. Intangibles must be carefully managed so that their realized value exceeds the cost of capital, or else they destroy value. Treating intangibles—such as employees, partnerships, and innovation—as assets necessitates a completely different approach from treating them as activities or costs. Another challenge with intangibles is that each one is different, and consequently they all need to be managed—and measured—differently.
Kaplan and Norton, of Balanced Scorecard fame, have recently taken a leadership role in an area they call the ‘‘strategic readiness of intangible assets,’’ and how this readiness can be measured.20 Intangible assets are said to be ‘‘strategically ready’’ when they can be used to support a strategic objective (like ‘‘increased new products’’), which in turn is linked with measures of strategic success (like revenue, profit, or market share).
It is not enough just to have intangible assets. The competitive advantage of organizations in the new economy is increasingly dependent on how ‘‘ready’’ their intangible assets are for deployment in supporting strategy. Intangible assets that are not ready are like unused inventory. If they cannot be effectively used to support strategic objectives, their value is reduced, sometimes to zero. For example, employees who have the right strategic capabilities or skills (those whose skillsets are clearly aligned with one or more of the organization’s strategic objectives) are said to be ‘‘in a state of readiness’’ to contribute to strategic value creation. On the other hand, employees might be highly motivated and hard-working, but without the right strategy-related skills their ‘‘strategic readiness’’ is near zero.
Kaplan and Norton point out that organizations often have some categories of job (‘‘job families’’) that are more strategic than others.21 They recommend that much more attention be placed on those than on the myriad of more tactical jobs. In order to measure ‘‘human capital readiness,’’ Kaplan and Norton believe that the organization must first identify the most critical internal processes (those that support key strategic objectives), and then identify the set of competencies required to perform each critical internal process. ‘‘Strategic job families’’ are the categories of jobs in which these competencies can have the biggest impact on enhancing the organization’s critical internal processes, which are aligned with the strategic objectives.
Here is how it can be done: Link your most important intangible assets with major strategic priorities. Based on this linkage, rate each intangible asset (from 0 percent to 100 percent) in terms of ‘‘how well-aligned’’ it is with one or more of the components of your organization’s strategy. For example, if one of your organization’s intangible assets is ‘‘customer knowledge,’’ determine its alignment with relevant strategic objectives, such as ‘‘increased customer acceptance of new product development.’’ If ‘‘culture’’ is one of the key intangible assets, rate how ‘‘customer-centric’’ your culture is right now to support your ‘‘customer service’’ objective. Then, continue to assess each key intangible. The ‘‘alignment with strategy score’’ for each intangible asset constitutes the ‘‘strategic readiness’’ of the asset.
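For readers who want to experiment, the rating roll-up just described can be sketched in a few lines of Python. The asset names and alignment scores below are hypothetical illustrations, not prescribed values:

```python
# A minimal sketch of the "strategic readiness" rating described above.
# Each intangible asset is rated from 0 to 100 percent on how well it
# aligns with one or more strategic objectives. All names and scores
# here are hypothetical.

alignment_scores = {
    "customer knowledge": 0.80,  # supports "new product acceptance"
    "culture":            0.45,  # how customer-centric it is today
    "employee skills":    0.60,  # match with strategic job families
}

def overall_readiness(scores):
    """Average alignment across assets; each asset's own score is
    that asset's 'strategic readiness'."""
    return sum(scores.values()) / len(scores)

for asset, score in alignment_scores.items():
    print(f"{asset}: {score:.0%} strategically ready")
print(f"Overall readiness: {overall_readiness(alignment_scores):.0%}")
```

A simple average is only one way to roll the scores up; assets could also be weighted by their importance to each strategic objective.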
13. Innovation Climate
Organizations today have been struggling to find really good measures of innovation. As Davila, Epstein, and Shelton report, in a recent survey of executives ‘‘more than half rated their performance measurement system for innovation as poor or less than adequate.’’22 Typically, organizations have defaulted to measures they can count, such as number of innovation projects, cost measures, and number of patents—a measure that Art Kleiner has called a ‘‘clueless measure.’’23
Nothing is more important for innovation than a climate of innovation! Innovation Climate is an important emergent area, because it largely determines what will happen with innovation in any organization. It is a key leading indicator of innovation results. The Innovation Climate Questionnaire (ICQ)24 is an instrument for assessing the organizational climate for innovation. Adapted by the Innovation Centre Europe from the pioneering work of Goran Ekvall in Sweden, the ICQ has been completed by over 1,500 respondents from organizations in the U.K. and other European countries. The instrument includes thirteen scales, listed here with brief descriptions:
- Commitment: Commitment to organizational goals and operations; work perceived as stimulating and engaging.
- Freedom: Opportunities to make own decisions, seek information, and show initiative; freedom from tight supervision.
- Idea-Support: People encouraged for putting forward ideas and suggesting improvements.
- Positive Relationships: People trust each other and get on well; absence of personal conflicts.
- Dynamism: Dynamic and exciting atmosphere.
- Playfulness: People laugh and joke with one another.
- Idea-Proliferation: People are perceived as having creative ideas and varied perspectives toward their work.
- Stress: People generally feel overburdened and under pressure at work.
- Risk-Taking: People are prepared to take risks and implement new ideas.
- Idea-Time: People have the time to generate and consider new ideas.
- Shared Views: There is open and adequate communication among more and less senior employees.
- Pay Recognition: People are satisfied with their remuneration.
- Work Recognition: People receive praise for their achievements.
With the exception of Stress, higher scores on each scale relate to more favorable organizational outcomes (including lower turnover intention, increased job satisfaction, and greater organizational commitment). Risk-Taking, Dynamism, and Freedom appear to distinguish a climate that supports radical innovation from one that supports only incremental improvement. Risk-Taking appears to account for the biggest difference between the most and least innovative organizations.
As Dauphinais, Means, and Price insist, ‘‘Our experience suggests that the most predictive measure of whether an organization will be innovative is the level of trust between people in the organization.’’25 I agree that this might be a factor that is given too little attention in the ICQ, and you might want to consider the recommendations in Section 15, Organizational Trust, to enhance trust measurement.
14. Reputation
Reputation has traditionally been dumped into a ‘‘general perception’’ category, remained the province of Public Relations or Advertising, and not been given much corporate attention unless there is a Tylenol-like crisis. However, organizations are becoming increasingly aware of the importance of a good reputation, and the perils of a bad one. According to Pate and Platt, ‘‘An enterprise’s reputation is a resource that must be preserved at all cost.’’26 Much of the lackadaisical attitude about reputation has been due to the difficulty of measuring it. Even if a reputation problem doesn’t affect the bottom line immediately, there is a large body of evidence showing that it eventually will. It is useful to view reputation as ‘‘reputational capital,’’ because this reinforces the financial implications of a good or poor reputation.
Reputation has been defined as how all stakeholders view the organization. Thus, measurement begins by measuring the perceptions of investors, employees, customers, vendors, business partners, government regulators, the community at large, and any other group for which the organization’s reputation might be important.
The most transformational measure of Reputation that I have found is the Reputation Quotient (RQ).27 It was developed by Harris Interactive in association with the Reputation Institute as an assessment tool that captures perceptions of corporate reputations across industries and among multiple audiences, and it is adaptable to countries outside the United States. A list of the top fifty corporations ranked by RQ is also published.28 However, like many other measures, reputation is best reflected by changes over time, rather than as a snapshot at a particular moment.
The RQ measures stakeholder perceptions across twenty attributes that are grouped into six dimensions, which are:
- Vision and Leadership: clarity of vision, quality of leadership
- Financial Performance: record of profitability, growth prospects, risk, competitive performance
- Workplace Environment: quality of workplace, quality of employees, fairness
- Products and Services: quality, innovation, value, fulfillment of promises
- Emotional Appeal: feelings, admiration and respect, trust
- Social Responsibility: philanthropy, environmental and community responsibility
The attributes are rated on a 7-point scale, ranging from 7 (‘‘describes the company very well’’) to 1 (‘‘does not describe the company well’’). Harris Interactive solicits nominations of high-reputation companies and then performs interviews of the stakeholders of the nominated firms.
The same factors can be used by any company to perform their own reputation assessment. Such a ‘‘self-service’’ approach can provide your organization with deep insights into the perceptions of key stakeholder groups, which, in turn, can enable it to protect and enhance its reputation.
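As a sketch of such a self-service assessment, hypothetical 7-point attribute ratings can be averaged within each dimension and then across dimensions. The dimension names follow the RQ; the ratings below are invented purely for illustration:

```python
# Roll up 7-point attribute ratings into dimension scores and an
# overall score, in the spirit of an RQ-style self-assessment.
# All ratings below are hypothetical.

ratings = {
    "Emotional Appeal":      [6, 5, 6],     # feelings, admiration, trust
    "Products and Services": [6, 4, 5, 5],  # quality, innovation, value, promises
    "Social Responsibility": [3, 4],        # philanthropy, environment/community
}

dimension_scores = {dim: sum(r) / len(r) for dim, r in ratings.items()}
overall = sum(dimension_scores.values()) / len(dimension_scores)

for dim, score in dimension_scores.items():
    print(f"{dim}: {score:.2f} / 7")
print(f"Overall: {overall:.2f} / 7")
```

Repeating the same roll-up for each stakeholder group (investors, employees, customers, and so on) shows where perceptions diverge.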
15. Organizational Trust
I have created the following definition of Trust by synthesizing it from a number of other definitions: ‘‘An expectancy held by an individual or group that promises will be kept and vulnerability will not be exploited.’’ Thus, Trust is an ‘‘expectation’’ of dependability and benign intentions.
Trust is typically viewed as a characteristic of personal relationships. But there is also trust in institutions, in roles, in information, etc. Increased dangers in the world, and increased media portrayals of breaches in trust, have contributed to making people increasingly reluctant to trust others, and much more skeptical about organizational relationships. In addition, reduced personal interaction due to increased globalization, less colocation, more home-based employees, and fewer face-to-face meetings are further reducing trust-building opportunities. Trust is becoming a scarcer commodity by the day.
In organizations, trust is typically seen as outside the domain of most managers and even Human Resource departments. Although public opinion surveys often ask questions about political and institutional trust, there are few, if any, measurements of ‘‘organizational trust’’ or ‘‘organizational trustworthiness’’ (other than its inclusion in some organizational climate and culture surveys, and on the occasional employee attitude survey). Until very recently, there has been little effort to measure trust as an organizational construct.
Fortunately, with an increased realization that trust is a crucial aspect of relationships with customers, employees, vendors, partners, and other members of the extended enterprise, the trust measurement gap is beginning to close. One of the most crucial applications of trust relates to supply chain performance. As Tom Brunell has said, ‘‘Trust is one of the most important tools within the supply chain today and it cannot be simply turned on or applied like other technological tools. . . . The technology tools are in place, it’s the trust that has to catch up.’’29 Furthermore, trust is highly situational, and, because trust is so fragile, it can be destroyed almost instantaneously by a single act that is perceived to be a ‘‘betrayal of trust.’’30
I have developed the questionnaire below for measuring Organizational Trust based on extensive research. The terminology can be adjusted to fit the terms used in your organization. And, as with all emergent measures, it is recommended that the items be tested and fine-tuned through pilot use, before broader implementation.
I suggest that you use the standard 5-point scale: 5 Strongly Agree; 4 Agree; 3 Neither agree nor disagree; 2 Disagree; 1 Strongly Disagree. Interpretation guidance follows the questions.
Organizational Trust Questionnaire
- I trust the expectations that have been communicated in this organization/group/team.
- I feel that people in this organization/group/team are honest.
- There is mutual respect among members in this organization/group/team.
- People in this organization/group/team are good at listening without making judgments.
- I feel good about being a member of this organization/ group/team.
- I feel that the people in this organization/group/team are competent.
- I feel confident that this organization/group/team has the ability to accomplish what it says it will do.
- People help each other learn in this organization/group/ team.
- Learning is highly valued in this organization/group/team.
- I feel that I can be completely honest in this organization/ group/team.
- Honesty is rewarded in this organization/group/team.
- There are clear expectations and boundaries established in this organization/group/team.
- Delegation is encouraged in this organization/group/team.
- People keep agreements in this organization/group/team.
- There is a strong sense of responsibility and accountability in this organization/group/team.
- There is consistency between words and behavior in this organization/group/team.
- There is open communication in this organization/group/ team.
- People tell the truth in this organization/group/team.
- People are willing to admit mistakes in this organization/ group/team.
- People give and receive constructive feedback non-defensively in this organization/group/team.
- People maintain confidentiality in this organization/ group/team.
- I can depend on people to do what they say in this organization/group/team.
- People are treated fairly and justly in this organization/ group/team.
- People’s opinions and feelings are taken seriously in this organization/group/team.
- I feel confident that my trust will be reciprocated in this organization/group/team.
Interpretation Key: Highest possible score is 125. High score range is 100–125. Moderate score range is 70–100. Low score is below 70. Danger zone is below 50.
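The scoring can be automated in a few lines. This sketch sums the twenty-five item ratings and applies the interpretation key, treating the band boundaries as non-overlapping:

```python
# Score the Organizational Trust Questionnaire: 25 items, each rated
# 1 (Strongly Disagree) to 5 (Strongly Agree), summed to a maximum of 125.

def interpret_trust(ratings):
    """Return (total score, interpretation band) for 25 item ratings."""
    assert len(ratings) == 25 and all(1 <= r <= 5 for r in ratings)
    total = sum(ratings)
    if total < 50:
        band = "danger zone"
    elif total < 70:
        band = "low"
    elif total < 100:
        band = "moderate"
    else:
        band = "high"
    return total, band

print(interpret_trust([4] * 25))  # every item rated "Agree" -> (100, 'high')
```

As recommended above, pilot the items first; the bands are guidance, not precision instruments.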
16. Partner Relationships
Clearly partnerships, alliances, and other relationships are crucial to success in business today, and being able to manage them effectively is becoming a strategic necessity. It is no longer acceptable to leave priorities like this to chance and to the good intentions of the partners. The key to competitiveness today is not so much the advantage of a single firm as the competitive advantage of networks of firms—partnerships and alliances of all kinds. Leonard Greenhalgh says that today, ‘‘relationships are the most crucial element of organizational architecture.’’31 This is yet another area in which traditional performance measures fall far short of the mark, which is particularly problematic for organizations wanting to achieve outstanding results through a high-performing ‘‘extended enterprise.’’
Successful partnerships and alliances require careful management, which in turn requires thoughtful and collaborative measurement. In Getting Partnering Right, Rackham, Friedman, and Ruff say, ‘‘Almost all of the successful partnerships we studied had spent considerable time and effort setting up measurement systems to track their progress.’’32
The most flexible measurement methodology for partnerships and alliances is the one used by Vantage Partners. In Measuring the Value of Partnering, Larraine Segil presents the Vantage Partner approach, which includes a comprehensive set of ‘‘metrics’’ to use as benchmarks throughout the alliance life-cycle to make sure that the alliance gets off to a good start and that it stays on track.33
There are two major types of metrics: those used during start-up (‘‘development metrics’’) and those used during implementation (‘‘implementation metrics’’). Some of the metrics are quantitative, but most are more qualitative, such as determining how well aligned the partners are. Just as in contemplating a marriage, if there are large divergences in values and expectations that are not reconciled, the alliance will be off to a rocky start and is likely to end in ‘‘divorce.’’ Lack of alignment on key alliance ‘‘metrics’’ is a major reason for alliance failures.
For example, before consummating the partnership, consider how consistent the partners’ missions and visions for the alliance are, how mutually beneficial the partnership is perceived to be, their expectations for things like ‘‘time to market,’’ their typical ‘‘time to decision’’ (decision-making cycle time), their ‘‘competitive positioning,’’ and their ‘‘project personalities’’ (management styles). Many of the most crucial considerations are usually masked by the emotions of the moment. Just like in pre-marriage counseling, forcing the partners to consider the ‘‘metrics’’ will very likely avoid a lot of heartache ‘‘after the honeymoon.’’ Many problems can be avoided or resolved through greater awareness. The mere act of measurement that brings the importance of these factors to awareness is often the most important part.
17. Collaboration
Collaboration is a powerful force that is transforming working relationships within teams, across functions, in all kinds of organizations, and in the extended enterprise. The international research company Aberdeen Group has emphasized ‘‘the very strong correlations between collaboration and success—particularly when formalized.’’34
Because collaboration is so important to the success of organizations in both the private and public sectors, I looked for a measurement tool that could at least provide a good starting point: the OMNI Institute’s ‘‘Working Together: A Profile of Collaboration’’ Assessment Tool.35 The tool is based on extensive research and has been successfully used in a variety of settings over many years. It contains forty questions and measures five dimensions of collaboration: the context, the structure, the members, the process, and the results.
Another related construct is Climate for Collaboration. A conducive climate is the primary condition required for effective collaboration. You can measure Climate for Collaboration by adapting items from the Wilder Collaboration Factors Inventory,36 which follows:
- There is a history of successful collaboration
- There is a shared vision and interest in achieving common goals
- There are sufficient resources
- There is skilled leadership
- Diversity is appreciated
- Clear expectations for collaboration exist
- Roles, responsibilities, and policies are clear
- Methods exist for addressing conflict
- There is a favorable political and social climate
- There is mutual respect, understanding, and trust
- Collaboration is seen as in everyone’s best interest
- Members have a personal stake in both process and outcome
- Members believe that the benefits of collaboration outweigh the risks
- There is a safe environment
- There is willingness to be flexible and adaptable
- There is open and frequent communication
- Sharing of ideas and information is encouraged
- There is incentive to collaborate
- There are no disincentives (penalties) for collaboration
- There is adequate time and a process for team building
You can ask respondents, ‘‘How confident are you that each of the following collaboration enablers is in place?’’ and use a rating scale such as ‘‘very confident,’’ ‘‘somewhat confident,’’ and ‘‘not confident.’’ This will certainly give you a good idea of which enablers need to be strengthened to foster a more collaborative environment.
18. Productivity
Productivity is the ultimate measure that both nations and organizations tend to use to gauge progress. In fact, entire countries often base their economic and social policies on increasing productivity over time. However, as a short-term measure of performance, it is virtually worthless.
The traditional approach to productivity measurement is to look either at the production of employees or organizational units or at the total production of the entire organization. The individual or functional approach tends to create ‘‘busy-ness’’ (‘‘See how long and hard I am working?’’) and suboptimization (how much output can Function A produce for Function B to process?), while the organizational approach can lead to production of a lot of output (even if it goes into inventory) and cost reductions (which might look good in the short term, but can hobble the organization longer term).
Most organizations measure their total output divided by their total inputs (costs). Output can be anything from patients treated, tons of steel produced, and airline miles flown, to revenue generated. The most familiar organizational productivity measure is ‘‘labor productivity,’’ which simply divides total output by the number of workers, the number of hours worked, or the payroll of the workforce. In many cases, the typical actions aimed at increasing the productivity of labor don’t really increase productivity, just activity. Productivity measures also tend to focus on what is easy to count, so customers, quality, service, innovation, and other important factors are almost never considered. But the biggest problem with traditional productivity measurement is that it does nothing to identify what is constraining productivity, and what can be done about it.37
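The familiar ratio is easy to state precisely. Here is a sketch with hypothetical figures:

```python
# "Labor productivity": total output divided by a labor input.
# All figures below are hypothetical.

output_units = 12_000     # e.g., tons of steel produced this quarter
hours_worked = 48_000     # total workforce hours in the same period
payroll_cost = 1_800_000  # total payroll in dollars for the period

per_hour   = output_units / hours_worked  # units per labor hour
per_dollar = output_units / payroll_cost  # units per payroll dollar

print(f"{per_hour:.2f} units per hour worked")
print(f"{per_dollar * 1000:.2f} units per $1,000 of payroll")
```

Note that nothing in these ratios reveals whether the output was sold, was of acceptable quality, or was constrained by something other than labor—which is exactly the criticism above.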
Eliyahu Goldratt has proposed a much better, and potentially transformational measure of productivity: Throughput.38 Throughput is basically revenue received from sales, or alternatively, the rate at which the system generates money through sales (for the public sector and nonprofits, it could relate to the other value received by clients or beneficiaries). At least in the private sector view, no Throughput can be claimed until the cash has been collected (which avoids the accounting dysfunctions, such as claiming credit for producing unsold inventory).
Throughput measurement facilitates more timely visibility of how value flows through the organization to the customer. But, most importantly, it helps people in the organization realize that increasing Throughput cannot occur just by people working harder or increasing overall capital investment in the system without regard for work flow. As Eli Schragenheim said, ‘‘The importance of the concept of Throughput lies in its ability to support decisions by predicting how much those decisions add to the bottom line.’’39 No traditional measure of productivity can do that.
Throughput is limited by ‘‘constraints.’’ These constraints form ‘‘bottlenecks’’ at various points in the system, which make it impossible for additional Throughput to flow to the customer, no matter how hard employees work or how much technology is applied at other points in the system. It is all too common for organizations to try to improve everything, but miss the key constraint. Ironically, the way most organizations try to improve productivity actually reduces true productivity!
Using Throughput as the measure of productivity leads to more holistic thinking about the organization as one productive system, rather than a collection of units ‘‘doing their own thing’’ to increase their own production. This approach also makes prioritization of improvement options much easier, because, once the most immediate constraint to Throughput is identified, the decision about what to improve is obvious. With Throughput as the working measure of productivity, managers and employees can now finally do something about it, rather than wait for disappointing productivity numbers to be announced at the end of the year.
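The logic of constraints can be made concrete with a toy model of a serial process, in which system Throughput is capped by the slowest step. The step names and capacities below are hypothetical:

```python
# In a serial process, Throughput is limited by the step with the
# least capacity (the constraint), no matter how much the other
# steps are improved. Capacities are hypothetical units per day.

capacities = {"order entry": 120, "assembly": 80, "packing": 150}

def system_throughput(caps):
    """Flow through a serial system is capped by its slowest step."""
    return min(caps.values())

def bottleneck(caps):
    """The step that currently constrains the system."""
    return min(caps, key=caps.get)

print(system_throughput(capacities), bottleneck(capacities))  # 80 assembly

capacities["packing"] = 300   # improving a non-constraint: no effect
assert system_throughput(capacities) == 80

capacities["assembly"] = 110  # improving the constraint raises Throughput
assert system_throughput(capacities) == 110
```

This is why, as noted above, once the most immediate constraint is identified, the decision about what to improve is obvious.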
19. Organizational Agility
Agility is becoming increasingly important in today’s turbulent times. Many experts are proclaiming that the most successful organizations of the future will be the most agile ones, but few have offered a vision of an ‘‘agile organization.’’40 While some emergent measurement attempts have been made, the measurement of Organizational Agility is still in its infancy.
I have attempted to synthesize some of the major research findings41 on Organizational Agility in the following questionnaire. I suggest that you use the standard 5-point rating scale: 5 Strongly Agree; 4 Agree; 3 Neither agree nor disagree; 2 Disagree; 1 Strongly Disagree. Interpretation guidance is listed at the end of the questionnaire.
Organizational Agility Questionnaire:
- This organization can implement changes in its business processes quickly.
- This organization can implement changes in its technology infrastructure quickly.
- This organization can implement small changes quickly.
- This organization can implement large-scale changes quickly.
- This organization has the capability to redeploy and retrain employees quickly.
- Major changes in this organization can be made relatively easily.
- Minor changes in this organization can be made relatively easily.
- This organization has a high capacity to adapt to change.
- There is a high degree of collaboration across boundaries in this organization.
- There is a great deal of modularity in this organization.
- This organization is quite flexible compared to its competitors.
- This organization does a good job of capturing knowledge.
- This organization encourages learning from experience.
- There is considerable error tolerance in this organization.
- This organization is breaking down barriers to cross organizational collaboration.
- This organization is not bureaucratic.
- In this organization, scenarios and guidelines are used more often than rules.
- In this organization, work is designed to permit experimentation.
- Problems are solved quickly and effectively in this organization.
- Decisions are made and implemented quickly in this organization.
- There is considerable cross-training being done in this organization.
- This organization is designed to enable change.
- The anticipation of change is a competency in this organization.
- There is fast feedback in this organization.
- Unpredictability, flexibility, and risk management are more highly valued than predictability, stability, and high assurance in this organization.
- This organization is designed to be simple, lean, and flexible.
- This organization is designed around processes, rather than functions.
- This organization is transitioning from stable jobs to more flexible roles.
- This organization is not reluctant to outsource non-core capabilities.
- This organization is quick to respond to market opportunities and threats.
- People in this organization are trained to deal with varied situations.
- This organization has a high ability to acquire or absorb innovation.
Interpretation Key: Highest possible score is 160. Very high score range is 100–160. High score range is 80–100. Moderate score range is 60–80. Low score is below 60.
The most valuable use of this survey or some variation of it is for stimulating discussion. When repeated on a regular basis, it can help drive and track changes in Organizational Agility over time.
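For tracking over time, the simplest view is the change in total score between administrations. A sketch with hypothetical survey dates and totals:

```python
# Track total Organizational Agility scores (32 items, 1-5 each,
# maximum 160) across repeated administrations of the survey.
# Dates and totals are hypothetical.

history = {"2023-Q1": 92, "2023-Q3": 101, "2024-Q1": 118}

def trend(scores):
    """Change in total score between consecutive administrations."""
    totals = list(scores.values())
    return [later - earlier for earlier, later in zip(totals, totals[1:])]

print(trend(history))  # [9, 17] -- agility improving each period
```

Tracking item-level changes the same way shows which specific agility factors are moving.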
20. Waste
In today’s hyper-competitive world, organizations are realizing that waste is a major impediment to effective competition. The time when enormous waste could be tolerated, because profit margins were so high, is long gone. Measuring waste, in order to remove it, has become a competitive necessity.
Measurement for the purpose of waste reduction can be truly transformational, but it is not necessarily new. It has been around since the beginning of the Industrial Revolution. Unfortunately, while traditional measurement systems are great at measuring costs, they cannot really determine which costs are value-adding and which are not. Consequently, when ‘‘fat’’ is cut, so is some of the ‘‘muscle.’’ Furthermore, historically, almost all waste measurement has been internally focused, concerned with reducing inefficiency, not necessarily with increasing effectiveness. That is why new, more holistic measurement tools were needed.
Although most waste reduction methods, like quality improvement methods, originated in the United States, it is the Japanese who have long been recognized as the leaders in measuring and reducing waste (or what they call ‘‘muda’’).42 Shigeo Shingo, a Japanese industrial engineer, and Taiichi Ohno developed the Toyota Production System (TPS), from which the ‘‘Lean’’ movement derived. Shingo defined ‘‘7 Wastes’’ of manufacturing: overproduction, inventory, motion, waiting, transportation, over-processing, and not doing it right the first time (which causes scrap, rework, and defects). He later added an eighth waste: the waste of human creativity.
In order to systematically reduce waste, rather than just cut costs, an organization needs to be able to ‘‘see’’ the waste. But waste is not always easy to see with existing quantitative or observational measures. People may look straight at waste without recognizing it, and they may see ‘‘waste’’ that is not really wasteful (such as excess capacity that increases flexibility).
That’s why Value Stream Mapping can be such a transformational measurement tool. It visualizes the ‘‘value streams’’ (all activities required to bring a product from vendors’ raw material into the hands of the customer) in your organization. It also enables the calculation of ‘‘lead times’’ for each activity. A transformation often occurs when people can ‘‘see’’ all the time that is wasted in the process—time that consumes resources but adds no value. Value Stream Mapping works backward from customer value, rather than just making more efficient what is already being done. It is not uncommon for value-adding activities to comprise only 10 percent of the elapsed time, while as much as 90 percent of the elapsed time adds no value, increases inventory, and hides quality problems—not to mention increases customer waiting time, which leads to dissatisfaction. Redesigning this value stream provides the customer with the same or better product or service more quickly, and at a much lower cost for the producer.
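The arithmetic behind a value stream map is simple once the activities and their lead times are listed. The activities, times, and value-adding flags below are hypothetical:

```python
# Sum lead times along a value stream and compute the share of
# elapsed time that actually adds value. All entries are hypothetical.

value_stream = [
    # (activity, hours, value-adding?)
    ("receive raw material", 2,   False),
    ("sit in inventory",     160, False),
    ("machining",            4,   True),
    ("wait for inspection",  30,  False),
    ("assembly",             6,   True),
    ("ship to customer",     24,  False),
]

total_hours = sum(hours for _, hours, _ in value_stream)
value_added = sum(hours for _, hours, adds in value_stream if adds)

print(f"Total lead time: {total_hours} hours")
print(f"Value-adding time: {value_added} hours "
      f"({value_added / total_hours:.0%} of elapsed time)")
```

Seeing that only a few hours out of hundreds add value is often the moment of transformation the text describes.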
Lean production experts Womack and Jones observed to their chagrin that ‘‘despite a growing variety of better products with fewer defects at lower cost . . . the experiences of consumers seem to be deteriorating.’’43 This led them to a unique application of Lean principles and Value Stream Mapping to services. Because the customer is so integrally involved in the provision of services, Womack and Jones realized that the manufacturing ‘‘value stream’’ provided only a partial view. The customer is no longer just at the end of the process, but involved throughout.
As a result of this epiphany, they advocated the depiction of the ‘‘value stream’’ as two parallel maps: a Provision Map (from the perspective of the service provider) and a Consumption Map (from the perspective of the consumer). This enables service providers to compare the ‘‘lead times’’ from both their own and the consumer’s perspectives, providing a clear view of why so many customers become extremely frustrated. This new perspective provides a completely new lens, and that is exactly what transformational measurement is all about!
21. Inventory
Inventory traditionally has been viewed as either a major source of waste or a necessary buffer against unanticipated demand. In the business classic The Goal, Jonah, a consultant, tells his client what a mess he has made of his company: ‘‘Take a look at the monster you’ve made. It did not create itself. You have created this mountain of inventory with your own decisions.’’44 Inventory is not, in itself, bad. In today’s turbulent business climate, organizations need some protective buffer, but too much inventory (‘‘the mountain of inventory’’) is clearly undesirable. Excessive inventory can place a heavy burden on the cash resources of a business, can use up space, can hide quality problems, and can become waste. But insufficient inventory can result in lost sales, delays for customers, and lack of protection against supply disruptions and demand surges.
New thinking about production systems holds that large inventories prevent production and supply chain innovation, because they buffer an organization from the challenges that would otherwise stimulate innovation. It is easier to hold excess inventory than to improve planning, forecasting, production systems, and supply chain management. For example, while American auto companies were working at full capacity to produce vehicles for inventory (and then selling them off through aggressive price reductions and rebates), Toyota was moving to a ‘‘just in time’’ manufacturing model that produced to demand (which ‘‘pulled’’ the production process), rather than ‘‘pushing’’ product into inventory (‘‘just in case’’ it’s ever needed) like its American competitors.
The key to maximizing efficiency is to have the right amount of inventory available in the right spots in the organization, which requires the most appropriate measurement. Unfortunately, traditional inventory measurement isn’t very helpful, because it is accounting-driven and one-dimensional. Depending on what the accountants say, sometimes it is good to value inventory high and sometimes it is good to value it low. But this decision is always made after the fact, which doesn’t help to manage inventory during the process. To make matters worse, inventory is treated in accounting as an ‘‘asset,’’ so there is typically little motivation to reduce it.
One thing appears quite clear: If a company wants to reduce inventory, it is best to make inventory as expensive as possible. Otherwise, even high levels of inventory will be viewed as ‘‘acceptable,’’ and there will probably not be much motivation to reduce it.
Eliyahu Goldratt has developed a multi-dimensional measure of inventory: Inventory Dollar Days (IDD).45 Inventory Dollar Days, the cost of the inventory for each day that it sits, is calculated by multiplying the monetary value of each inventory unit on hand by the number of days since that inventory entered the responsibility of a particular link in the supply chain.
Contrast this with the traditional measure of inventory, which is based on volume or monetary value alone. Inventory could sit for weeks or months, and it is still counted the same. Goldratt’s approach makes it clear that excess inventory that sits around is negative, and the longer it sits in inventory, the worse it is. It can be quite transformational for a company to discover that there is more than $100,000 in Inventory Dollar Days in many locations, rather than simply knowing that the company is carrying $20,000 in inventory ‘‘assets.’’ Furthermore, Inventory Dollar Days is a cross-functional measure, because it measures all kinds of inventory, wherever it is in the supply chain.
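To make the mechanics concrete, here is a minimal sketch of the calculation in Python. The function name and the (unit value, quantity, date received) record shape are illustrative assumptions, not Goldratt’s notation.

```python
from datetime import date

def inventory_dollar_days(items, as_of):
    """Sum of unit value x quantity x days held, across all inventory on hand."""
    total = 0.0
    for unit_value, quantity, received in items:
        days_held = (as_of - received).days
        total += unit_value * quantity * days_held
    return total

# Two lots, each worth $2,000 on the books, but held for very different times.
stock = [
    (50.0, 40, date(2024, 1, 1)),    # received January 1
    (10.0, 200, date(2024, 2, 15)),  # received February 15
]
idd = inventory_dollar_days(stock, as_of=date(2024, 3, 1))  # 150000.0
```

Traditional measurement would report both lots identically as $2,000 of inventory ‘‘assets’’; Inventory Dollar Days makes the older lot four times as costly (120,000 versus 30,000 dollar days), which is exactly the visibility this measure is designed to provide.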
22. Total Cost of Ownership
Purchased materials and services can account for 65 percent to 85 percent of operating costs in manufacturing companies, and 30 percent to 65 percent in service companies.46 It is an area of very high leverage, but it is also one that is rarely scrutinized very carefully. It is an area ripe for transformational measurement.
For a long time, purchasing decisions have routinely been made on the basis of initial acquisition costs alone. The cheapest price tends to get the sale. Clearly getting the lowest price is important, but there are many other factors that should be considered. Lower-cost suppliers might not provide the best (or even acceptable) quality and on-time delivery.
Total Cost of Ownership (TCO) has become an important transformational measure. TCO is the total cost of a purchase through the entire period of ownership. It takes into account the enormous number of hidden costs in purchasing decisions. The initial purchase price is truly ‘‘just the tip of the iceberg.’’ The total cost of purchasing an item or service can include a vast number of items, such as other purchase costs (communication, contracting, invoicing), transportation and delivery costs, set-up costs, training costs, anticipated maintenance costs, and repair costs (repair likelihood, cost to repair).
Most post-purchase costs are not anticipated, and therefore they are not managed. This often results in huge unanticipated costs—often three to ten times the initial purchase price! For example, a computer can cost $1,000, but the Total Cost of Ownership through its lifetime (including software, upgrades, maintenance, service, and replacement) can be as high as $10,000. Obviously, the key trade-off in purchasing decisions is between ‘‘total price’’ and ‘‘total performance.’’ TCO is a single measure that can reflect both sides of the trade-off. Much of the value of measurement of constructs such as TCO lies in the discipline they promote, and the visibility they provide. In Purchasing, there definitely needs to be more ‘‘full spend visibility.’’
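The iceberg arithmetic above can be sketched in a few lines. This is a minimal illustration, assuming a flat annual cost estimate and an optional discount rate; real TCO models break the cost categories out separately.

```python
def total_cost_of_ownership(purchase_price, annual_cost, years, discount_rate=0.0):
    """Purchase price plus the (optionally discounted) stream of ownership costs."""
    tco = purchase_price
    for year in range(1, years + 1):
        tco += annual_cost / (1 + discount_rate) ** year
    return tco

# The $1,000 computer from the text: $2,250 a year of software, upgrades,
# maintenance, and service over a four-year life, undiscounted.
tco = total_cost_of_ownership(1000, 2250, 4)  # 10000.0
```

Even this crude model shows the acquisition price contributing only a tenth of the total, which is the ‘‘full spend visibility’’ the measure is meant to promote.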
23. Activity-Based Costing
It has been said that financial reporting systems provide tremendous detail about what has happened, but not much insight for what to do about it.
This couldn’t be more true of cost accounting. For example, there is a traditional belief that increased sales will almost automatically increase profits. However, the unprofitability of parts of the organization can come as a big shock to companies that assume that an overall profit means everything is profitable. In fact, it has been shown that, in many companies, certain products, product lines, and customers may be draining a significant amount of profits because of extraordinarily high costs that are not detected using traditional cost accounting methods. Traditional accounting provides only one response: Cut costs ‘‘across the board.’’
Given the traditional cost accounting approach, there is really no other viable option, since costs are allocated ‘‘equitably,’’ but not ‘‘economically.’’ When costs are being cut across the organization, we often find value-creating parts of the business robbed of capital they need, while business activities that are actually destroying value are generously funded.
Activity-Based Costing (ABC) is an accounting method that allows an organization to determine the actual cost associated with each product and service produced by the organization. Instead of using broad arbitrary percentages to allocate costs, ABC seeks to identify the cause-and-effect relationships between costs and activities in order to assign costs more objectively.47
The logic of ABC is as follows: Outputs (products, services, and customers) consume activities; activities consume resources; the consumption of resources is what drives costs. So, activities drive costs. Once the cost of each activity has been identified, it is attributed to each product, service, or customer to the extent that the product uses the activity. This allocation has been performed in various ways, but most ABC practitioners think that time studies of activities tend to produce the most accurate cost estimates.
ABC can identify the true drivers of cost. Understanding the cost drivers is powerful for maximizing value creation. ABC can also identify areas of excessively high overhead costs per unit for particular products, services, or customers. Identifying costs that do not add value focuses attention on these activities so that efforts can be directed at reducing specific cost drivers rather than cutting costs across-the-board. Even more impressive is that ABC can also be used to determine the costs associated with particular customers or customer segments, so that unprofitable customers can be stopped from draining resources (this topic is discussed in Section 7, Customer Profitability). Because activities are cross-functional, ABC is inherently a cross-functional measurement process, which facilitates cross-functional collaboration and decision making.48
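The ABC logic (outputs consume activities; activities consume resources) reduces to a simple rate-times-usage allocation. The activities, drivers, and figures below are hypothetical, for illustration only.

```python
def abc_costs(activity_costs, driver_totals, usage_by_product):
    """Assign each activity's cost to products in proportion to driver usage."""
    rates = {a: activity_costs[a] / driver_totals[a] for a in activity_costs}
    return {
        product: sum(rates[a] * used for a, used in drivers.items())
        for product, drivers in usage_by_product.items()
    }

activity_costs = {"order_handling": 50_000, "machine_setup": 30_000}
driver_totals = {"order_handling": 1_000, "machine_setup": 300}  # orders, setups
usage = {
    "product_A": {"order_handling": 900, "machine_setup": 50},
    "product_B": {"order_handling": 100, "machine_setup": 250},
}
costs = abc_costs(activity_costs, driver_totals, usage)
# {'product_A': 50000.0, 'product_B': 30000.0}
```

A broad-brush allocation by sales volume would have buried the fact that product_B, with only a tenth of the orders, consumes five times the machine setups; that is exactly the kind of hidden cost driver ABC surfaces.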
Activity-Based Costing is not without challenges, such as the difficulty of assigning some shared costs and the need to satisfy traditional cost accounting conventions for external reporting. However, like many other emergent measures, even when it is not practical to use it fully, ABC can still be used conceptually. For instance, simply making your sales force more aware of customers’ costs-to-serve can improve customer profitability. Even with incomplete knowledge of activity and cost relationships, some organizations are using ABC thinking to make much better decisions about products and customers than they could have done under the old model.
24. Economic Value Added
Even though its revenue fell 12 percent, one company reported an increased net income of 74 percent, because the company benefited from an $11.4 billion asset write-off that cut its expenses, even as it posted steep declines in sales. Another company reported that an unexpected fourth-quarter profit occurred because of an inventory adjustment that triggered the payment of executive bonuses for the year. Reports like these explain why accounting systems are increasingly being viewed with skepticism and contempt.
Although profit might seem quite straightforward, it is actually one of the most inconsistent financial measures. Accounting profit incorporates so many assumptions and adjustments that it’s no wonder people are confused, and that many accounting statements have more footnotes than a scholarly dissertation!
There are basically two types of profit: ‘‘accounting profit’’ (which includes only the explicit costs and revenues in its calculation) and ‘‘economic profit,’’ which measures the revenue minus both explicit and implicit costs (also referred to as ‘‘opportunity costs’’). The most distinctive characteristic of economic profit is that it includes an expense deduction for the ‘‘cost of capital,’’ which is really the ‘‘opportunity cost’’ of the capital tied up in the organization.
The problem is that people have tended to use capital as if it had no cost consequences. Many companies still report earning a profit, even when their ‘‘profit’’ does not exceed the cost of capital. While they might be able to report an ‘‘accounting profit,’’ they have not earned an ‘‘economic profit.’’ Peter Drucker said it this way: ‘‘Companies do not earn a profit until their revenue exceeds all costs. . . . By that measurement . . . few U.S. businesses have been profitable since World War II.’’49
Economic Value Added (EVA) is a specific form of economic profit that attempts to capture the true profit of an enterprise by removing some distortions from accounting profit. The abbreviation EVA is a trademark of Stern Stewart and Co., which has popularized the measure.50 Put most simply, Economic Value Added is net operating profit minus an appropriate charge for the opportunity cost of all capital invested in the enterprise. The actual formula for EVA is:
EVA = Net Operating Profit After Taxes (NOPAT) – (Capital Employed × Cost of Capital).
EVA is considered to be a good proxy for value creation. When EVA is positive, the firm is viewed as creating value for shareholders; when it is negative, the firm is said to be destroying shareholder value.
Most companies use a confusing array of measures to express financial objectives. EVA can eliminate this confusion by using a single financial measure that creates a common focus for all decision making: ‘‘How can we improve EVA?’’ Using EVA, all parts of an organization can become aligned around the value creation goal. Every business unit and every project can be assessed in terms of whether it is creating or destroying value.
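The formula itself is a one-liner; the value is in what it exposes. Here is a minimal sketch with hypothetical figures, illustrating Drucker’s point that an accounting profit can coexist with an economic loss.

```python
def economic_value_added(nopat, capital_employed, cost_of_capital):
    """EVA: net operating profit after taxes minus a charge for capital employed."""
    return nopat - capital_employed * cost_of_capital

# A hypothetical firm: $12M of after-tax operating profit looks healthy,
# but $150M of capital at a 10 percent cost of capital means the firm
# is destroying $3M of shareholder value a year.
eva = economic_value_added(12_000_000, 150_000_000, 0.10)  # -3000000.0
```

A positive accounting profit and a negative EVA, side by side, is precisely the situation the measure was created to make visible.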
25. Organizational Intangible Value
Intangibles are extraordinarily important today, including partnerships, suppliers, collaborations, skills, knowledge, innovation, patents and other intellectual property, leadership, reputation, and culture. Obviously there is a lot of value that is not listed on a company’s balance sheet. But how much value? The value of companies has been shifting markedly from tangible to intangible assets. These invisible assets are the key drivers of shareholder value in the new economy, but accounting rules do not permit the proper acknowledgement of this shift in terms of the valuation of companies. Statements prepared under generally accepted accounting principles (GAAP) do not record these assets. As a result, stakeholders are blind to the real value of a company.
Some are simply attributing the difference between market value (current market value of the company’s stock) and book value (the current value of the tangible assets of the company) to intangibles. Not only is this not a very accurate estimate, but it also provides little insight as to how the valuation was achieved.
New approaches to valuing intangibles are beginning to surface. More and more economists and business thinkers are beginning to undertake (or, at least, attempt) the difficult task of measuring the real and full value of a company.51 Ben McClure has come up with a way of doing this. He calls it Corporate Intangible Value, and he illustrates the approach using microprocessor giant Intel as his example.52 The approach goes something like this:
- Calculate average pretax earnings for the past three years. For Intel, that’s $9.5 billion.
- Go to the balance sheet and get the average year-end tangible assets for the same three years, which, in this case, is $37.6 billion.
- Calculate Intel’s return on assets (ROA), by dividing earnings by assets: 25 percent.
- For the same three years, find the industry’s average ROA. The average for the semiconductor industry is around 11 percent.
- Calculate the excess return by multiplying the industry average ROA (11 percent) by the company’s tangible assets ($37.6 billion). Subtract that from the pretax earnings in step one ($9.5 billion). For Intel, the excess is $5.36 billion. This tells you how much more than the average chip maker Intel earns from its assets.
- Calculate the three-year average income tax rate and multiply this by the excess return. Subtract the result from the excess return to come up with an after-tax number, the premium attributable to intangible assets. For Intel (average tax rate 34 percent), that figure is $3.53 billion.
- Calculate the net present value of the premium. Do this by dividing the premium by an appropriate discount rate, such as the company’s cost of capital. Using an arbitrary discount rate of 10 percent yields $35.3 billion.
Based on this calculation, the intangible value of Intel is $35.3 billion! As McClure rightfully says, ‘‘Assets that big deserve to see the light of day.’’
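McClure’s steps can be rolled into one small function. The figures are those quoted in the text; the function name is mine, not McClure’s. (Computed without intermediate rounding, the result is about $35.4 billion; the text’s $35.3 billion comes from rounding the after-tax premium to $3.53 billion first.)

```python
def intangible_value(pretax_earnings, tangible_assets,
                     industry_roa, tax_rate, discount_rate):
    """McClure's calculation in one pass (all money figures in billions)."""
    excess_return = pretax_earnings - industry_roa * tangible_assets  # steps 1-5
    after_tax_premium = excess_return * (1 - tax_rate)                # step 6
    return after_tax_premium / discount_rate                          # step 7

# Intel: $9.5B pretax earnings, $37.6B tangible assets, 11% industry ROA,
# 34% average tax rate, 10% discount rate -> roughly $35 billion of intangibles.
civ = intangible_value(9.5, 37.6, 0.11, 0.34, 0.10)
```

Swapping in your own company’s figures takes seconds, which is part of the appeal of the approach.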
But the larger question is: So what? Once you have calculated it, what do you do with that information? The reason why I believe that this information is valuable, and potentially transformational, is that it provides all corporate stakeholders with a better understanding of the value they are managing, investing in, working with, or partnering with. We all know that you have to measure something in order to manage it effectively. When we can measure it, the scope of the management responsibility becomes clearer. However, the challenge is not to store up intangible value that is never used; it’s a matter of using that value, increasing it, and turning it into shareholder value and value for the other stakeholders of the corporation. Section 12, Strategic Readiness of Intangibles, deals with the more qualitative aspect of this important subject.
26. Project Scheduling
Much of the work done in organizations today is project work. Most organizations are full of project teams. Although project managers often are able to claim ‘‘on-time completion,’’ there is considerable evidence that projects, especially multiple projects, are late more often than they are on-time, because that ‘‘lateness’’ often doesn’t show up due to all the ‘‘slack’’ built into project schedules. (One international study found that 91.7 percent of respondents admitted that their projects were finished late!53) Obviously, if you leave sufficient time, you will never be late, but doing so is very inefficient. That is why the way projects are currently estimated is so problematic.
The typical practice today is to ask every resource independently how long their tasks will take. Because of human nature and the punitive experiences people have had when their tasks have been late, each resource tends to estimate conservatively, based on a ‘‘worst case’’ scenario. This means that when all the estimates are rolled into the project plan there are implicit ‘‘buffers’’ built into everyone’s estimates. Furthermore, when the project is implemented, almost none of those resources will ‘‘give back’’ their slack time if they don’t need it; instead they wait until the deadline date to report task completion, since doing otherwise would be an admission that the original estimate was faulty. This method of project planning has resulted in a lot of ‘‘on-time’’ completions of projects that should have taken half the time!
Eliyahu Goldratt, who has developed a number of transformational measures as part of his Theory of Constraints, has created a brilliant solution to the problem of bogus project estimating. It is called the ‘‘Critical Chain’’ method.54 The key to this approach is to schedule each task in a project based on ‘‘average’’ time, rather than ‘‘worst-case’’ time. In order to mitigate the risk of certain estimates being wrong, a single buffer for the entire project allows for some activities to be late. This means that each resource doesn’t build its own ‘‘safety buffer’’ into its estimate, and that it is considered okay for some resources to miss their estimates because the project buffer will absorb that extra time.
The other related measurement innovation is the major tracking measure for the project: the Buffer Index, which provides timely information on the amount of buffer consumed relative to the work completed. This way, the entire project team knows exactly where the project stands relative to on-time completion, based on the proportion of time left in the buffer. This simple but powerful measure gives everyone a single number on which to gauge the status of a project. In addition, projects managed this way achieve extraordinary on-time performance compared with traditionally scheduled projects.
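Definitions of the Buffer Index vary across the Critical Chain literature; the ratio below (buffer consumed relative to critical chain completed) is one common formulation and is an illustrative assumption, not necessarily Goldratt’s exact formula.

```python
def buffer_index(fraction_chain_complete, fraction_buffer_consumed):
    """Above 1.0, the project is eating its buffer faster than it completes work."""
    if fraction_chain_complete == 0:
        return float("inf") if fraction_buffer_consumed > 0 else 0.0
    return fraction_buffer_consumed / fraction_chain_complete

# 40% of the critical chain done with only 25% of the project buffer consumed.
status = buffer_index(0.40, 0.25)  # 0.625, comfortably on track
```

Tracked over time (often as a ‘‘fever chart’’ of buffer consumed against chain completed), this single number tells the whole team at a glance whether the project is trending toward an on-time finish.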
27. Employee Engagement
While most organizations say that ‘‘employees are our most important assets,’’ their measurements seldom reflect it. Few organizations perform much more than a perfunctory employee satisfaction survey to measure how ‘‘their most valuable assets’’ are doing. And just as with the results from many customer satisfaction surveys, the results from employee satisfaction surveys end up in a data warehouse somewhere because no one really knows what to do with the data.
So, what should be done with the trusty old employee satisfaction survey? Should it just be thrown out? Many progressive organizations are beginning to come to the realization that just as ‘‘customer satisfaction’’ is an obsolete construct in today’s hyper-competitive marketplace, the same is true for ‘‘employee satisfaction.’’ The major problem is that, as with customer satisfaction, employee satisfaction tends to be a transactional (moment-to-moment) rating and doesn’t necessarily reflect any strong underlying emotional attachment.
Employee Engagement has been shown to be a construct that is linked to emotions, while satisfaction is simply a cognition (an opinion). It is also much more predictive of retention.
There are quite a number of Employee Engagement measurement instruments that can be used. Probably the best known is the one developed by the Gallup Organization, called the Q12 (12 question) survey.55 The survey questions are as follows:
- Do I know what is expected of me at work?
- Do I have the right materials and equipment I need to do my work right?
- At work, do I have the opportunity to do what I do best every day?
- In the last seven days, have I received recognition or praise for doing good work?
- Does my supervisor, or someone at work, seem to care about me as a person?
- Is there someone at work who encourages my development?
- At work, do my opinions seem to count?
- Does the mission/purpose of my company make me feel my job is important?
- Are my coworkers committed to doing quality work?
- Do I have a best friend at work?
- In the last six months, has someone at work talked to me about my progress?
- This past year, have I had opportunities at work to learn and grow?
It’s easy to perceive the difference between these questions and the typical ‘‘Tell us how much you like all the things we do for you’’ satisfaction surveys. This one is based on the factors that Gallup found are most personally important to employees, regardless of organization. The respondents are asked about their ‘‘feelings,’’ not just their ‘‘thoughts.’’ This is another great example of how a relatively minor ‘‘mental model’’ shift can make a transformational difference.
Interestingly, Gallup’s research has found that in a typical organization, 19 percent of employees are ‘‘actively disengaged,’’ 55 percent are ‘‘not engaged,’’ and only 26 percent are ‘‘engaged.’’ On a traditional employee satisfaction survey, the ‘‘not engaged’’ employees might very well have indicated being ‘‘satisfied.’’
Other Employee Engagement surveys are offered by Satmetrix Systems, called the ‘‘Employee Acid Test’’56 (which is modeled after the ‘‘Customer Acid Test’’—see Section 4, Customer Loyalty) and Mercer Human Resource Consulting’s ‘‘Employee Commitment Assessment,’’57 which measures the following dimensions of the work experience: fit and belonging, status and identity, trust and reciprocity, economic independence, and emotional reward. As you can tell from these dimensions, this is also a far cry from the traditional employee satisfaction survey.
Obviously, measuring a construct like Employee Engagement won’t automatically make your employees more engaged. However, when you have begun to measure contributory factors, and when you know where you stand (the baseline), then you can begin to do something to bring the score up closer to the goal level you and your organization wish to attain.
28. Emotional Intelligence
For a long time, it was assumed that traditional (intellectual) intelligence (I.Q.) was all it took to succeed. I.Q. (Intelligence Quotient) tests have been the standard measurement tools for selecting people for educational placement and jobs. But recent research has indicated that there are ‘‘other intelligences’’ (such as verbal intelligence and spatial intelligence) that might be as important, or even more important, for personal and organizational success than traditional intellectual intelligence.
Originally popularized by Daniel Goleman, Emotional Intelligence (variously referred to as EI, or EQ, for Emotional Intelligence Quotient) has created considerable excitement in the fields of human resources and leadership.58 Emotional Intelligence has been shown to be a major differentiating factor in success.
EQ has been shown to be two times as important as IQ and technical expertise combined.59 Emotional Intelligence skills are distinct from, but synergistic with, intellectual abilities. These performance competencies together explain from 65 percent to 90 percent of ‘‘star performer’’ success in a professional field.60 According to John Grumbar, the most significant determinant of managerial failure was low EQ. Grumbar said, ‘‘Most people are hired on IQ, but fired because of EQ.’’61
Emotional Intelligence typically has four components (understanding yourself, managing yourself, understanding others, and managing relationships with others), and more than twenty competencies, including:
- Emotional Self-Awareness (recognizing our emotions and their effects)
- Accurate Self-Assessment (knowing our strengths and limits)
- Self-Confidence (a strong sense of our self-worth and capabilities)
- Self-Control (keeping our disruptive emotions and impulses under control)
- Trustworthiness (maintaining standards of honesty and integrity)
- Conscientiousness (demonstrating responsibility in managing oneself)
- Adaptability (flexibility in adapting to changing situations or obstacles)
- Achievement Orientation (the drive to meet an internal standard of excellence)
- Initiative and Optimism (readiness to act)
- Empathy (understanding others and taking an active interest in their concerns)
- Leveraging Diversity (cultivating opportunities through many kinds of people)
- Organizational Awareness (‘‘savvy,’’ understanding and empathizing with issues, dynamics, and politics at the organizational level)
- Stewardship Orientation (recognizing and meeting customer needs)
- Developing Others (sensing others’ development needs and responding to them)
- Leadership (inspiring and guiding groups and people)
- Influence (wielding interpersonal influence tactics)
- Communication (sending clear and convincing messages)
- Change Catalyst (initiating or managing change)
- Conflict Management (resolving disagreements)
- Networking and Building Bonds (cultivating and nurturing a web of relationships, seeking partnerships)
- Teamwork and Collaboration (working with others toward shared goals)
Three popular EQ tests are the MSCEIT (Mayer-Salovey-Caruso Emotional Intelligence Test), the ECI (Emotional Competence Inventory), and the EQ-i (Emotional Quotient Inventory).62 They are showing that not only is Emotional Intelligence measurable, but it appears to be ‘‘trainable’’ to a greater extent than IQ.
Emotional Intelligence might be the most relevant transformational measure for this book, since it is a crucial factor in how well measurement is ‘‘socialized.’’ Those with high Emotional Intelligence are clearly more oriented toward socialization, and better able to make it happen throughout the organization.
29. Employee Safety
Workplace safety may sound like an unlikely area for transformational measurement, but you may be surprised by the significant individual and organizational impact transformational measurement can have.
As Daniel Patrick O’Brien explains, most safety measurements ‘‘measure past efforts, loss events, problem areas, and past trends. They are totally dedicated to how things were . . .’’63 Safety measurement tends to focus on injury statistics (such as lost-time accidents) and safety rule compliance issues (safety violations). Almost all accident data are failure-based measurements, such as: the number of injuries/deaths, number of lost work days, number of spills, cost of accidents, number of safety violations, and so on. To make matters worse, accident statistics tend to be incomplete, because some accidents aren’t reported because of peer pressures (‘‘We don’t want our safety streak ended . . .’’) or because they don’t result in lost-time injuries.
Trying to manage safety by counting accidents is like trying to fight a battle by measuring the number of casualties. Measurement used to show how well, or poorly, you did is going to be of little help for improving things. Furthermore, accident rates are due to chance as much as any other factor. This is because there are a lot of unsafe conditions in workplaces and people engage in many unsafe behaviors that do not result in accidents. Research has indicated that, on average, a worker would have to engage in an unsafe behavior 330 times before it resulted in an accident! When an accident does occur, it is often due to bad luck. Companies with low injury rates may actually have big safety problems; they are often just lucky.
The inherent problems with the traditional approach to safety measurement have given rise to a transformational approach to measuring safety. The measure is Safe Behavior,64 a primarily qualitative/subjective measure that can also be converted into quantitative statistics. The concept behind it is that it is much more useful to measure something positive that you want to happen than something negative, especially since measuring accidents is more a measure of ‘‘bad luck’’ than of employee behavior.
According to this new paradigm, measurement occurs during random ‘‘peer observations.’’ Rather than waiting for accidents and injuries to be reported, the observers proactively look for critical ‘‘safe behaviors’’—those that would prevent the most common accidents—listed on a behavioral observation form. Although the observers are not specifically looking for ‘‘unsafe behavior,’’ if they do happen to see it, they will give helpful feedback (but not criticism), and they will not record the unsafe behavior on the observation form. Only the safe behaviors listed on the observation form are recorded. Although the measurements are based on subjective judgments, observers are trained to recognize the Safe Behaviors, and their observations tend to be quite accurate.
Further, because the measurement is positive, there is little or no defensiveness by those being observed. Safe Behavior scores are computed for each team (not individually). These scores are recorded on a scorecard and trends are graphed and displayed, so that teams can see their progress. Employees are encouraged to discuss the measurements and exert ‘‘positive peer pressure’’ to encourage one another to work more safely. When predetermined ‘‘goal levels’’ are reached, some recognition is often provided. The idea is to use ‘‘the power of positive measurement’’ to increase Safe Behavior, not just to monitor it. And, unlike in traditional ‘‘accident measurement,’’ Safe Behavior is completely under the control of the employees.
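As a sketch of how such team scores might be computed and trended, the snippet below pools peer-observation checklists into a single percent-safe figure. The record shape and the scoring scheme are illustrative assumptions on my part, not a published behavior-based-safety protocol.

```python
def safe_behavior_score(sessions):
    """Percent of checklist behaviors observed as safe, pooled across sessions."""
    safe = sum(observed for observed, _ in sessions)
    listed = sum(total for _, total in sessions)
    return 100.0 * safe / listed if listed else 0.0

# Three peer-observation sessions for one team:
# (safe behaviors observed, safe behaviors on the checklist).
score = safe_behavior_score([(18, 20), (25, 25), (17, 20)])  # about 92.3
```

Plotting these weekly team scores on a visible scorecard, against a predetermined goal level, is what turns the observations into the ‘‘positive peer pressure’’ the approach relies on.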
30. Employee Presenteeism
Current measures of attendance and health ignore probably the most serious problem faced in the workplace: Presenteeism.65 While the presenteeism problem has existed in some form or another for centuries, the term itself is relatively new. Presenteeism occurs when people show up for work with illnesses and other issues that reduce their productivity and spread disease. Like any other construct, Presenteeism cannot be effectively managed until it is measured.
Presenteeism is widely thought to be caused by a fear of loss of income or employment on the part of the employee. Many companies do not offer sick leave benefits for illnesses lasting three days or less. On top of that, the recent dramatic increases in health insurance rates and skyrocketing health care costs have caused many employees to be more reluctant to seek medical attention.
Presenteeism can have catastrophic effects on a company’s output, as well as present hidden long-term costs and wider social problems. Employees who arrive at work ill may operate at only a fraction of their normal capacity despite receiving the same wages and benefits as employees operating at 100 percent. They may also be more prone to mistakes and injuries, and they are more likely to transmit contagious diseases to fellow employees, causing even more work efficiency problems.
Now that Presenteeism has been identified and defined, it can be measured and managed. When the Employers Health Coalition of Tampa, Florida first studied the problem and analyzed seventeen diseases, it found that lost productivity from Presenteeism was 7.5 times greater than productivity loss from absenteeism. For specific problems, like allergies, arthritis, heart disease, hypertension, migraines, and neck or back pain, the ratio was more than 15 to 1.66
Researchers at the Institute for Health and Productivity Studies at Cornell University found that up to 60 percent of the total cost of employee illnesses came from Presenteeism.67 Studies such as the above are in the forefront of emergent efforts that are beginning to deal with this problem, which has been ignored for so long because it was never measured.
31. Learning Effectiveness
Despite the more than $300 billion American companies spend annually on training, there is little data to show any positive impact of learning on business results. In fact, most companies and government agencies don’t even try to measure the impact, either because they don’t know how or feel that it is too difficult. That is why training and other learning programs are still measured by such indicators as the number of programs run, the number of participants, the number of course days, training investment per capita, and end-of-course satisfaction surveys.68 These are not really measures of learning effectiveness; they are measures of learning activity.
While almost everyone believes that there must be a causal relationship between training and business results, few have seriously looked, and even fewer have been able to find one. There has been recent attention to isolating the ROI of training programs, but most of that activity has focused on ‘‘easy pickings,’’ like showing that basic job skills training of employees improves performance. Actually, that is pretty obvious without an ROI calculation!
Furthermore, in this era of knowledge management, coaching, and mentoring, traditional training programs are becoming more difficult to isolate from everything else that is being done to improve employee performance. That is why I developed Learning Effectiveness Measurement (LEM) at IBM to address the weaknesses in the traditional approach to training measurement.69
One of the biggest challenges in learning has been how to bridge the gap between learning and real organizational impact. To bridge this gap, a systematic process was needed in order to trace the chain of causality between typical learning measures (acquisition of knowledge and skills) and more results-oriented organizational measures. The centerpiece of LEM is the concept of ‘‘causal chains,’’ diagrams that are used to trace the impact of learning through a ‘‘chain’’ of causes and effects: from ‘‘acquisition of knowledge and skills,’’ to ‘‘behavior change,’’ to ‘‘individual or team performance improvement,’’ to ‘‘organizational performance improvement,’’ and culminating with ‘‘organizational results measures.’’
What is most important is not the diagram, of course, but rather the understanding that is obtained through the process of developing, examining, and interacting with it. The causal chain provides a roadmap for designing more effective learning programs and a measurement plan for tracking the impact of the learning programs to the desired results. This causal understanding has long been the missing link in training that attempts to achieve a business impact. Not only does this causal logic help identify measures that can be used for tracking all the key links in the ‘‘learning to business impact chain,’’ but, more importantly, it provides visibility to the critical linkages needed for driving that impact.
LEM is more than just a conventional learning measurement methodology. It is an approach for planning and managing the learning and performance improvement process to achieve the desired organizational impact.
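The causal-chain idea can be sketched as a simple data structure: an ordered list of stages, each paired with measures for tracking that link. The stage names below come from the chapter; the measures attached to each stage are invented placeholders, not IBM’s actual LEM metrics.

```python
# A minimal sketch of an LEM-style causal chain: ordered stages from
# learning to business results, each with example tracking measures.
# Stage names follow the text; the measures are hypothetical.

CAUSAL_CHAIN = [
    ("acquisition of knowledge and skills",
     ["post-course assessment score"]),
    ("behavior change",
     ["observed use of the new skill on the job"]),
    ("individual or team performance improvement",
     ["task cycle time"]),
    ("organizational performance improvement",
     ["process error rate"]),
    ("organizational results",
     ["revenue per employee"]),
]

def measurement_plan(chain):
    """Flatten the chain into a stage -> measures tracking plan."""
    return {stage: measures for stage, measures in chain}

plan = measurement_plan(CAUSAL_CHAIN)
for stage, measures in plan.items():
    print(f"{stage}: {', '.join(measures)}")
```

The value of writing the chain down this way is that every link gets an explicit measure, so a break in the chain (say, knowledge acquired but behavior unchanged) becomes visible instead of being buried in an end-of-course survey.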
32. Information Orientation
While organizations can measure almost every aspect of the operation of their IT infrastructure in excruciating detail, the nontechnical aspects of information use are rarely measured at all. Existing measures tell us little or nothing about how well a company profiles the information needs of employees, filters information to prevent overload, identifies key knowledge sources, trains employees to use information, shares information, or reuses information. What people do with information is as important as, or more so than, the technology they use to manage it. Without the ability to measure ‘‘information use,’’ most of what twenty-first-century organizations actually do can’t be managed.
Information Orientation (IO) is an emergent measure of how well an organization uses the information it has. It is based on the extensive research by Donald Marchand, William Kettinger, and John Rollins, who have studied IO in hundreds of companies in many industries worldwide.70 Until the development of new measures such as this, the indicators of effective information use had been largely invisible.
The IO of an organization comprises three ‘‘capabilities,’’ only one of which relates to IT application and infrastructure management. The other two IO capabilities are concerned with 1) ‘‘managing information’’ over its lifecycle and 2) the ability of the organization to instill and promote ‘‘behaviors and values’’ conducive to the effective use of information.
IO is composed of the following practices, behaviors, and values:
• IT Practices (ITP): IT for operational support (controlling operations); IT for business process support (deployment of hardware, software, and expertise to facilitate business processes); IT for innovation support (hardware and software support for employee creativity); IT support to facilitate management decision making
• Information Management Practices (IMP): sensing information (how information is detected and identified); collecting information (gathering relevant information); organizing information (indexing, classifying, and linking information); processing information (accessing and analyzing information prior to decision making); maintaining information (re-using, updating, and refreshing information)
• Information Behaviors and Values (IBV): information integrity (improving security and reducing manipulation of information); information formality (increasing the trustworthiness of formal information); information control (disclosure of business information to appropriate stakeholders); information sharing (facilitating the free exchange of information within functions and across the enterprise); information transparency (increasing trust and honesty relative to information); information proactiveness (increasing the propensity of people in the organization to seek out and enhance information)
Information Orientation isn’t just about doing the things listed above, but about doing them well. While organizations with low IO do many of these things, they do not do them well or thoroughly. It’s not enough to just collect and organize a lot of data; you must be able to turn the data into the right knowledge and action.
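One way to make the three capabilities concrete is to roll practice-level ratings up into an overall score. The sketch below is a hedged illustration only: the capability names follow the text, but the equal-weight averaging and the sample ratings are assumptions, not Marchand, Kettinger, and Rollins’s published scoring model.

```python
# A hypothetical roll-up of 1-5 practice ratings into an overall
# Information Orientation score. Equal weighting across the three
# capabilities (ITP, IMP, IBV) is an assumption for illustration.

def dimension_score(ratings):
    """Average the 1-5 ratings given to one capability's practices."""
    return sum(ratings) / len(ratings)

def io_score(itp, imp, ibv):
    """Equal-weight average of the three IO capabilities (assumed)."""
    return (dimension_score(itp)
            + dimension_score(imp)
            + dimension_score(ibv)) / 3

itp = [4, 3, 4, 5]        # IT Practices: four practice ratings
imp = [3, 4, 3, 4, 3]     # Information Management Practices
ibv = [2, 3, 3, 4, 3, 2]  # Information Behaviors and Values

print(round(io_score(itp, imp, ibv), 2))  # 3.41
```

Even this crude roll-up makes the chapter’s point visible: an organization can rate well on IT Practices yet be dragged down by weak Behaviors and Values, which is exactly the ‘‘doing it, but not doing it well’’ pattern of low-IO organizations.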
33. Information Proficiency
Charles Leadbeater, author of The Weightless Society, said, ‘‘Our capacity to generate information far outstrips our ability to use it effectively.’’71 Information work will not really become true knowledge work until individuals and organizations develop better capabilities for transforming data into information, information into knowledge, and knowledge into wisdom.
I know of no measures of employee ‘‘information use’’ other than what Thomas Buckholtz calls Information Proficiency. For the most part, organizations hire smart people, and then throw them into a system that is drowning in data. Most of the employees barely stay afloat, much less do anything proactive or creative with this data. I know of no curriculum on Information Proficiency, though it might exist somewhere.
Ironically, a company might have the greatest IT infrastructure in the world, but if Information Proficiency is low, enormous waste is the likely result. Most organizations are measuring ‘‘information availability’’ rather than ‘‘information use.’’
According to Buckholtz, ‘‘Information Proficiency is the effective use of information to define and achieve goals. Operationally, information proficiency denotes quality in making and implementing decisions.’’72 There are two aspects of Information Proficiency:
- Measuring proficiency with information to make decisions
- Measuring proficiency through information to implement decisions
The measurement method suggested by Buckholtz is an interesting one, involving the reflection on a representative decision in which the respondent was involved. The complete questions and scoring system are contained in his book. To give you a sense of the measures, the following questions are used by Buckholtz to measure ‘‘proficiency with information to make decisions’’:
- Objectives were clear relative to the decision.
- The right participants were involved in the decision-making process.
- An effective decision-making process was used.
- There was appropriate management of the decision-making process.
- Progress of the decision-making process was appropriate for the priority of the decision.
- The key issue in the decision was determined early.
- The participants were well coordinated during the decision-making process.
- Communication around the decision-making process was appropriate.
- The decision was made at the right time for optimal impact.
- The decision (or nondecision) was well communicated.
- Learning occurred from the decision-making process.
- There was learning from past decisions.
- The decision was reviewed at an appropriate time.
- The decision included a plan for implementing it.
- Sufficient information was used for making the decision.
- The quality of information used in making the decision was appropriately verified.
- ‘‘Meta-information’’ (information about information) was appropriately used.
- The quality of the information used for decision-making was appropriately considered.
- Optimal results on organizational goals were achieved from the decision.
Although Buckholtz’s response options are fairly complex and tailored to each item, I see no reason why a standard 5-point (5 Strongly Agree to 1 Strongly Disagree) rating scale, or another appropriate rating scale, could not be used.
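Scoring such an instrument with a standard 5-point agreement scale is straightforward. The sketch below is a minimal illustration, not Buckholtz’s own scoring system: the item texts are abbreviated from the list above, and the averaging and ‘‘weakest item’’ diagnostic are assumptions for the example.

```python
# A minimal sketch of scoring Buckholtz-style decision items on a
# standard 5-point agreement scale. Items are abbreviated; the
# mean-and-weakest-item analysis is an assumed approach.

SCALE = {"strongly agree": 5, "agree": 4, "neutral": 3,
         "disagree": 2, "strongly disagree": 1}

def score_decision(responses):
    """responses: dict mapping item text -> verbal rating."""
    numeric = {item: SCALE[r.lower()] for item, r in responses.items()}
    mean = sum(numeric.values()) / len(numeric)
    weakest = min(numeric, key=numeric.get)  # lowest-rated item
    return mean, weakest

responses = {
    "Objectives were clear": "agree",
    "Right participants involved": "strongly agree",
    "Sufficient information used": "neutral",
    "Learning occurred from the process": "disagree",
}
mean, weakest = score_decision(responses)
print(round(mean, 2))  # 3.5
print(weakest)         # Learning occurred from the process
```

The average gives a team a single proficiency number to track over time, while the weakest item points directly at where to spend improvement effort, which is the ‘‘developing plans for improving it’’ step described next.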
Information Proficiency measurement can have a truly transformational impact on an organization. I can’t imagine an individual, team, function, or entire organization not improving the quality of its decision making if it were to diligently use this construct and spend time developing plans for improving it.
34. Knowledge Flow
No one doubts that better management of knowledge within an organization will lead to improved collaboration, innovation, and competitive advantage. It has been pointed out that, while an organization’s ‘‘data’’ resides in its computer systems, its ‘‘intelligence’’ is found in its social systems. In this knowledge-intensive economy, organizations need better understanding of how knowledge is being shared so that they can manage it more effectively. In the future, ‘‘who knows what’’ and ‘‘who shares with whom’’ will be more important than the traditional symbol of status, ‘‘who knows whom.’’
Social Network Analysis (SNA), known in its organizational application as Organizational Network Analysis (ONA), is the mapping and measuring of relationships and information flows between people in a social group or organizational network.73 The insights resulting from such analysis are often quite compelling and counter-intuitive.
This is how SNA works: Those selected for an analysis complete a survey asking them questions about with whom they share knowledge, and what kind of knowledge they share. As a result of the survey data, knowledge networks are mapped that uncover interactions within and across the boundaries of the organization.74 This analysis results in a map of how knowledge and expertise is shared. Each person in the analysis is represented by a ‘‘node’’ on the network map. The primary nodes are the people who are most central to this network. They are typically the acknowledged experts, who are sought out for critical information and knowledge, or people who are just prolific networkers.
In addition to the network maps, there are a number of measures that are computed by the software. Several of the measures relate to the ‘‘centrality’’ of nodes. These measures help determine the importance, or prominence, of a node in the network. It is always interesting to see that network location often differs significantly from location in the formal hierarchy or on the organization chart. ‘‘Degree centrality’’ is the measure of network activity for each node using the concept of ‘‘degrees’’ (the number of direct connections).
Contrary to what people may think, in personal networks, having more connections is not always better. What really matters is where those connections lead. ‘‘Betweenness centrality’’ measures how often a node lies on the paths between other nodes. For example, someone who sits between many otherwise unconnected people plays a ‘‘broker’’ role in the network. A node with high ‘‘betweenness’’ has great influence over what flows in the network.
‘‘Closeness centrality’’ measures how close a node is to all the other nodes in the network. Those with the shortest paths to others are in a particularly good position to monitor the information flow in the network; they have the best visibility into what is happening in the network. ‘‘Network centralization’’ measures how strongly the network as a whole depends on one or a few central nodes. A centralized network—one that is dominated by one or a few central nodes—presents a dangerous situation, since the removal of any of these nodes could cause the network to fragment. ‘‘Hubs’’ are nodes with high degree and betweenness centrality. A highly centralized network is at risk, and can abruptly fail, if any hub is disabled or removed. ‘‘Average path length’’ is the average length of the paths in a network. Research indicates that shorter paths in the network are the most important ones.
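The centrality measures above can be computed directly from survey data. The sketch below builds a toy knowledge-sharing network (the names and edges are invented) and computes degree centrality and closeness centrality; closeness here uses one common definition among several, the ratio of reachable nodes to total shortest-path distance.

```python
# Degree and closeness centrality for a toy knowledge-sharing network.
# Names and edges are hypothetical survey results.

from collections import deque

# Undirected "who shares knowledge with whom" edges.
EDGES = [("Ana", "Ben"), ("Ben", "Caro"), ("Caro", "Dee"),
         ("Ben", "Dee"), ("Dee", "Eli")]

def build_adjacency(edges):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def degree_centrality(adj):
    """Number of direct connections per node."""
    return {node: len(nbrs) for node, nbrs in adj.items()}

def shortest_path_lengths(adj, start):
    """Breadth-first search distances from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def closeness_centrality(adj, node):
    """Reachable nodes divided by total distance to them (one definition)."""
    dist = shortest_path_lengths(adj, node)
    return (len(dist) - 1) / sum(dist.values())

adj = build_adjacency(EDGES)
print(degree_centrality(adj)["Ben"])                # 3: Ben is a hub
print(round(closeness_centrality(adj, "Ben"), 2))   # 0.8
```

In this toy network Ben has the most direct connections and the shortest average paths to everyone else, so a network map would place him at the center; in a real analysis that is exactly the signal that identifies the acknowledged experts and brokers.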
This brief description of some of the key SNA concepts and measures should give you an idea of how much data can derive from such an analysis. SNA can be used for many purposes, including: mapping personal influence, identifying innovators in particular areas, mapping the interactions of people involved in a change effort, improving the functioning of project teams, discovering emergent communities of interest, identifying cross-border knowledge flows, exposing possible terrorist networks, and locating technical experts in a field. Most importantly, this measurement enables more effective management of social networks.