Leading and lagging measures

I read and hear advice saying that you can manage organizational performance more effectively by using leading and lagging measures (sometimes called leading and lagging indicators). If a manager relies only on a lagging measure, it may be too late to fix things by the time the measure shows that performance is not satisfactory.

Some advice on identifying leading and lagging measures is not very sound. Here is one discussion of this topic that I found on the internet:

Definition of leading indicator: Activities that should be trended as they predict the outcomes (i.e., lagging indicators). Quotas or goals should only be placed on lagging indicators and never on leading indicators. Placing a goal on a leading indicator will result in gaming and generate the wrong results.

Examples:

  • Average Speed of Answer
  • Average Handle Time
  • First Contact Resolution
  • Number of contacts

I have a couple of problems with this definition and its advice.

  • First, it is a mistake to think that a measure is by its nature either leading or lagging. Each of the above examples could just as well be a lagging measure for certain processes. For example, average speed of answer could be a leading indicator for customer satisfaction and a lagging indicator for the number of staff available. Number of contacts could be a leading measure of the number of sales made and a lagging indicator for the number of marketing ads aired.
  • Second, placing a quota (which is a performance target) on a performance measure may or may not result in gaming.  A performance target itself does not cause gaming. What causes gaming is the punitive use of performance measures by management. Gaming can occur with any performance measure—leading, lagging, or other.

“Leading” and “lagging” are relative, not absolute notions.  A simple way to understand leading and lagging measures is to use the metaphor of a stream or river to imagine how work flows through your organization. When you think about where to measure the performance of a work process, you can stand at one point and look both upstream and downstream.

A leading measure is “upstream” in a work process. A lagging measure is “downstream” in the same process. The job of the leading measure is to be an early warning signal of process performance in time to adjust the work-in-process to achieve the desired result on the lagging measure. But a work process can have a number of linked, dependent activities.

To identify a pair of leading and lagging measures, begin by identifying a lagging measure at some point in the process. It is reasonable for that point to be the final outcome of the process, but it could be earlier.  Then look upstream from that point to identify a measure that would signal that the process was beginning to go wrong relative to the lagging measure.

For a leading measure to do its job there has to be a real and direct relationship between the performance attribute it is measuring and the performance attribute the lagging measure is measuring. A leading measure and lagging measure work together not by how they are quantified, but by how the performance attributes they are measuring are linked.

Here are examples of leading and lagging measures for different work processes:

  • In production, a leading measure can be amount of product produced. A lagging measure can be level of inventory.
  • In customer service, leading measures can be employee satisfaction and product quality.  A lagging measure can be customer satisfaction with the product.
  • In employee training, leading measures might be the number of employees trained and the percent of trainees giving positive ratings to the training. A lagging measure can then be the degree of improvement in work processes due to training.
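
If you track candidate leading and lagging measures as time series, you can check whether the link is real before relying on it. Below is a minimal sketch in Python, using made-up weekly numbers for the customer-service example above (the column names and values are hypothetical), that correlates the leading measure with the lagging measure at several lags. A stronger correlation at a positive lag is evidence, though not proof, that the upstream attribute really does signal the downstream one.

    import pandas as pd

    # Hypothetical weekly data for the customer-service example:
    # employee satisfaction as a candidate leading measure,
    # customer satisfaction as the lagging measure.
    data = pd.DataFrame({
        "employee_satisfaction": [72, 74, 71, 69, 70, 73, 75, 74, 72, 71],
        "customer_satisfaction": [80, 81, 80, 78, 76, 77, 79, 81, 80, 79],
    })

    # Correlate the leading measure at week t with the lagging measure at week t + lag.
    # If the candidate really leads, the correlation should peak at some positive lag.
    for lag in range(4):
        corr = data["employee_satisfaction"].corr(data["customer_satisfaction"].shift(-lag))
        print(f"lag of {lag} week(s): correlation = {corr:.2f}")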

As always, I invite comments below or by e-mail.

Using an Agree-Disagree scale on a questionnaire–a repost with update

Note: This topic was the subject of a post several years ago and is one of the most frequently read posts in this blog.  I am posting it here again with an update at the end.

On employee and customer surveys, it is very common to use Agree-Disagree items with the following 5 response options:   Strongly Agree—Agree—Neither Agree nor Disagree—Disagree—Strongly Disagree

This type of item is referred to as a “Likert item” because it was introduced by an organizational psychologist named Rensis Likert in 1932 in a journal article titled “A Technique for the Measurement of Attitudes.” Others used this type of item at that time, but Likert was the first to assign numbers from 1 to 5 to the response options. [By the way, Likert’s name is often mispronounced “Lie-kert;” it is “Lick-ert.”]

In recent years, the Likert item has become a topic of debate. The debate centers around the use of what is called the “neutral middle” response option—“Neither agree nor disagree.” I have participated in the debate from time to time on the side of not using the neutral middle response option.  My surveys use 4 response options with an option to not respond:  Strongly Agree—Agree—Disagree—Strongly Disagree—No Response

I do not use or recommend the “neutral middle” for the following reasons:

  1. Marking “Neither agree nor disagree” is the same as not responding to the item.
  2. When using a 5-point scale to score responses, the neutral middle (a non-response) gets a score of 3. When calculating item means and standard deviations, the 3’s pull the item means toward the middle of the scale. This causes items on a survey to look more similar than they really are.
  3. Including the neutral middle encourages the “lazy” respondent to simply mark the middle option throughout, with the effect on the item data discussed in point 2 above.
  4. In a survey, I want respondents to think about the item and report an opinion, no matter how weak that opinion is. Not including the neutral middle option encourages reporting an opinion.
  5. Including a “No response” or “No opinion” option next to the 4-point scale still accommodates respondents who truly have no opinion, but it sits off the scale.

When analyzing the data, I do not include “No responses” in the scoring of the item, but I do calculate the percentage of non-respondents.  I prefer to calculate percentages for each option rather than calculating the item mean and standard deviation.  Percentages display the actual distribution of opinions for an item, whereas the item mean hides this distribution.
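
To make that analysis concrete, here is a minimal sketch in Python using made-up responses to a single item. It leaves “No Response” out of the scoring, reports the percentage of respondents choosing each of the four options, and reports the non-response rate separately.

    from collections import Counter

    # Hypothetical responses to one 4-option Agree-Disagree item.
    responses = ["Strongly Agree", "Agree", "Agree", "Disagree", "No Response",
                 "Strongly Disagree", "Agree", "Disagree", "No Response", "Strongly Agree"]
    options = ["Strongly Agree", "Agree", "Disagree", "Strongly Disagree"]

    counts = Counter(responses)
    n_scored = len(responses) - counts["No Response"]   # score only actual responses

    for option in options:
        print(f"{option}: {100 * counts[option] / n_scored:.0f}% of respondents with an opinion")
    print(f"No Response: {100 * counts['No Response'] / len(responses):.0f}% of all respondents")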

Update on November 30, 2014

The Likert scale is an ordinal scale (see my posts from April 2012 and May 2012 for a discussion of types of measurement scales).  It would be correct to argue that it is inappropriate to calculate the mean and standard deviation for an ordinal scale, and I accept the correctness of this argument.  A Likert scale has no unit of measure, so its scale values are not actual quantities that can be added.

But I wish to make a different point here.  Calculating the mean of a five-point attitude item on a questionnaire illustrates a common criticism of statistics: statistics can be misleading.

Consider this example.  On a 5-point Likert scale, suppose half of the respondents say Strongly Agree (a score of 5) and half say Strongly Disagree (a score of 1).  The mean would be 3.  On the Likert scale, a 3 means “neither agree nor disagree,” yet these respondents hold strong opinions indeed.  By its nature, the mean hides the variation in the data.  To understand what these respondents truly think, it is better to calculate the percentage of responses at each level of the Agree-Disagree scale.
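
The arithmetic is easy to verify. Here is a tiny sketch, using hypothetical scores split evenly between 5 and 1:

    from statistics import mean

    # Half Strongly Agree (5), half Strongly Disagree (1).
    scores = [5, 5, 5, 1, 1, 1]

    print(mean(scores))  # prints 3, the "neither agree nor disagree" point
    # The percentage distribution shows the polarization the mean hides.
    for level in (5, 1):
        print(f"score {level}: {100 * scores.count(level) / len(scores):.0f}%")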

As always, I invite questions or comments by clicking on “Comments” immediately below this post, or you can send an e-mail.

 

Performance measurement and Senge’s learning organization

Recently I revisited Peter Senge’s popular book, The Fifth Discipline:  The Art and Practice of the Learning Organization, published in 1990.  It had been some years since I last read it.  I was pleased to find that the five “disciplines” that define his concept of a learning organization seem to be very compatible with my notions of the best uses of performance measurement and a data-savvy culture.  Each of the five disciplines is really a set of principles and practices that can be mastered, and each contributes to managing with measures.

Below is a very brief description of the five disciplines along with how I think they correspond to successful use of performance measures.

Systems Thinking is the thinking that keeps a full picture in our minds of how an organization functions to achieve its goals, including how the many processes of the organization are interrelated and interdependent.

Senge’s concept of systems thinking corresponds to the process of constructing a performance logic model as a key task when choosing the right performance measures.  Examples of performance logic models can be found in Step 2 in Managing with Measures:  How to Choose the Right Performance Measures for Your Organization.

Personal Mastery is the process of individual learning and growth, and of continually clarifying our vision of what we are trying to achieve.  It requires a questioning attitude and an awareness of what we don’t know.  It requires self-confidence and a willingness to learn and investigate when things are not going well.

Senge’s concept of personal mastery corresponds to the need for a manager to have the confidence to set specific performance expectations, and to want to know the facts, whether good or bad.  These qualities of a manager are discussed in Leadership Principle #3 in Managing with Measures:  How to Choose the Right Performance Measures for Your Organization.

Mental Models are the frameworks managers and employees use to think about, interpret, and explain the organizational behaviors that make up the organization’s culture-in-action.  These frameworks are composed of images, assumptions, stories, generalizations, and explanations that determine what we choose to see and how we choose to act.

Senge’s concept of mental models corresponds to an organization’s corporate culture, especially when it transcends internal politics and game playing.  If we zoom in on the particular images, assumptions, stories, generalizations, and explanations that drive the effective use of performance measures, we will be looking at a data-savvy culture.  A data-savvy culture is discussed in several recent posts to this blog.

Shared Vision is a picture or image carried in our heads and hearts about what an organization can become.  It is a collective caring that puts personal goals in sync with organizational goals.  It fosters risk-taking and experimentation as it pulls all personnel toward the better organization they want to achieve.

Senge’s concept of shared vision corresponds to the notion that successful implementation of a system of performance measures requires a united leadership team that can communicate about measurement for employee buy-in.  Why this is important is discussed in Step 7 in Managing with Measures:  How to Choose the Right Performance Measures for Your Organization.

Team Learning is a formal structure for accumulating individual and group learning through open communication, thinking together, and group discussion to reach agreements.  It results in collaborative problem solving, shared meaning, and shared understanding.

Senge’s concept of team learning corresponds to the use of a performance measurement system that engages all employees.  It is at the heart of a data savvy culture.  Fostering a data savvy culture is discussed as Leadership Principle #4 in Managing with Measures:  How to Choose the Right Performance Measures for Your Organization.

My take-away from my review of Senge’s excellent book is that an organization that has a data savvy culture can be understood as an example of a learning organization.

I invite your comments below or by email to mail@managingwithmeasures.com

Fast and slow thinking for better performance measures

Recently I read a very informative 2011 book by Daniel Kahneman titled Thinking, Fast and Slow.  Kahneman won the 2002 Nobel Prize in Economics for “having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty,” work he did largely in collaboration with the late Amos Tversky.

His book is nearly 500 pages and summarizes what he and others have learned over the last several decades about how the human mind works, especially in making judgments and decisions.  Kahneman and other researchers have come to think of our mental life in terms of two types of thinking: System 1 thinking, which is fast and intuitive, and System 2 thinking, which is slow and deliberate.  In our daily life, he says, we rely mostly on System 1 fast thinking, and this got me wondering how these two types of thinking relate to “managing with measures.”

What is fast and slow thinking?

  • System 1 is automatic with little or no effort or control.  It relies on intuitions and heuristics that we develop out of our experiences and which we store away for quick and easy use in day-to-day living.  Examples of fast thinking are being able to recognize instantly if someone on the telephone is angry, or when you enter a room, immediately sensing that the group in the room was talking about you.  Experts in a field, such as a physician or a chess master or a firefighter, also rely on System 1 thinking to come to conclusions quickly based on years of accumulated knowledge, experience, and practice.

The problem with fast thinking is that it is susceptible to systematic errors, that is, bias, because its nature is to believe what we already know and to confirm what we already believe.  It jumps to conclusions.  It can ignore evidence.  It is influenced by our emotions.  Kahneman reports that “emotion now looms much larger in our understanding of intuitive judgment and choice than it did in the past.”  System 1 thinking allows us to think with our feelings.

  • Slow thinking, System 2 thinking, involves reflection and requires concentration.  Its nature is to be systematic and orderly to figure out how to deal with a problematic situation that System 1 thinking has never dealt with.  It is a willful choice to “think it through,” and it takes time and effort.  We have to focus, to pay attention until we figure the situation out.

The problem with slow thinking is that, because it requires us to pay attention, our thinking can be disrupted when our attention is diverted or distracted.  It is difficult to stay focused. And because it takes a lot of effort and concentration, we have only so much energy to give to it.  Thus, we tend to avoid System 2 thinking if we can, and if we do make the effort, we find it hard to sustain it.

How might fast and slow thinking relate to “managing with measures?”  Here are some speculations:

  • Deciding what to measure and measuring only what is important for high performance requires thoughtful effort, System 2 thinking.  My book, Managing with Measures, explains how to do it in seven steps, but I did not title the book, “7 easy steps to managing with measures.”  It takes time and effort to think through each step.
  • Managers who don’t measure performance may prefer System 1 thinking because it takes less effort than System 2 thinking.  Or they measure what is easy, avoiding the effort needed to choose the right measures for their performance priorities.
  • The fact that so many organizations are “data-rich and information-poor” (i.e., they collect data on lots of measures that they ignore) may also be due to a preference for System 1 thinking.

I invite your comments below or by email to mail@managingwithmeasures.com

A data-savvy culture in a warehouse

In this post, I am continuing to explore the concept of a data-savvy culture, pointing out its distinctive characteristics and describing its benefits.  The example this month is a food distribution warehouse in a large food distribution company as described by Clark and Sink (1995).

The researchers framed three guiding questions for this case study and addressed these to the management of the company:

  • How would you know if your organization is healthy and improving over time?
  • How would you know if the resources you spend on improvement initiatives are actually having the desired impact on organizational performance?
  • How would your employees know whether or not what they do every day has any impact at all on the system-level measures and goals you set for them?

Clark and Sink theorized that a Visual Measurement System (VMS) could help answer these questions.  To test this theory, they designed and implemented a VMS in one warehouse in a company with 12 warehouses serving thousands of retail outlets.  The study took 18 months.

The VMS they implemented had three parts—a monthly chartbook, a weekly chartbook, and visible erasable whiteboards located throughout the warehouse.

The monthly chartbook contained this information:

  • A history of critical incidents in the life of the warehouse to help future planning.
  • A summary of recent customer feedback.
  • A summary of recent vendor feedback.
  • A matrix of major improvement interventions crossed with key performance indicators to show which intervention is affecting which indicator.
  • Action plans for the major improvement initiatives showing who owns them, what they plan to do, when it will be done, and which KPIs will be impacted.
  • Employee perceptions of performance from asking a dozen random employees every month whether the warehouse is getting better, how they know, and on what basis they made this decision.

The weekly chartbook contained some 20 control charts for key performance indicators.  These are a few of those indicators:

  • shipped tons per direct labor hour by week
  • grocery receiving cases per hour by week
  • percent on-time departures from warehouse by week
  • percent on-time arrival at customer by week
  • errors per 1000 cases shipped by week
  • dollar value of grocery damages adjusted for recoup.

The sources of the data for the weekly chartbook were erasable white boards placed throughout the warehouse.  Employees began each shift at their area’s board to plan the work for the shift.  Based on customer orders for that shift, employees decided with minimum supervision how to assign duties.  They updated the board as work progressed, and met at the board at the end of the shift to review the day’s performance.  Each week, data from the boards was entered into the weekly control charts and posted in the warehouse.  At weekly shift meetings, all employees analyzed the previous week’s performance.  Each month, a leadership team composed of managers, supervisors, union, and employees created the monthly chartbook to share the status of improvement initiatives and their results on total performance.
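
The paper does not spell out how the control limits for these charts were computed. As an illustration only, here is a minimal sketch in Python, assuming an individuals (XmR) chart and made-up weekly values, of how limits for one KPI such as errors per 1,000 cases shipped could be derived.

    # Made-up weekly values for one KPI: errors per 1,000 cases shipped.
    weekly_errors = [4.1, 3.8, 4.5, 4.0, 3.6, 4.2, 4.8, 3.9, 4.3, 4.0]

    center_line = sum(weekly_errors) / len(weekly_errors)
    moving_ranges = [abs(b - a) for a, b in zip(weekly_errors, weekly_errors[1:])]
    average_moving_range = sum(moving_ranges) / len(moving_ranges)

    # 2.66 is the standard XmR constant that converts the average moving range
    # into three-sigma limits around the center line.
    upper_control_limit = center_line + 2.66 * average_moving_range
    lower_control_limit = max(0.0, center_line - 2.66 * average_moving_range)

    print(f"center line: {center_line:.2f}")
    print(f"upper control limit: {upper_control_limit:.2f}")
    print(f"lower control limit: {lower_control_limit:.2f}")

Weekly points outside limits like these, or unusual runs within them, would be the kind of signal a shift meeting investigates.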

The researchers noted that creating and using a VMS is a non-linear, recursive process that involved creating control charts for available KPIs and thinking through how to collect and portray data for KPIs that were not available.  To quote the authors’ conclusion for the study, “Sharing information, knowledge, and power through the Visible Measurement System has enabled employees to plan, execute, study, and improve their daily work, giving them responsibilities unheard of in a prior era and increasing performance on key distribution system indicators like total cost per shipped ton and total quality error rate by 20-30% in just 18 months.”

Reference:

Clark, L. A. & Sink, D. S. (1995). Visible measurement systems improve performance. Paper presented at the 49th Annual Quality Congress. Cincinnati, OH.

I invite your comments below or by email to mail@managingwithmeasures.com

A data-savvy culture using financial data

In this month’s post I am continuing to explore the topic of a data-savvy culture in the context of managing organizational performance. It is one thing for managers to be the primary analyzers and users of performance data, and another for managers to share the results of their analyses with employees.

But it is a quite different thing for managers to include employees in the analysis of performance data, inviting them to understand their influence on performance measures, and involving them in setting performance goals based on performance reviews; these practices are characteristics of a data-savvy performance culture.

For this post, I reviewed a book titled The Power of open-book management: Releasing the true potential of people’s minds, hearts, & hands.  The “book” referred to in the title is the organization’s accounting system.  In open-book management, employees are taught to read and understand financial reports and participate in regular discussions of how the organization is doing financially.

My interest here is to explore what an open-book organization actually looks like at an operational employee level.  How does this idea actually work with financial data?  According to the authors, the foundation for open-book management is a solid initial phase of employee education on how to read and understand essential financial reports, such as a cash flow report and, in particular, an income statement that has sections for revenue, direct expenses, and indirect expenses.  This training can take one or more months depending on the extent to which the training goes beyond the study of financial reports to study market and industry economics, customer attitudes and behaviors, and the organization’s competitive strategy.

The operational heart of the open-book management method is the following:

  • Employee teams determine which numbers in the organization’s financial statement their work influences and how they specifically affect an income number or an expense number.  This process links employee performance to organizational performance.
  • Department-level and team-level forecasting plans are developed for budgeting purposes.  Sales and income-influencing teams develop their plans.  Expense-influencing teams do the same.  This is a form of “bottom-up forecasting.”
  • Teams and managers meet weekly to review the latest income, expense, and cash flow data, focusing in particular on the numbers they influence and whether those numbers are moving in the right financial direction (a simple sketch of this kind of weekly review appears after this list).  Forecasts and performance changes are sent to management and to the rest of the company.
  • Managers in higher positions, who are not close to the weekly fluctuations in the numbers, receive department and team reports and make decisions to maintain or adjust performance at their organizational level.  The weekly team reports are a type of “executive dashboard.”
  • Department and teams also meet monthly to stay informed about the overall financials of the organization and to share their updated performance plans.  At these meetings, organization-wide performance measures are reviewed.
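
As a purely illustrative sketch (the account lines and amounts below are hypothetical, not taken from the book), here is what the weekly forecast-versus-actual comparison mentioned above might look like for the handful of numbers one team influences.

    # Hypothetical weekly forecast-versus-actual comparison for one team's numbers.
    lines = [
        {"line": "Produce sales revenue",     "forecast": 52000, "actual": 54300, "type": "income"},
        {"line": "Overtime expense",          "forecast": 3800,  "actual": 4400,  "type": "expense"},
        {"line": "Shrink and damage expense", "forecast": 1500,  "actual": 1200,  "type": "expense"},
    ]

    for item in lines:
        variance = item["actual"] - item["forecast"]
        # A positive variance is favorable on income lines and unfavorable on expense lines.
        favorable = (variance >= 0) if item["type"] == "income" else (variance <= 0)
        direction = "favorable" if favorable else "unfavorable"
        print(f"{item['line']}: forecast {item['forecast']:,}, actual {item['actual']:,}, "
              f"variance {variance:+,} ({direction})")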

The authors explain that open-book management is done in the context of an organization-wide communication system that shares financial information with employees on an ongoing basis.  Staff meetings are held between managers and employees to review this information and keep everyone “in the know.”  Everyone knows which numbers they affect because they have been part of reporting those numbers.  They also know the numbers of others.  They know what the numbers mean.

Reference:  Schuster, J. P., Carpenter, J., & Kane, M. P. (1996) The power of open-book management: Releasing the true potential of people’s minds, hearts, & hands. New York, NY: John Wiley.

I invite your comments below or by email to mail@managingwithmeasures.com

 

A data-savvy culture in a factory

In my April 2014 post to this blog, I discussed the concept of a data-savvy culture, pointing out its distinctive general characteristics and describing its benefits.  In this post, I want to share an early version of a data-savvy culture, the visual factory, developed in the late phase of TQM at a time when interest in lean manufacturing and ISO standards was growing.

In his 1991 book, Michel Greif describes how a visual factory uses a variety of methods to present information to workers about the production processes that they are responsible for.  Here are examples:

  • Work process standards are displayed where the work is performed.  Workers are involved in developing standards and making suggestions for their modification.  Standards are not solely intended to define methods.  Their function is to inspire improvement as well as maintain quality.
  • Schedules for delivering components from inventory are posted on a board where the components are received; workers update the board daily to notify the warehouse when more components are needed.
  • On a wall chart, a worker records the results of product inspections throughout the day.  This allows collective monitoring because all workers in the group are continuously made aware of whether the group is performing acceptable work.
  • The status of machine maintenance is charted and displayed on each machine.
  • Problems with suppliers are posted and color coded on the wall in the office where meetings are held with suppliers.  Suppliers can see their performance as well as the performance of other suppliers.
  • When a serious problem cannot be resolved promptly on the production line, a worker records it on a board and raises a red flag on top of the board.  Everyone including management is aware that production has encountered a critical problem that needs special attention.
  • An employee group defined key parameters, selected indicators, defined rules for computation, and specified ways to display results. One data collection instrument was an evaluation form to be completed by a member of the group.  The form listed key inspection parameters in a work area, such as oil spots on the floor, extraneous inventory items in the area, order in the aisles, storage of spare parts in cabinets, neatness in the break area, and the condition of trash cans.

Not all of these methods of almost 25 years ago involve the type of organizational performance measures we talk about in this blog.  However, they all embrace the central idea of a data-savvy culture which says that if employees are given information about how their organization functions and performs, and are trained in how to collect, analyze and use that information, they can be empowered to work collaboratively to achieve or exceed performance goals.

Reference:  Greif, M. (1991).  The visual factory: Building participation through shared information.  Cambridge, MA:  Productivity Press.

I invite your comments below or by email to mail@managingwithmeasures.com

A data-savvy culture for performance measurement

A data-savvy culture is one in which all employees at every level are comfortable in the presence of performance measures.  They understand the benefits of measurement.  They pay attention to data and are committed to “moving the needle” as evidence of performance progress.  They trust management’s use of data.

To build trust, managers need to make the use of performance measures visible so that employees can see what is done with the data.  A performance data system can be closed (only managers see the data), semi-open (managers see the data and provide selective reports to employees), or fully open (employees and managers see the data).

In a fully-open and visible data system, employees have access to data files for review.  This access allows employees to evaluate organizational performance along with their managers.  It allows employee teams to take the initiative to review data relevant to their work and propose corrective action when needed.  It allows managers to involve their employees in performance reviews and to cultivate a “data-savvy culture.”

What does a data-savvy culture look like in the context of an open, visible performance data system?  Here are characteristics you might observe if you took a tour:

  • Employees and managers are able to discuss the performance measures related to their jobs.
  • Employee teams can explain how their department level data are related to organization-wide measures and how they use the data in performing their work.
  • Managers are meeting with their employees to review data reports, evaluate how their unit is doing, and plan short-term work priorities.
  • Employees are collaborating to maximize measured performance and “move the needle.”
  • When data suggest that performance is unsatisfactory in some area, the responsible employee team is given the first opportunity to determine how to respond.
  • Employees are reviewing trends in team performance data and recommending data-backed changes in how work is done.
  • Employees and managers are setting quantitative performance targets together.
  • Specially-chartered performance improvement teams are undertaking studies of data trends to identify ways to boost future performance.
  • Managers recognize and applaud the efforts of employee teams that meet or exceed management’s expectations.
  • Everyone shares pride of ownership in a high-performing organization.

Open access to visible data has several benefits.  Allowing access to performance data can encourage all personnel to pay attention to the “big picture” of organizational performance and understand how their particular responsibilities fit in.  Asking employees to review measurement data relevant to their own work can reinforce their sense of responsibility as “partners” rather than “hired hands” in achieving high performance.  Access to an open data system invites and permits a wider range of involvement by all personnel in achieving performance goals.

I invite your comments below or by email to mail@managingwithmeasures.com

(Still more on) Linking Employee Metrics and Operational Metrics

In my November 2013 post to this blog, I used a police department example to discuss how a measure of an employee’s performance could be linked to a measure of the department’s performance. The bottom line of my discussion was that linking these metrics is done by linking a performance objective at the employee level with a performance objective at the operational level. It is not simply a technical matter of connecting their scales. Stacy submitted a thoughtful comment to my post and I appreciate her interest. I want to respond to her comment here. She made two basic points, which, if I understand her correctly, are the following:

  • Employee performance cannot be measured, because measuring it would require that employee performance be independent of the work process the employee is part of, and this is never the case.
  • Measures are tools for people in their work, not rods for their backs.

First of all, I hope that I did not suggest or imply in any way that measures should be used as “rods” to be applied to the backs of employees. Stacy and I are in full agreement that measures are tools for people, both managers and employees, to use in their work, not instruments for motivating through fear.

So, let’s focus on her first point. In any type of system, human or mechanical, the whole is composed of parts that are connected. The performance of the whole depends on all the parts working together. Everything is connected. Everything is dependent. No part functions alone. I am saying in my own words what Stacy is saying. BUT, this does not mean that how well one part of the system is performing cannot be observed and analyzed, i.e., measured. Indeed, without monitoring the parts, there is no way to diagnose a problem when the whole is not performing well. An automobile engine depends on all its parts working together, no part operating on its own, but we can and do measure the performance of some of the parts—oil pressure, water temperature, tire pressure, etc. I do not agree that because a work process depends on its employees, monitoring individual employees is precluded.

In my November post, I gave examples of what this might look like in a police department. In one example, the department objective was to increase the understanding of residents in a neighborhood about how to help the police deter crime. Each officer was responsible for contacting a certain number of residents and providing them with information about how to be helpful to the police. Is it useful for the manager of the department to monitor each officer in this regard? I think so. Here is what it might look like:

  • Did an officer contact all assigned residents? (Measure 1: # of residents contacted)
  • Did the residents become better informed? (Measure 2: Survey of residents)
  • Do the residents become more helpful in practice? (Measure 3: # of tips, leads)
  • Is the safety of the neighborhood maintained or increased? (Measure 4: # of crimes)

Measures 1 and 2 are at the employee level and indicate individual officer performance. Measures 3 and 4 are at the department level and indicate the degree to which police and residents work cooperatively to reduce crime. The departmental objectives cannot be achieved unless the employee objectives are achieved. As Stacy suggests, everything is connected, but the connections are embedded in a continuum of cause and effect along which the performance of individuals can still be distinguished from the performance of the process.

As always, I invite your comments below or by email to mail@managingwithmeasures.com.

Performance measures and undesirable consequences

In my November 2013 blog post, I used a police department illustration to discuss how a measure of employee performance might be linked to a measure of operational performance.  That discussion received two comments from readers, which I posted to my blog that month.  I appreciate that they took the time to share their thinking with me.  This post addresses the first comment, from Ronell.  I will discuss Stacy’s comment in a future post.

Ronell said:  “Measures create incentives which can have desirable or undesirable results. Hence, one must be careful when selecting measures.  For example, a measure about “increasing convictions” may have the undesirable result of convicting an innocent person and overfilling the jail capacity.  The latter is happening in Williston ND.  What would be a better measure?”

Here are my thoughts in response to Ronell.

1.  In his example, Ronell mentioned two possible undesirable results from increasing convictions:  (1) convicting innocent people, and (2) overfilling the jail capacity.  He reports that the latter is happening in Williston, ND.  My thinking about Williston is that an overfilled jail is the result of poor planning, not measuring.  If you set out to increase convictions, you had better have a place to put all those you convict, just in case you succeed.

2.  Convicting innocent people is an undesirable outcome, but not an outcome of the measure.  It is my view, discussed in my book Managing with Measures: How to Choose the Right Performance Measures for Your Organization, that organizational performance is driven not by measures themselves, but by management expectations to achieve a result.  Ronell got it right in the phrase “increasing convictions” because this is a goal, not a measure.  Goals supported by plans drive organizational performance, not measures.

3.  On the other hand, IF management announces a target to increase convictions by a specific number in a specific period of time, AND in no uncertain terms makes it clear that employees are expected to achieve the number, AND the target is not accompanied by a plan or guidance on appropriate methods for achieving the number, AND employees get a subtle message from management to do whatever it takes, then employees might manipulate trial evidence in an attempt to achieve the target for fear of reprimand.  This would be wrong and unethical, an undesirable consequence of unprincipled management, not of measuring.

I invite your comments below or to mail@managingwithmeasures.com.