Benchmark Drives Behavior… Mitigating the Risk (Part 3)


This is the third article in the series on benchmarks and their impact on the behavior of those subject to them. In the first article we addressed how benchmarks can incentivize individuals to act in ways that are not in the best overall interests of an organization. In the second article we discussed specific examples where benchmarks created aberrant outcomes and linked those outcomes to how the benchmarks were derived, implemented, and monitored. In this article I change tack: rather than pointing out the risks and concerns and discussing where things went off the rails, I offer some tips on how to minimize the risk of problematic outcomes from benchmarks an organization already has in place or is creating and implementing.

The suggestions are by no means perfect; benchmarks will continue to have problems, and the objective, as stated, is simply to minimize those problems and their negative outcomes. Striving for excellence in performance and results must be measured and somehow linked to a benchmark or other measurement process. There will naturally be mistakes along the way, but working to minimize them is a laudable goal. As the saying goes, show me someone who has not made any mistakes and I will show you someone who has not done anything.

So, let's dive in and discuss some potential steps that can be taken to further enhance the efficacy of benchmarks and their use. As part of this third article in the series, I have selected several thought leaders and writers who have dedicated time and effort to creating effective benchmarks, and I have summarized and referenced their approaches. There are many more approaches available, but I wanted to demonstrate the level of thought that has already gone into creating effective benchmarks.

As noted, when one starts looking into benchmarks and how they are developed, it becomes apparent that a significant amount of work has already been done in this area, and it can quickly become confusing. For example, Lucidchart’s Lucid Content Team has a document referred to as the eight steps of the benchmarking process, Pradeep Kumar Mahalik of iSixSigma describes a 10-step approach, Jeffery Berk of workforce.com describes a 6-step approach, and Mercy Harper describes a 4-step model. All of these approaches provide a solid basis for a benchmarking exercise, have much to commend them, and are a great starting point. The frameworks identified above, and elaborated on below, set out leading practices for developing benchmarks, but their focus is not on the “what could go wrong” (WCGW) risks. A suggested enhancement to the process, from my point of view, is a greater focus on the inherent risks arising from the WCGW factors that organizations and individuals potentially face when setting benchmarks.

Details on the Four Selected Sample Frameworks

The 10-Step Approach

Pradeep Kumar Mahalik of iSixSigma 1 2 describes a four-phase approach, broken into the 10 sub-steps set out below, to arrive at a suitable benchmark.

Planning Phase

  • Identify opportunities and prioritize (what to benchmark)
  • Decide on the benchmarking organization (whom to benchmark)
  • Study the superior process

Analysis Phase

  • Find the reasons and devise improved processes
  • Set goals for the improved processes

Integration Phase

  • Communicate findings and gain acceptance
  • Establish new functional goals

Action Phase

  • Develop action plan for implementation
  • Implement specific actions and monitor progress
  • Keep the process continuous

The 8-Step Lucid Approach 3

While conceptually very similar, the Lucid approach is a little more compressed than the previous example.

  • Select a subject to benchmark.
  • Decide which organizations or companies you want to benchmark.
  • Document your current processes.
  • Collect and analyze data.
  • Measure your performance against the data you’ve collected.
  • Create a plan.
  • Implement the changes.
  • Repeat the process.

The 6-Step Approach

Jeffery Berk of workforce.com 4 5 came up with “the 6 benchmarking steps that you need”.

These steps are stated as being:

  • Step One: Select the process and build support.
  • Step Two: Determine current performance.
  • Step Three: Determine where performance should be.
  • Step Four: Determine the performance gap.
  • Step Five: Design an action plan.
  • Step Six and beyond: Continuously improve.
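
To make Steps Two through Four a little more concrete, here is a minimal illustrative sketch in Python. The metric, figures, and variable names are hypothetical and are not drawn from Berk's article; the point is simply that the performance gap is the difference between where performance should be and where it currently is.

    # Hypothetical illustration of Steps Two to Four; all figures are invented.
    current_on_time_delivery = 0.88    # Step Two: where performance currently is (88%)
    benchmark_on_time_delivery = 0.95  # Step Three: where performance should be (95%)

    # Step Four: the performance gap is the shortfall against the benchmark.
    performance_gap = benchmark_on_time_delivery - current_on_time_delivery

    print(f"Performance gap: {performance_gap:.0%}")  # prints "Performance gap: 7%"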

The 4-Step Approach 6

Finally, the work done by Mercy Harper 7, published November 13, 2019, provides a good general framework for assessing the types of benchmarks that organizations can use. In her blog, Ms. Harper identified and defined four types of benchmarks that I feel could be a useful starting point for a benchmarking exercise. She started by breaking benchmarks into four distinct categories:

  • performance
  • practice
  • internal
  • external

Four steps, six steps, eight steps, ten steps, and several other models we have not even considered. These differing approaches may appear complicated, but the basic premise of mitigating benchmark risk by focusing on WCGW often appears to have taken a back seat. Despite the variety of benchmarking models, the model used to develop benchmarks is not the issue. I believe the risk lies in not executing the chosen process or model in enough depth, and in not considering the totality of the risks when deciding which benchmarks to develop and adopt. With each of these models we should consider how to incorporate an enhanced level of scrutiny of the downside risks of benchmarks. This is a risk mitigation step to limit, or at least help identify early, the issues that might let benchmarks go off the rails. As pointed out previously, there are many more models out there, and regardless of the model selected to develop a benchmark, it is essential to seriously consider the WCGW risks.

To this end, I have set out some additional or enhanced steps that focus more on the downside-risk component of benchmarking exercises. Some of the suggestions below may already be covered, implicitly or even explicitly, by the frameworks above; regardless, I believe there should be more focus on what could go wrong when developing new benchmarks or assessing existing ones. As noted earlier, those frameworks set out leading practices for developing benchmarks, but their focus is not on the WCGW risks. The suggested enhancement places greater focus on the inherent risks and WCGW scenarios organizations face when setting the benchmarks that are meant to drive their success.

These additional or enhanced steps are not intended to reinvent the wheel with respect to creating benchmarks, but to enhance existing processes by helping the individuals who create benchmarks think more broadly, and by encouraging them to use the WCGW methodology together with pre-mortems.

A pre-mortem works from the assumption that the patient has already died, or that things have already gone wrong, with a view to building better safeguards. It is a great enhancement to the WCGW methodology when used in tandem with it. Considering benchmarks in the context of the organization's raison d'être, and linking them back to mission statements and similar documents, will also help ensure that benchmark creators are aware of the bigger picture.

So, what are some suggestions for minimizing the risks inherent in benchmarking? This is, in my view, a critical issue with respect to achieving desired outcomes. I also stress that in dynamic environments there is a need to constantly reassess benchmarks to ensure they are still achieving what they were intended to achieve when first implemented. The suggestions below can be used for both existing and new benchmarks.

The suggestion is that, in addition to using one of the frameworks for creating benchmarks outlined above, the process should also include the steps bulleted below to further enhance the integrity and validity of the benchmarks being used. A few suggestions:

  • Establish a clear objective for what the benchmark is supposed to achieve, and put it in writing. Provide it to all the groups listed below, and to all parties that will be impacted by the benchmark, together with the explicit question: how can this benchmark be manipulated?
  • Link the benchmarks back to the corporate mission statement and code of conduct so that there is a clear line of sight between those key documents and the proposed benchmarks.
  • Conduct an inventory of the stakeholders who will potentially be impacted by the benchmark and ensure those individuals have input into the process. That input should include sessions to determine, at a minimum, how the benchmark will impact them, the risks of gaming it, the ability to adjust it in the future, and so on. Ultimately, those measured by the benchmark may not agree with it or be entirely comfortable with it, but they must believe it is fairly administered.
  • Ensure that a proper tracking, monitoring, and measurement system is in place so that the benchmark is used as intended and gaming is minimized (gaming should result in consequences). Equity and fairness need to be front of mind here.
  • If possible, pilot the benchmark in a part of the organization that provides adequate representation, to iron out the difficult areas and facilitate acceptance upon full implementation.
  • Consider an independent review of the benchmark by internal audit, with a focus on the controls surrounding the benchmark and the downside risks. This should be done both before and after roll-out.
  • Have compliance review the benchmark for the risks that it might drive from a compliance perspective.
  • Conduct a pre-mortem of the benchmark to facilitate the identification of downside risks.
  • Create a schedule for regularly reviewing the benchmarks in place to determine whether they are still adequate for achieving the organization's objectives or whether they need updating.
  • Determine whether, because of changes in the operating environment, new downside risks relating to existing benchmarks have emerged.
  • Ensure that the metrics produced by the benchmarks, and the ability to track them, are driving the desired outcomes and are transparent and equitable.
  • Ensure controls around the benchmarks are robust and properly monitored.

It is acknowledged that some of these steps may not be practical for certain organizations or entities, but at a minimum the WCGW process should be implemented, and there should be robust controls surrounding benchmarks covering not just their creation but also the monitoring, reassessment, and tracking of existing benchmarks. It is not being asserted that this process must be applied to every tracking mechanism an organization has in place, but it should be considered for those benchmarks that are key to long-term organizational objectives and those being used to incentivize behaviors tied to the organization's long-term success.

Conclusions

As noted in the previous articles, there is, and always will be, a requirement to benchmark. It is not just a part of business but a standard feature of everyday life. In the first article I quoted both Einstein and Drucker and noted that you get what you measure, sometimes to the exclusion of all else.

Einstein said, “Not everything that counts can be counted, and not everything that can be counted counts.” 9 That is something we must keep in mind when creating benchmarks. Drucker was purported to have stated that “you can’t manage what you cannot measure,” 10 although this quote appears to have been inaccurately attributed to him.

In conclusion, this reinforces the need to make sure that what we measure drives the organization forward in accordance with its mission statement and the expectations of its stakeholders, and does not drive aberrant outcomes.

It is my opinion that, whilst we have a solid library of benchmark creation processes to choose from, when benchmarks fail and/or cause aberrant behavior it is generally attributable to these factors:

  • A failure to recognize the downside risks of what could go wrong in the development stage,
  • A failure to properly monitor the benchmarks that are being utilized,
  • A failure to regularly reevaluate the benchmarks as the world changes,
  • A failure on the part of individuals to understand the purpose of the benchmark,
  • A risk that those being benchmarked pursue their benchmark to the exclusion of all else.

I hope that these articles have helped identify some of the risks that individuals and organizations face when they are measured against a benchmark. All of us collectively have an obligation to consistently question what could go wrong with a benchmark whilst simultaneously using benchmarks to help drive our organizations to the next level. I hope you have gained some additional insights from this series of articles and enjoyed reading them as much as I enjoyed writing them.

Guido van Drunen 