Does Data-Driven Peer Benchmarking Really Work? Frankly – Nope!

When municipal governments conduct Value-for-Money operational reviews, those reviews almost always include a service delivery peer benchmarking analysis.  The analysis typically begins with the selection of apples-to-apples peer jurisdictions – featuring similar demographics, population densities, and service profiles.  Benchmarking proponents invariably claim that service delivery processes, unit costs, and quality outcomes can be meaningfully compared across the peers, and that “best practices” can be identified and emulated.  It sounds compelling, but of course the devil is in the details.  Allow me to elaborate.

In my municipal government career preceding the creation of Performance Concepts in 2001, I served as the original Project Manager for the Regional CAOs Benchmarking Initiative in Ontario – an exercise subsequently rebranded as OMBI.  My experience in the benchmarking trenches was telling: it took years to establish reasonably consistent accounting and data collection protocols for peer benchmarking.  Cost accounting and data comparability skirmishes were the rule rather than the exception across most service areas.  Participants never really adjusted their own “back home” financial accounting anomalies to meet the harmonized benchmarking definitions.  For instance, small capital spending activity such as crack sealing appeared in some roads operating budgets, while other municipalities insisted that all capital spending be segregated in capital projects accounted for outside the operating budget.  Year-over-year data trends were erratic across jurisdictions.  OMBI data is still blatantly non-comparable for some services ten years later.

There are virtually no documented cases of OMBI data being used to identify and document superior service delivery practices in one jurisdiction that were then emulated across others.  Emulation occurs; it just has very little to do with comparative data.  In my experience, municipalities prefer it when the respective data sets of peers all cluster within a non-threatening “herd” pattern where nobody stands out.  OMBI benchmarking has established a low-risk, don’t-rock-the-boat bureaucratic culture across the Expert Panels that are supposed to drive the exercise.

I have come to the conclusion that a given municipality will realize a much better ROI by focusing its limited performance measurement capacity and energy on measuring its own progress against itself over time – using its own consistent accounting and data collection systems.  More on this “measure against ourselves” model in a future blog entry.
