When is an apple not an apple?

So, when is an apple not an apple? The answer to this somewhat juvenile riddle is, of course, when it is an orange, pear, pomegranate, or any other type of fruit you care to mention.
    
This might seem a strange way to start an article that is ostensibly about benchmarking, but my objective is to demonstrate that the sentiment behind it is entirely germane to what can be one of the most powerful weapons in the facilities manager’s armoury.
    
Before developing this proposition, it is worth reviewing some of the things that benchmarking can be used for – and here I refer back to the BIFM’s Good Practice Guide to Benchmarking, which I was fortunate enough to help produce.

Potential uses of benchmarking
According to the Guide, benchmarking can be used to provide a baseline against which to measure service improvements (in terms of cost, process, quality, etc). It is a means of ascertaining the competitiveness of the services being provided when compared to an open market situation. And it is a means of demonstrating to the ‘parent’ organisation that facilities management (FM) services are being provided efficiently and economically.

Benchmarking can also be an alternative to a more resource-hungry (and hence more costly) re-procurement exercise (why go to the market if your services are already being provided at a competitive rate?). In the case of a ‘new’ building, it is a way of generating a realistic estimate of what the required FM services ought to cost for a given service quality. And when assessing FM tenders in an outsourcing or re-tendering scenario, benchmarking provides a methodology for assessing whether the tender returns are in line with open-market expectations and will deliver the required service level.
    
To those readers who are paying attention, it should be evident that there is a common theme running through the potential uses of benchmarking outlined above.

And that common theme is comparison, but not just comparison on any old basis. If we are to draw meaningful conclusions from a benchmarking exercise, then the comparison needs to be undertaken on a strict like‑for‑like basis. To achieve this we need to have two things in place: a classification protocol and a standard method of measurement.

Easy in theory
On paper, neither of these sounds overly difficult to ensure. However, years of FM experience have amply demonstrated to me that what sounds easy in theory is often a lot more difficult to achieve in practice.

In terms of a classification protocol, it is quite easy to come up with a generic definition of, say, services maintenance or cleaning and there are a number of ‘standard’ protocols available that can provide a good starting point. The devil is often in the detail, though. For example, where do you capture the cost of maintaining catering equipment? Some organisations include this with services maintenance whereas others will include it with catering.

Sticking with services maintenance, other ‘anomalies’ that you might come across include misunderstanding or confusion over terminology – for one of my clients it was not obvious that radiators should be included as part of the heating installation – and the inclusion of minor project works and plant replacements, which can seriously distort the cost of regular day-to-day maintenance activity.

Classification protocol
These particular issues sit alongside the more generic difficulties occasioned by accounting systems that have not been structured to allow detailed analysis of FM costs and, of course, that old favourite: the miscoding of invoices. In reality, the only way for the benchmarker to ensure that the classification protocol has been correctly applied is to access the base data and audit it themselves. This is not as arduous as it sounds; the advent of spreadsheets and databases has made data reallocation a relatively painless exercise.
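By way of illustration, the sketch below shows how such a reallocation might look in a few lines of Python. Every account code, category and cost figure here is hypothetical, invented purely to make the point; the mapping stands in for whatever classification protocol has actually been agreed.

```python
# A minimal sketch of reallocating raw cost lines to a standard
# classification protocol. All codes, categories and figures are
# hypothetical examples, not a real protocol.

# The agreed protocol: every raw account code maps to one benchmark category.
PROTOCOL = {
    "maint-mech": "services maintenance",
    "maint-radiators": "services maintenance",  # part of the heating installation
    "maint-catering-equip": "catering",         # the protocol must pick one bucket
    "cleaning-daily": "cleaning",
    "project-plant-replace": "excluded",        # minor project works distort day-to-day costs
}

# Hypothetical invoice lines: (account code, annual cost in pounds).
cost_lines = [
    ("maint-mech", 42_000),
    ("maint-radiators", 3_500),
    ("maint-catering-equip", 6_200),
    ("cleaning-daily", 28_000),
    ("project-plant-replace", 55_000),
    ("mystery-code", 1_200),                    # a miscoded invoice
]

totals: dict[str, float] = {}
unclassified = []

for code, cost in cost_lines:
    category = PROTOCOL.get(code)
    if category is None:
        unclassified.append((code, cost))       # flag for manual audit
    elif category != "excluded":
        totals[category] = totals.get(category, 0) + cost

print(totals)        # benchmark-ready totals per category
print(unclassified)  # lines the benchmarker must chase down
```

The value is not in the code itself but in the discipline it enforces: every cost line either lands in an agreed category or is flagged for the benchmarker to chase down.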
    
With regard to the method of measurement, it is fairly common to express ‘hard’ FM services (primarily maintenance) in terms of cost per square metre, whereas ‘soft’ FM (eg catering, reprographics) is typically expressed as cost per capita, per full-time equivalent (FTE) or per workstation.

Again, this sounds simple enough, but in reality there are all sorts of different square metres: gross external, gross internal, net lettable and gross occupiable, to name but a few.

Clearly, if one data set is being expressed in terms of gross internal area (GIA) and another by way of net internal area (NIA), then any comparison between the two is going to be entirely fallacious.
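The arithmetic behind this is easy to demonstrate. The sketch below uses invented figures – the cost and areas are hypothetical, chosen so that NIA comes out at roughly 87 per cent of GIA – to show how quoting the same cleaning cost against two different area bases manufactures an apparent gap of around 15 per cent, the sort of discrepancy described in the example that follows.

```python
# Hypothetical figures showing how mixing area definitions distorts
# a cost-per-m2 benchmark. NIA is always smaller than GIA.

annual_cleaning_cost = 100_000  # pounds, identical for both buildings
gia = 10_000                    # gross internal area, m2
nia = 8_700                     # net internal area, m2 (~87% of GIA here)

cost_per_m2_gia = annual_cleaning_cost / gia  # 10.00 per m2
cost_per_m2_nia = annual_cleaning_cost / nia  # ~11.49 per m2

# Two identical buildings appear ~15% apart purely because of the
# measurement basis, not because one is cleaned less efficiently.
apparent_gap = cost_per_m2_nia / cost_per_m2_gia - 1
print(f"apparent difference: {apparent_gap:.0%}")  # ~15%
```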
    
In a recent example of this, we were asked to benchmark the cleaning costs of two almost identical office buildings. Making allowance for some minor differences between the two, we were expecting the cleaning costs per m² to be broadly similar. However, those for the second building consistently came out 15 per cent higher than we were anticipating; it transpired that we had been given the GIA for one building but the NIA for the other.

Similar caution needs to be employed when benchmarking ‘soft’ FM services. For example, if ‘per workstation’ is the chosen measurement parameter, then we need to be specific about what sort of workstation we are referring to: is it in use eight hours a day or 24 hours a day, used as a hot-desk or for drop-in visitors? Any or all of these factors will affect the benchmark results generated.

Useful tool
So, to return to my theme: when used properly, benchmarking can be an extremely useful tool in helping the FM achieve demonstrable value for money in service delivery. As we have seen, though, care needs to be taken to ensure that the comparison is undertaken on a strict apples-for-apples, like-for-like basis. Too often, benchmarking results have been discredited when, on deeper inspection, it was found that conclusions had been drawn from apple-to-pear comparisons.
    
A final point: apple-to-apple comparison is fine when you are benchmarking at a relatively high level, but in some instances it will be necessary to take the investigation to a greater degree of granularity. In such cases, adherence to the classification protocol and method of measurement becomes even more critical, enabling comparison (to continue the fruit analogy) at a Granny Smith-to-Granny Smith or Cox’s Pippin-to-Cox’s Pippin level.

Mike Packham is a BIFM technical expert and partner at FM consultancy Bernard Williams Associates

Further information
www.bifm.org.uk
