Maintenance Management Auditing (Benchmarking)
by Anthony Kelly
Published by Industrial Press Inc.

This book shows how Business Centered Maintenance (BCM) methodology can be used to audit and improve the management systems of industrial maintenance departments. Industrial managers will be better able to audit their own maintenance departments themselves, or to direct and interface better with audits by external consultants.
 

Benchmarking, Benchmarks and Performance Indices

 

Introduction

 

I indicated in the previous chapter that, at the analysis stage of an audit, 'benchmarking' may usefully be employed to compare the company's performance with industry best practice. Before moving on to the case studies section of the book we will therefore describe this activity, especially as it is applied to maintenance, in more detail.

 

Benchmarks and Benchmarking

 

The dictionary definition of the benchmark is:

 

A surveyor's mark, cut in a rock, to indicate a reference point in a line of levels for the determination of altitudes over the face of the country.

 

This is the original, land surveying, definition of the benchmark—a reference point for subsequent measurements. In the 1970s this idea was interpreted in the world of industrial management as follows:

 

Benchmarking is the search for industry best practices that lead to superior performance.

 

The benchmarking procedure involves the following main steps:

 

  1. Identify that division of your own organization (e.g., the maintenance department) whose management and/or performance is to be benchmarked.

  2. Identify a best-performing counterpart, elsewhere but in the same industrial sector, that can be used as a standard of comparison.

  3. Identify the practices, structures and systems that best describe the operation of the division under study.

  4. Identify the benchmarks that can be used to quantify and measure the management and/or performance of the division under study.

  5. Identify the practices, structures and systems that best describe the operation of the standard of comparison.

  6. Use the identified benchmarks to compare the management and/or performance of the division under study with that of the 'standard of comparison' and identify the reasons for any observed shortfalls or performance gaps.

  7. In the light of the findings from Step 6, propose an action plan for improving the practices in the division of your own organization which is under study.
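
As a minimal illustration of Steps 4 and 6 above, the sketch below (Python, with entirely hypothetical benchmark names and values) compares a division under study against its standard of comparison and reports the performance gaps.

```python
# Minimal sketch of benchmark comparison (Steps 4 and 6).
# All benchmark names and values are hypothetical illustrations,
# not figures from the book.

# Benchmarks for the division under study (e.g., our maintenance department)
own = {
    "maintenance cost per unit output": 12.0,   # $/tonne
    "planned work (% of total)":        55.0,   # %
    "availability (%)":                 88.0,   # %
}

# Benchmarks for the best-performing counterpart (standard of comparison)
standard = {
    "maintenance cost per unit output":  9.0,
    "planned work (% of total)":        85.0,
    "availability (%)":                 94.0,
}

# Step 6: compare and identify performance gaps
for name, own_value in own.items():
    best_value = standard[name]
    gap = own_value - best_value
    print(f"{name}: own={own_value}, best={best_value}, gap={gap:+.1f}")
```

Each gap found in this way would then be traced back to differences in practices, structures and systems (Step 6) before an action plan is proposed (Step 7).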

 

1 C. E. Bogan and J. M. English, Benchmarking for Best Practice, McGraw Hill, 1994.

  

A  generic model of a benchmarking process proposed by Camp2 is shown in Figure 4–1.

FIGURE 4–1 Steps in Generic Benchmarking Process

 

One of the most publicized applications of benchmarking in maintenance was carried out by DuPont Chemicals which, in 1986, benchmarked sixteen of its chemical plants against an equivalent number of industry leaders. It showed that the company's maintenance cost was up to 30% higher than that of the best comparative companies. In addition, DuPont employed a 20% larger trade force, 50% fewer planning personnel and 15% fewer maintenance support staff. The company accepted that there was a performance gap and set in place an improved, cost-reducing, maintenance strategy (see Figure 4–2).

FIGURE 4–2 Improvement in Maintenance Cost as a Result of Benchmarking

 

Using the Audit Procedure for Benchmarking

 

The auditing procedure outlined in Chapter 3 provides a means not only of modelling and mapping a given maintenance department but also of identifying the appropriate benchmarks for comparing that department against the best of its kind in industry, i.e., for 'benchmarking' it (see, for example, the Key Performance Indicators (KPIs) identified in the audit aide-memoire, Appendix 1). The following are brief descriptions of applications of the latter kind.

FIGURE 4–3 International Comparison of Power Station Maintenance Benchmarks

 

  1. Benchmarking the Maintenance Departments of Coal-Fired Power Stations

TABLE 4–1 Extract from an International Comparison of Power Station Benchmarks

 

A postgraduate project was aimed at benchmarking the maintenance departments of base-load coal-fired power stations operating steam-driven 300 MWe turbo-alternators.4 Fourteen stations were benchmarked world-wide, using the fingerprint audit technique (see page 155). Key benchmarks were used to compare performance (see Figure 4–3 and Table 4–1). In addition, information was obtained on the key maintenance practices, structures and systems in use at each station. Figure 4–4 shows an extract from the tables of overhaul cycles (maintenance strategy practice), Figure 4–5 an extract from the modelling of the organizational structures. Similar comparative models were assembled for the resource, work planning and documentation structures and systems.

The conclusion from this simple application of benchmarking was that caution is needed in interpreting some of the top level key benchmarks as indicators of maintenance performance. For example, a possible interpretation of the data displayed in Figure 4–3 was that Station M scored highest in terms of maintenance productivity. On closer investigation, however, it was found that the main reason for this apparent superiority was that the designers of this station had selected the most reliable and maintainable plant. That is not to say that the benchmarking information was not important; it could be used to influence decisions regarding future plant acquisition, for example. The maintenance practices at Station M were by no means the 'best of the best', however.

FIGURE 4–4 Extract from an International Comparison of Power Station Overhauls

 

This result is not untypical of my experience in trying to use performance indices to benchmark best practice. Even when comparing the maintenance of power stations of roughly the same size there are many differences—in detailed design, technology, manufacture, age—that exercise greater influence on the indices than do such aspects as the maintenance life plan or work planning systems. Normalization techniques can be used to compensate for some of these differences, but care must still be exercised when evaluating the results.
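
One simple normalization, for example, is to express maintenance cost and labour per MW of installed capacity (or per MWh sent out). The sketch below (Python, with invented station data) illustrates the idea; it does nothing, of course, to compensate for differences in design, technology or age, which still have to be argued case by case.

```python
# Hypothetical normalization of power-station maintenance benchmarks.
# The raw figures are illustrative only, not data from the study described above.

stations = [
    # name, installed capacity (MWe), annual maintenance cost ($), annual maintenance man-hours
    ("Station A", 2 * 300, 14.0e6, 210_000),
    ("Station B", 4 * 300, 30.0e6, 520_000),
    ("Station M", 2 * 300,  9.5e6, 130_000),
]

for name, capacity_mw, cost, man_hours in stations:
    cost_per_mw = cost / capacity_mw          # $ per MW installed
    hours_per_mw = man_hours / capacity_mw    # man-hours per MW installed
    print(f"{name}: ${cost_per_mw:,.0f}/MW, {hours_per_mw:,.0f} man-hours/MW")
```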

 

Nevertheless, much information that was useful for the sponsoring company came out of this exercise. They were able to see clearly that they were overmanned and would need to improve inter-trade and operator-maintenance flexibility, and reduce the number of first line supervisors on site (via empowerment). In addition they could see that the organizational trend (in particular in the USA) was towards contractor alliances.

 

2. Benchmarking Operator–Maintainer Teams

 

I recently carried out a full audit of maintenance at a chemical plant (Fertec B Ltd), see page 89. For several years the company had been using operator–maintainer teams. Self-empowered and plant-oriented, they were expected to both operate and carry out simple first line maintenance tasks for their own area of plant.

 

The audit revealed that the teams were not working well. However, a sister plant within the same organization (Cario Ltd) was reported to be using such teams successfully. I decided to carry out a simple benchmarking exercise, to compare the practices of the teams at Fertec with those of the teams at Cario.

 

2 R. C. Camp, Benchmarking, American Society for Quality, 1989.

3 B. Holmes, Benchmarking Best Practice in Maintenance, EIT Maintenance Conference, Australia, 1997.

4 K. Gallagher, MSc Thesis, University of Manchester, 1991.

 

FIGURE 4–5 Extract from an International Comparison of Power Station Administrative Structures

 

Fertec B Ltd was an integrated chemical complex made up of six plants. The administrative structure was de-centralized and each plant was considered as a distinct manufacturing unit. Figure 4–6 shows the administrative structure for one of the main plants, Figure 4–7 the corresponding resource structure. The self-empowered teams had been introduced some five years earlier using the conventional wisdom of the time (see, for example, Figure 4–8 for the guidelines for moving from traditional supervision to self-empowerment). A Team Manager (see Figure 4–6) was brought in, mainly to help to resurrect the training process in the operator teams. The following were some of my principal observations regarding the team operations:

 

25% of the process team members were ex-tradesmen.

 

The process teams undertook no first-line maintenance tasks despite this being a part of their responsibilities.

 

The process teams seemed to be a law unto themselves, with a wholly negative human factors impact.

 

The ratio of managers, planners and facilitators to ‘on-the-tools’ technicians in the maintenance teams was 2.8 to 1.

 

The planner was introduced when a new computer system was installed; it was regarded as user unfriendly and the associated training was poor.

 

The maintenance teams did not operate as self-empowered teams; they had reverted to the traditional structure (where the facilitator is, in effect, the supervisor).

 

The facilitator and planner positions were permanent appointments.

 

There was a genuine confusion over the roles of the planner, facilitator and some of the technicians. In addition, there was no clear understanding of the roles of the plant engineer and process engineer and their relationships to the teams. Job descriptions were not used or were not available.

 

The teams did not monitor themselves, nor did they get involved in continuous improvement activities.

 

Flexibility between mechanical trades, and between instrument and electrical trades, was good. Demarcation remained strong, however, between the two technological cultures (mechanical and electro-instrumentational).

 

The maintenance technicians were rotated, on a two-yearly basis, around the teams of the different plants.

 

There was no rotation between maintenance and process technicians.

 

Maintenance technicians were on an annualized-hours scheme; the process technicians were not (which didn’t help co-ordination when work was required out of hours).

 

The plant was some thirty years old and gave rise to a great deal of first line high-priority maintenance. Because of their involvement with this, the maintenance teams (on days) only carried out 50% of the planned work per period, and it was often the preventive routines that, as a result, were omitted.

 

Out-of-hours emergency work was covered by a call-out system that was regularly used.

 

Team-working had been introduced into a 'brownfield' site where there was a considerable history of industrial relations problems. There was no human factor profiling in the selection of team members. Considerable training was provided at first but rapidly fell away.

 

 

FIGURE 4–6 Administrative Structure, Ammonia Manufacturing Unit

 

 

FIGURE 4–7 Resource Structure, Fertec B Ltd

 

FIGURE 4–8 The Five Steps from Traditional Supervisor to Self-Empowered Team with Facilitator

 

Cario Ltd was a similar company to Fertec B, but only six years old. Its administrative structure is shown in Figure 4–9 and its resource structure in Figure 4–10.

 

As regards its team operations:

 

Some 40% of the process teams were ex-tradesmen.

 

The process teams carried out minor preventive work (lubrication, inspection) and small emergency corrective jobs.

 

The process teams (on shifts) had a good relationship with the maintenance teams and human factors were mainly positive. They were the highest paid of the shop floor workers.

 

In each team there was a planner, team-selected every three months, who spent little time on the tools. Overall, the ratio of planners to on-the-tools tradesmen was 1 to 5.

 

The planners and tradesmen had been trained up to a high level of competence in the use of the computer system—which was far from being the most advanced of its kind and was not installed enterprise-wide (it was user friendly, however, and was therefore used).

 

The maintenance and process teams were self-empowered, accountability for duties and responsibilities being shared across each team.

 

The two planners (electrical and mechanical) worked out of the same office. In effect, the planners became the facilitators and worked closely with the day-shift Production Planner (PP) to co-ordinate plant outages etc.

 

The PP was volunteered from shifts, for a one-year period and without losing shift allowance, being replaced on shifts by a maintenance technician (with the necessary experience), who in turn was replaced by a contractor. This exchange helped to break down the barriers between Maintenance and Production.

 

The roles of the team, Planner and Engineer had been clearly identified and written up as job descriptions. The Engineer considered himself to be very much part of the team.

 

The teams spent a proportion of their time on design-out maintenance and improving life plans, these tasks being carried out in conjunction with the Engineer.

 

The electrical and mechanical teams worked separately but gave each other considerable assistance when needed.

 

The payment scheme was the same for both the maintenance and the process teams.

 

The plant was six years old and gave rise to very little high-priority maintenance.

 

95% of the teams' workload was planned. In fact, one of their objectives was to have 'no call-outs, ever!'.

 

The teams had been formed during the commissioning phase of the plant. Profiling of skills and human factors was used to select the team members.

 

FIGURE 4–9 Administrative Structure, Cario Ltd Explosives Plant

 

FIGURE 4–10 Resource Structure, Cario Ltd Explosives Plant

 

Once the teams had been set up, a group from the company (including team members) were given the equivalent of six months full-time team-training at a similar site overseas that was regarded as a world benchmark.

 

Comment

This was not a full benchmarking exercise. Only a limited number of benchmarks were used, viz.:

 

percentage of tradesmen in operator–maintainer teams,

 

ratio of planner–facilitators to ‘on-the-tools’ tradesmen,

 

maintenance team planned work (expressed as a percentage of each period),

 

maintenance team emergency work (expressed as a percentage of each period).

 

The fingerprint audit of the Cario teams was limited to just over half a day's work; nevertheless, the gap between the performance of the teams at Cario and that of the teams at Fertec was clear.
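
The gap is easy to see simply by placing the figures quoted above side by side; a minimal sketch follows (the two 'planned work' figures are not defined identically in the text, as noted in the comments, and the emergency-work percentages are not quantified, so they are omitted).

```python
# Figures quoted in the Fertec B / Cario comparison above.
# Note: the Fertec ratio (2.8 : 1) includes managers, planners and facilitators,
# whereas the Cario ratio (1 : 5) counts planners only; and Fertec's "50%" is the
# fraction of planned work actually completed each period, while Cario's "95%" is
# the fraction of total workload that was planned.

benchmarks = {
    #                                      Fertec B   Cario
    "ex-tradesmen in process teams (%)":   (25.0,     40.0),
    "planner-facilitators per tradesman":  (2.8,      1 / 5),
    "planned work indicator (%)":          (50.0,     95.0),
}

for name, (fertec, cario) in benchmarks.items():
    print(f"{name:38s} Fertec B: {fertec:6.2f}   Cario: {cario:6.2f}")
```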

 

Although much can be learned from this exercise about how best to set up teams, this was not the problem Fertec B faced. Their process teams were not working well and were refusing to carry out first line work. Consequently, the day maintenance teams were carrying out all such work, which was disrupting their planned work. The preventive work was being neglected, causing more corrective work and a downward spiral in plant condition. We recommended that a key action would be to insist that the process teams carried out first line work. This would need a careful study of the process teams' work profiles and a resurrection of the skills training. It might also be necessary to re-structure the teams, perhaps removing the 'bad apples'; all of which might have worsened industrial relations, which was the reason these actions had not been taken sooner. An additional recommendation was that the duties, responsibilities and accountability of the teams, and their relationships with other members of the administration, should be clearly defined.

 

In the longer term other improvements could have been made, e.g., bringing Operations and Maintenance into the same payment system; improving computer training so that the facilitator does the planning; etc.

 

The Uses and Limitations of ‘Universal Maintenance Performance Indices’ in Maintenance Auditing

 

In general, the conventional audit procedure does not involve comparing the audited plant directly with a 'best practice' plant. Instead, at the analysis stage of the audit the practices of the plant under study are compared against management principles and guidelines, standard models, the auditors' own experience of best practice and published or unpublished 'maintenance management performance indices': benchmarks by any other name. I call these 'universal benchmarks.' It will be instructive to examine them in more detail.

 

Several index-based approaches to measuring maintenance performance have been developed. These published methods were developed for use in controlling the maintenance effort (setting objectives, measuring performance and correcting as necessary—see page 23) rather than for inter-firm comparison. Industrial examples of the use of these methods for either control or inter-firm comparison are hard to find.

TABLE 4–2 Organizational Efficiency KPIs—Eastman Ectona

 

I have come across a number of unpublished methods for inter-firm comparisons. Table 4–2, for example, has been extracted from Eastman Ectona's indices for measuring organizational efficiency.7 The indices have been developed for a specific type of chemical plant and process. Table 4–3 is an extract from the Fluor Daniel 'best-of-the-best' maintenance benchmarks.8 There are 24 benchmarks in total, some of which, e.g., Availability, have been measured for different types of processes. Fluor Daniel emphasized that they could identify the best practices that go along with the top quartile benchmarks. Tables 4–4(a) and (b) have been extracted from the results of a world-wide benchmark study of ammonia plants of a similar size.9 A UK consultancy company has carried out 'self assessment' audits of maintenance management for many years and has built up a considerable benchmark database, although this has not been published.

 

TABLE 4–3 Table of Quartiles for 'Best of Best' Benchmarks by Fluor Daniel

 

In general, benchmarks such as these are not sufficiently well-defined for general use. The ammonia plant benchmarks, however, are sufficiently specific to industry, process and plant size to be of use if it were possible to get hold of a full set (with the qualifying assumptions). The Fluor Daniel benchmarks are only usable if that consultancy company itself carries out the benchmarking exercise. In addition, they need clearer definition and greater orientation to the type of industry being audited.
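
The arithmetic of placing a plant against quartile benchmarks such as those in Table 4–3 is itself straightforward; the value lies in the underlying database and in consistent definitions. A minimal sketch, with invented peer data, might look as follows.

```python
# Hypothetical illustration of placing one plant's benchmark against peer quartiles.
# The peer values and the plant's own figure are invented for this sketch.

import statistics

peer_availability = [86.0, 88.5, 90.2, 91.0, 92.3, 93.1, 94.0, 95.5]  # % availability of peer plants
own_availability = 91.5

q1, q2, q3 = statistics.quantiles(peer_availability, n=4)  # quartile boundaries
print(f"Quartile boundaries: Q1={q1:.1f}, median={q2:.1f}, Q3={q3:.1f}")

if own_availability >= q3:
    band = "top quartile"
elif own_availability >= q2:
    band = "second quartile"
elif own_availability >= q1:
    band = "third quartile"
else:
    band = "bottom quartile"
print(f"Own plant availability of {own_availability:.1f}% falls in the {band}.")
```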

 

TABLE 4–4(a) International Benchmarking Study of 1000 tpd Ammonia Plants — Output Factors (extract)

 

TABLE 4–4(b) International Benchmarking Study of 1000 tpd Ammonia Plants—Cost of Production Factors (extract)

 

5 A. K. S. Jardine, Operational Research in Maintenance, Manchester University Press, 1970.

6 C. C. Rostad and P. Schjoberg, Key Performance Indicators, Euromaintenance Conference, Gothenburg, 2000.

7 I. Bendall, Maintenance Control, paper given on a short course at Manchester School of Engineering, 1999.

8 Fluor Daniel, Best of Best Maintenance Benchmarks, internal paper, 1998.

 

 

Some Thoughts on Developing Maintenance Benchmarks

 

Not enough thought has gone into identifying and ranking benchmarks that adequately measure the performance of the maintenance department. They should be based on the maintenance objective and on the derived hierarchy of objectives shown in Figure 1–5. Figure 4–11 shows this model extended and developed into a hierarchy of performance indices. The indices indicated at the lowest level in Figure 4–11 could be developed further via consideration of each of the sections of the Appendix 1 aide-memoire. When developing indices for comparing one firm with another it should also be borne in mind that:

 

the indices may well need further separate definition for different types of industry or process;

the indices shown in Figure 4–11 have been developed for the purposes of maintenance control ('Are we getting better/worse, and, if so, why?') and may need modification.

FIGURE 4–11 Hierarchy of Maintenance Performance Indices

 

The highest level indices are the KPIs, but these are much more difficult to define than those at lower levels. In the power generation industries, for example, I have seen a maintenance productivity index (MPI) computed over a period extending at least from one major overhaul to the next. In the food processing industry, however, an equipment replacement value (ERV) features strongly in the performance indices.
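
Purely as an illustration of the conventional forms that such indices take (these are assumptions about typical industry practice, not necessarily the author's exact definitions), they might be written as:

```latex
% Illustrative (assumed) forms of the two indices discussed above.
\[
\text{MPI} = \frac{\text{output sent out over the period (e.g., MWh)}}{\text{total maintenance cost over the same period}}
\]
\[
\text{Maintenance cost ratio} = \frac{\text{annual maintenance cost}}{\text{equipment replacement value (ERV)}} \times 100\%
\]
```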

 

Indices of both kinds present problems. In the former case the period of comparison has to be at least six years if it is to allow adequately for the various lengths of time between overhauls; in the latter the ERV is extremely difficult to define in a way that can be used consistently for inter-company comparisons. The best way of measuring the higher level indices may well be to profile the lower level ones. The benchmarks suitable for profiling the effectiveness of the maintenance of a production-limited ammonia plant, for example, could be those in the following list—

 

Time between major planned shutdowns.

 

Shutdown duration.

 

Downtime between shutdowns.

 

Shutdown duration overrun.

 

Overall availability.

 

Shutdown budgeted cost.

 

Shutdown overcost.

 

Cost of maintenance between shutdowns.

 

Wastage cost due to incidence of inadequate product quality.

 

During shutdowns, planned work as a percentage of total work.

 

Between shutdowns, planned work as a percentage of total work.

 

Between shutdowns, preventive work as a percentage of total work.

 

List of CBM techniques in use.

 

Equipment integrity index (safety).
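
To make a profile of this kind usable for inter-plant comparison, each benchmark needs a consistent definition and unit. A minimal sketch of how such a profile might be recorded follows (the field names and sample values are hypothetical, not data from the book).

```python
# Hypothetical record for profiling the effectiveness benchmarks listed above.

from dataclasses import dataclass, field

@dataclass
class ShutdownProfile:
    interval_months: float        # time between major planned shutdowns
    duration_days: float          # shutdown duration
    overrun_days: float           # shutdown duration overrun
    budgeted_cost: float          # shutdown budgeted cost
    overcost: float               # shutdown overcost

@dataclass
class MaintenanceEffectivenessProfile:
    shutdown: ShutdownProfile
    downtime_between_shutdowns_hours: float
    overall_availability_pct: float
    cost_between_shutdowns: float
    quality_wastage_cost: float
    planned_work_in_shutdowns_pct: float
    planned_work_between_shutdowns_pct: float
    preventive_work_between_shutdowns_pct: float
    cbm_techniques: list = field(default_factory=list)  # condition-based maintenance techniques in use
    equipment_integrity_index: float = 0.0              # safety-related index

# Example with invented values:
profile = MaintenanceEffectivenessProfile(
    shutdown=ShutdownProfile(24, 28, 3, 4.2e6, 0.3e6),
    downtime_between_shutdowns_hours=120,
    overall_availability_pct=93.5,
    cost_between_shutdowns=6.1e6,
    quality_wastage_cost=0.4e6,
    planned_work_in_shutdowns_pct=97,
    planned_work_between_shutdowns_pct=80,
    preventive_work_between_shutdowns_pct=35,
    cbm_techniques=["vibration monitoring", "oil analysis"],
    equipment_integrity_index=0.95,
)
print(profile.overall_availability_pct)
```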

 

Copyright © 2006 Industrial Press Inc.

 
