Deviant engineering

I have just finished reading a dense but interesting book on the 1986 Challenger space shuttle disaster [1].  In a nutshell, the craft was destroyed 73 seconds after launch by a seal failure on one of the solid rocket boosters.  There had been concerns about the seal for years and it had been the subject of intense investigation.  By adhering strictly to rigorous NASA procedures for multi-stakeholder review, the many people involved were genuinely persuaded (with a couple of exceptions) that the shuttle was safe to launch.  And the couple who had doubts were not sufficiently convinced of their reservations to overcome the cultural constraints on speaking up.

The conclusion of the book was that none of the individuals were to blame, nor was it easy to blame the NASA system that had been deliberately set up to encourage four levels of rigorous review through a semi-adversarial system with a great reliance on robust science and engineering.  The net effect was that deviance (off-spec performance of the seals) had become normalised.  This conclusion is in contrast to the commonly held view that the engineers and managers involved made a calculated and immoral decision to accept the risk in the interests of furthering NASA’s goals.

Reading some of the quotes from the various enquiries, you can sense both the honesty and anguish of key players who never dreamed that they were accepting the risk that led to the disaster.  If that sounds implausible it is because it is hard to summarise such a complex matter; the full story in the book is quite convincing.

A key extract:

The answer to the question of “good” people and “dirty” work … is that culture, structure and other organisational factors … may create a worldview that constrains people from acknowledging their work as “dirty”.

In other words, the NASA and contractor engineers did not set out to cheat the system.  On the contrary, because they complied so comprehensively with their highly rigorous procedures they simply never recognised that the decisions they were making were “deviant”; deviance had become normalised.

All of which makes me wonder about the culture of the pipeline industry and our approach to risk management.  I think we are doing all that we can and don’t have deviant practices, but if we are embedded in a system and culture that blinds us then we wouldn’t know, would we?

Even though most of us think the safety management study approach and its outputs are right, we should continue to wonder whether that is true, because it would be much better to work out any deviance for ourselves than to have it explained to us by a commission of enquiry (or a sociology researcher) after a catastrophe.

[1]  The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, Diane Vaughan, University of Chicago Press, 1996.


6 Responses to Deviant engineering

  1. Mark Harris says:

    Peter

    Should we be having our SMSs audited by external risk auditors (external to the pipeline business) to ensure we are not getting “stuck in any ruts” or developing blind spots?

    Regards

    Mark Harris | Business Development Manager – Resources
    FYFE Earth Partners


  2. petertuft says:

    I agree that this is worth discussing. Maybe a topic for another SMS facilitators’ forum! If anyone decides to go down that path then selection of the auditor would be critical. You would need someone known to be very broad-minded about risk methods, not a standard risk analyst. It might be better to get audits done not on individual pipeline SMSs but on the SMS process (either AS 2885 as a whole or the procedures of specific companies).

  3. Shane Becker says:

    There is a risk in assuming the AS 2885.1 SMS process is inherently safer than more prescriptive approaches. This is only true if the right expertise is represented throughout the entire process. Companies that generally have good expertise and safety practices in place in one area (such as plant facilities) can assume their practices and their experts will produce equally good results in another area (such as pipelines). Although AS 2885 is clear about competency, companies can assume that the integrity of the SMS process is maintained provided their safety experts are represented. This is not necessarily true, and pipeline engineering often isn’t recognised as the specialised area it is. Also, the risks associated with following a risk-based approach without ensuring the right expertise is present are often not appreciated.

    AS 3788 “Pressure Equipment – in Service Inspection” contains a warning about the danger of using Risk Based Inspection (RBI) methods without the right expertise throughout the process:

    “It is recognized that the use of these RBI methodologies is acceptable and consistent with the philosophies expressed elsewhere in this standard and that RBI may therefore be validly used to supplement or to modify some of AS/NZS 3788 requirements. However, the validity of RBI methods relies on the effective judgement of the RBI practitioners based on their expert knowledge of likely failure mechanisms. The use of RBI with inadequate skills and knowledge can lead to inappropriate, and potentially dangerous, conclusions. RBI must only be implemented by experienced and qualified personnel who are familiar with the RBI methodology and knowledgeable in the specific issues affecting the plant and equipment under study. This would normally require a multi-disciplinary team which can advise on process, maintenance, corrosion, inspection, metallurgical, instrumentation/control and mechanical engineering issues.” (Appendix B, AS 3788)

    The RBI and SMS approaches both rely on the right expertise being represented and having the appropriate authority. Appendix B of AS 2885.1 speaks of various ‘pitfalls’ but I don’t know if there is a similar warning that the process can actually lead to “inappropriate, and potentially dangerous, conclusions” if the right expertise isn’t represented (or if the pipeline expert in the room is overridden by ‘safety experts’ from other fields!).

    • petertuft says:

      Thanks Shane – a very useful point of view. I think you are right that AS 2885 recognises the importance of competence but doesn’t emphasise it enough. The quote from AS 3788 is very pertinent and useful.
      Competence in general is something the industry is looking at more and more, as is the management system behind pipeline operation (as in AS 2885 Part 3). So it seems reasonable to expect that it is something we should look at in the next revision of Part 1.

  4. Chris Hughes says:

    Just to correct Shane slightly, I don’t think anyone has said that the AS2885 method is safer than other methods, just that it is not less safe.

    I fully agree that competence is the real factor involved. Having facilitated many SMS workshops over the years for many different clients, I find that in too many cases I have to guide the participants into looking at and understanding the factors which really affect safety, rather than wasting time on matters which are not really relevant. This is certainly not helped by the way AS2885 happily mandates ‘competence’ and ‘experience’ without ever defining (or even hinting at) how these are to be determined or measured.
    I also think that there are many pipeline designers who do not understand that the SMS process is an integral part of design: I have turned up at too many workshops to discover that no-one has pre-populated a threat list for the pipeline. Instead they work from the generic list of threats that I always keep handy for such occurrences and decide in the workshop which ones are applicable and which aren’t. What is your experience like here, Peter?

    • petertuft says:

      I agree entirely that the SMS process should be an integral part of the design. However the majority of the workshops I facilitate these days are not for routine pipeline design jobs – they are either encroachments (where they want a facilitator who is independent of both the pipeline and the development) or operational review workshops. In neither case do I expect the engineers to come with a pre-populated list of threats.

      Encroachment workshops almost always take place at an early stage of design for the development (and rightly so), and it is only in the workshop itself that all parties are learning about the interaction between the development and the pipeline. Threat identification before that is difficult, although I usually make my own private checklist of things that may or may not be confirmed as relevant by the workshop.

      For operational reviews the participants clearly have a fair idea of the major issues but I still like to engage in a brainstorming session and then use my own checklist to prompt for issues that have not emerged spontaneously. Of course, for location-specific threats (crossings) there is often a pre-populated database but that’s not the part of the process that requires lateral thinking.

      Not sure if this has answered your question …
