How to study the anatomy of industrial disasters

Safety psychologist David Broadbent explores the anatomy of industrial disasters, and strategies to diagnose and prevent them.

In my last blog I revisited the Texas City plant disaster and the awful loss of life in a single incident that should never have happened.

There were more than enough warning signs. The evidence afterwards pointed to an employer that had learned very little from other organisations' incidents, or from its own.

We are now at the fifth anniversary of the Gulf of Mexico Deepwater Horizon oil rig disaster. On the day the rig exploded, the senior leadership of BP was celebrating seven years of excellent safety performance according to their lost time injury frequency rate (LTIFR).

Meanwhile the installation caught fire and killed eleven people. I have heard it said they were actually cutting the LTIFR cake when one of their operations blew up.
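For readers unfamiliar with the metric, LTIFR is conventionally expressed as lost time injuries per million hours worked. A minimal sketch of the calculation follows; the multiplier is a common convention (some schemes use 200,000 hours) and the figures are hypothetical, not BP's actual numbers.

def ltifr(lost_time_injuries, hours_worked):
    # Lost time injury frequency rate: lost time injuries per one million hours worked.
    # The one-million-hour multiplier is a common convention; some schemes use 200,000 hours.
    return lost_time_injuries * 1_000_000 / hours_worked

print(round(ltifr(2, 4_500_000), 2))  # 0.44 on hypothetical figures

A low LTIFR of this kind is a lagging, personal-injury measure; it says nothing about whether the process safety barriers on an installation are intact.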

From the investigation, we know that there was a witnessed discussion on the drill deck in which Dewey Revette was drawing attention to a significant deviation in process operations.

See detail on the incident investigation here:
http://www.oilandgasiq.com/integrity-hse-maintenance/white-papers/deepwater-horizon-anatomy-of-a-disaster-a1-printab/

This was a heated discussion and the management delegate firmly instructed the drillers to “keep going”. They did, and they died.

If even a fraction of high reliability organisation (HRO) thinking had been practised on the Deepwater Horizon, eleven people would still be alive, BP and the wider petrochemical industry would carry less reputational damage, the Gulf environment would be far healthier, and the industry more sustainable.

Some warning signs that all was not well on the rig had been present for some time, and had become “normal operating procedure”. That situation should be very scary.

When it becomes “normal” to operate a process in a dangerous state, it becomes part of risk tolerance and culture.

The process could even ‘deviate’ towards the safe side, and the culture or the management system would drag it back to the high-risk state, because that is the norm.

Some of the results of a dust explosion at the Imperial Sugar refinery in Port Wentworth, USA. Many risks are well understood, but become more and more tolerated, or ‘normalised’, while their management measures lapse.

Keep looking for warning signs
TransformationalSafety.Com has developed the Anatomies of Disaster program to actively assist organisations to develop internal capacities to recognise and act on warning signs of industrial disaster.

Andrew Hopkins makes the point that “there will always be warning signs that surface before things go wrong. If you have a system which is going to pick up those warning signs, then you will be averting disaster.”

Who was it who said, “Those who ignore history are doomed to repeat it”? Rarely have truer words been spoken, and how often we ignore the wisdom. We are slow to learn, especially when the risk seems to be collective.

The Anatomies of Disaster program draws on some of the world’s leading resources to collectively explore precursors to real industrial disasters. These things never happen in isolation, and there is great value in getting “under the hood” of organisational reliability and failure.

Growing list of industrial disasters
Some of the well-known incidents that reveal the fabric of industrial disasters are the Titanic, the Longford gas explosion (Victoria, Australia), the Challenger disaster (NASA, USA), the Union Carbide toxic release (Bhopal, India), the Gretley mine disaster (NSW, Australia), the Texas City refinery explosion (USA), the Imperial Sugar explosion (Port Wentworth, USA), the Deepwater Horizon explosion (BP, Gulf of Mexico), Pike River (New Zealand), the Spanish train disaster (Santiago de Compostela, Spain), and a very long list of others.

Sadly, the list of available material is never-ending. Our goal, through the continued application of known organisational and safety interventions, is to reduce the rate of man-made industrial disasters, shorten the list of infamous places and infamous employers, save lives, and minimise injury and disease.

A high reliability and industrial disaster prevention program should contain at least:
• Accident causation sequences
• The role of senses, data, and understanding in decision making
• Failures of leadership and their contributions to unsafe outcomes
• Risks of collective decision making (groupthink)
• The role of culture as an active contributor to incidents large and small
• Functional appreciation of the power of leading indicators as predictor variables (see the sketch after this list)
• Hands-on experience of using process safety and operational risk management techniques
• Application of some world-leading high reliability strategic initiatives to add layers of protection and resilience to operational systems
• And some fun stuff to activate your safety culture.
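As a purely illustrative sketch of what “leading indicators as predictor variables” can mean in practice, a site might track barrier-erosion signals month by month and trigger a review long before the lagging injury statistics move. The indicator names, weights, and threshold below are hypothetical and are not drawn from the Anatomies of Disaster program.

WEIGHTS = {"overdue_critical_maintenance": 2.0, "open_near_miss_reports": 1.0, "alarm_overrides": 3.0}
ALERT_THRESHOLD = 30.0  # hypothetical review trigger

monthly_data = [
    {"month": "Jan", "overdue_critical_maintenance": 3, "open_near_miss_reports": 5, "alarm_overrides": 1},
    {"month": "Feb", "overdue_critical_maintenance": 7, "open_near_miss_reports": 9, "alarm_overrides": 4},
    {"month": "Mar", "overdue_critical_maintenance": 12, "open_near_miss_reports": 14, "alarm_overrides": 9},
]

for row in monthly_data:
    # Weighted "drift score": rising leading indicators flag eroding barriers
    # well before lagging indicators such as LTIFR register anything.
    score = sum(weight * row[name] for name, weight in WEIGHTS.items())
    status = "REVIEW - warning signs accumulating" if score >= ALERT_THRESHOLD else "monitor"
    print(f"{row['month']}: drift score {score:.0f} -> {status}")

A real program would validate its indicators and weights against incident history rather than guess them, but the principle is the same: the signal is present well before the event.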

‘Anatomy’ of a dust explosion (Wall Street Journal). Managers and operators should also understand the ‘anatomy’ of human factors, such as groupthink, rationalisation, and safety culture, writes David Broadbent.

The quality of the thinking, skills, and interventions in your health and safety programme has a significant impact at all levels of accident causation, and on the resilience of your culture.

You have to capture the attention of participants. The material must compel us to understand how to prevent industrial disasters.

Workplaces that suffered disasters were once normal, everyday sites. A casual glance does not reveal the risk.

You have to explore the underlying processes and how to tell whether or not they are functioning; identify the areas that could fail, and re-design the control systems accordingly.
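One hedged illustration of what “telling whether the underlying process is functioning” can look like: compare live readings against the design operating envelope, so that drift which has quietly become “normal” still shows up as a deviation. The parameter names and limits below are invented for illustration and are not taken from any real rig.

# Illustrative design envelope only; real limits come from the process safety case.
SAFE_ENVELOPE = {
    "wellbore_pressure_psi": (0, 8000),
    "mud_return_flow_delta_bbl": (-50, 50),
}

def check_reading(parameter, value):
    # Flag any reading that falls outside the design envelope, however "normal" it feels.
    low, high = SAFE_ENVELOPE[parameter]
    if low <= value <= high:
        return f"{parameter}={value}: within design envelope"
    return f"{parameter}={value}: OUTSIDE design envelope - stop and investigate"

# The kind of deviation that was argued over on the Deepwater Horizon drill deck:
print(check_reading("mud_return_flow_delta_bbl", 120))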

Disaster prevention tools and human factors
Tools from the armoury of Process Safety, Operational Risk Management, and High Reliability Organising (HRO) should be ready to hand and familiar to our touch.

Apart from job training in the mechanics of process safety, workers need skills in hazard identification, risk assessment, reporting, management systems, and the social and cultural side of health and safety. They are not taught these skills at school, or even at university.

Management at every level needs to explore the role that people and human factors play in processes. The best process safety measures in the world are only as good as the weakest link among the people who implement them.

The range of factors that contribute to the human factors side of the equation is wide and scary, and prone to being ignored in industry.

In many cases the final action in the accident causation sequence involves a person or a group decision. The error is to think that this was the “cause”. Convenient, I know!

Horrifically dangerous, though. The “error” was just one of many, perhaps the most visible or the most liable, but investigation, prosecution, and conviction do not prevent industrial disasters.

Work safe and work well.

• David G Broadbent is a safety psychologist, founder of TransformationalSafety.Com, and serves several employers in the Pacific Rim and Africa.



4 thoughts on “How to study the anatomy of industrial disasters”

  1. Excellent article. The Texas City incident, as you rightly mentioned, was laid at the door of the employer; however, many items led up to this disaster. One of them was a level indicator which was still in use but old in design, for which the designer was responsible. Among others, if we use Difford’s single root cause theory, it would not touch management, but the designer.
    No matter how many disasters, past and future, that human element of error, and some say of greed, I fear will be with us. Regards, Shane

  2. David, thank you very much for the article. It is indisputable that there is no smoke without fire; the same goes for disasters. There is no disaster without warning signs. For example, before the Titanic sank, several warnings of icebergs were sent by other ships. A smaller ship, the Californian, had stopped in the ice field nearby. Its crew saw the distress rockets sent up by the Titanic in the distance, but the captain did not grasp what they meant. The signs were there, but they were ignored by the captain.

    Last year disaster recovery experts met at the National Press Club in Washington D.C. to carve out a new paradigm for disaster recovery. The experts unanimously agreed that a new paradigm had to be established so that the roots of a strong recovery are planted before the event occurs. “My message to you is: Recovery does not start in the response phase. It starts well before the response phase,” said Joseph Nimmich, associate administrator at the Office of Response and Recovery (ORR). The new paradigm would enable people to use predictive data on weather patterns and to anticipate storm cycles and their likely effects. For example, if a hurricane is anticipated, government can become more proactive in signing contracts with private companies for services that will be needed after the disaster.

    In a nutshell, disasters have to be proactively managed. Brad Kieserman of the ORR lamented that the federal government in the USA had always responded to disasters the same way it did in 1802, when a fire ripped through the city of Portsmouth, New Hampshire, destroying a seaport that was a viable vehicle for commerce.

  3. I believe that a high standard of initial training, especially in the basics, followed by continuous refresher and advanced training, theoretical and PRACTICAL, is the key to improved safety. If the worker understands his equipment, and there is a good ratio of experienced to new workers to pass on the knowledge, then many safety issues will be quickly identified. I’m disappointed at the standards of tertiary education. There seems to be an emphasis on the number of passes rather than on maintaining high standards to produce quality passes.
    The second problem is the lack of knowledgeable, experienced managers/supervisors. These individuals should come up through the ranks to ensure they fully understand all the risks. They should understand the need for knowledgeable, experienced workers and the importance not only of continuous training but also of retaining them. I believe that if a survey were done, it would be found that the most successful companies are those that value their employees more than profit.
    Thirdly, there is the issue of excessive documentation and processes. Eventually the ‘paperwork’ becomes a drag and turns into an after-the-fact paper-pushing episode. We rely so much on man-made computers and forget that our brains are the most advanced computer systems. Much of the paperwork is aimed at covering the employer should something go wrong, or at proving he is doing his job. This all brings me back to my first point: initial quality training and experience!

  4. A very compelling article indeed! Time and again, as pointed out by David, we are shown warning signs and fail to react, especially when it comes to lessons learnt. In fact, it could be said that the lesson we learn most often is that lessons aren’t learnt!
    Many times this is because the reports that are issued are carefully couched so as to avoid repercussions for the employer. In this context, we fail to identify the true lesson to be learnt, and thus never learn.
    It is often difficult for the H&S professional to keep both his (or her) integrity and job when finalising an investigation report, and it is his conscience that is torn, not management’s, when the repeat incident occurs.
    If anyone has a solution to that conundrum, it would be a total game changer in the field of health and safety.
