Safety psychologist David Broadbent explores the anatomy of industrial disasters, and strategies to diagnose and prevent them.
In my last blog I revisited the Texas City plant disaster and the awful loss of life in a single incident that should never have happened.
There were more than enough warning signs. The evidence afterwards pointed to an employer that had learned very little from other operators' incidents, or from its own.
We are now at the fifth anniversary of the Gulf of Mexico Deepwater Horizon oil rig disaster. On the day the rig exploded, the senior leadership of BP was celebrating seven years of excellent safety performance according to their lost time injury frequency rate (LTIFR).
Meanwhile the installation caught fire and killed eleven people. I have heard it said they were actually cutting the LTIFR cake when one of their operations blew up.
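For readers unfamiliar with the metric being celebrated that day, LTIFR is commonly expressed as lost-time injuries per a fixed number of hours worked, often one million (the multiplier varies by jurisdiction, so treat it as an assumption here). A minimal sketch of the calculation shows how small and reassuring the number can look even while process safety risk is escalating:

```python
# LTIFR: lost-time injuries normalised per hours worked.
# The per-million-hours multiplier is a common convention, not universal.
def ltifr(lost_time_injuries, hours_worked, per_hours=1_000_000):
    """Lost Time Injury Frequency Rate."""
    return lost_time_injuries * per_hours / hours_worked

# Example: 2 lost-time injuries across 4,000,000 hours worked
print(ltifr(2, 4_000_000))  # → 0.5
```

A rate like 0.5 measures slips, trips, and strains; it says nothing about whether a well is about to blow out. That is precisely why LTIFR alone can lull leadership into celebrating while a catastrophe is brewing.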
From the investigation, we know that there was a witnessed discussion on the drill deck in which Dewey Revette was drawing attention to a significant deviation in process operations.
This was a heated discussion and the management delegate firmly instructed the drillers to “keep going”. They did, and they died.
If even a fraction of high reliability organisation (HRO) thinking had been practised on the Deepwater Horizon, eleven people would still be alive, BP and the petrochemical industry would have suffered less reputational damage, the Gulf of Mexico environment would be far healthier, and petrochemicals would be more sustainable.
Some warning signs that all was not well on the rig had been present for some time, and had become “normal operating procedure”. That situation should be very scary.
When it becomes “normal” to operate a process in a dangerous state, it becomes part of risk tolerance and culture.
The process could even ‘deviate’ towards a safe state, and the culture or the management system would drag it back to high risk, because that is the norm.
Keep looking for warning signs
TransformationalSafety.Com has developed the Anatomies of Disaster program to actively assist organisations to develop internal capacities to recognise and act on warning signs of industrial disaster.
Andrew Hopkins makes the point that “there will always be warning signs that surface before things go wrong. If you have a system which is going to pick up those warning signs, then you will be averting disaster.”
Who was it who said, “Those who ignore history are doomed to repeat it”? Never a truer collection of words, and how often we ignore the wisdom. We are slow to learn, especially when the risk seems to be collective.
The Anatomies of Disaster program draws on some of the world’s leading resources to collectively explore precursors to real industrial disasters. These things never happen in isolation, and there is great value in getting “under the hood” of organisational reliability and failure.
Growing list of industrial disasters
Some of the well-known incidents that teach us the fabric of industrial disasters are the Titanic, Longford Gas Explosion (Victoria, Australia), Challenger disaster (NASA, USA), Union Carbide toxic release (Bhopal, India), Gretley Mine disaster (NSW, Australia), Texas City Refinery explosion (USA), Imperial Sugar explosion (Port Wentworth, USA), Deepwater Horizon explosion (BP, Gulf of Mexico), Pike River (New Zealand), Spanish train disaster (Santiago de Compostela, Spain), and a very long list of others.
Sadly, the list of available material is never-ending. Our goal, through the continued application of known organisational and safety interventions, is to reduce the rate of man-made industrial disasters, shorten the list of infamous places and infamous employers, save lives, and minimise injury and disease.
A high reliability and industrial disaster prevention program should contain at least:
 Accident causation sequences
 Role of senses, data, and understanding in decision making
 Failures of leadership and contributions to unsafe outcomes
 Risks of collective decision making (groupthink)
 Role of culture as an active contributor to incidents large and small
 Functional appreciation of the power of leading indicators as predictor variables
 Hands-on experience of using Process Safety and Operational Risk Management techniques
 Application of some world-leading high reliability strategic initiatives to add layers of protection and resilience to operational systems
 Some fun stuff to activate your safety culture
The quality of the thinking, skills, and interventions in your health and safety program has significant impacts at every level of accident causation, and on the resilience of your culture.
You have to kidnap the attention of participants. The material must compel us to understand how to prevent industrial disasters.
Workplaces that suffered disasters were once normal, everyday sites. A casual glance does not reveal the risk.
You have to explore the underlying processes and learn how to tell whether or not they are functioning; you have to identify areas that could fail, and redesign control systems accordingly.
Disaster prevention tools and human factors
Tools from the armoury of Process Safety, Operational Risk Management, and High Reliability Organising (HRO) should be ready to hand and familiar to our touch.
Apart from job training in the mechanics of process safety, workers need skills in health and safety identification, assessment, reporting, and management systems, as well as social and cultural skills. They are not taught these skills at school, or even at university.
Management at every level needs to explore the role that people and human factors play in processes. The best process safety measures in the world are only as good as the weakest link among the people who implement them.
The range of factors that contribute to the human factors side of the equation is wide and scary, and prone to being ignored in industry.
In many cases the final action in the accident causation sequence involves a decision by a person or group. The error is thinking that this was the “cause”. Convenient, I know!
Horrifically dangerous, though. The “error” was just one of many, and perhaps the most visible, or the easiest to blame, but investigation, prosecution, and conviction do not prevent industrial disasters.
Work safe and work well.
• David G Broadbent is a safety psychologist, founder of TransformationalSafety.Com, and serves several employers in the Pacific Rim and Africa.