One Step Back...

In continuing this little series I've got going here, I'd like to just quickly go back over a couple of points from last time. I'm trying to keep these posts relatively short, so I may have moved on to my next point a little too quickly. The crux of the last post was that a graduated consequence scale is inappropriate in an aviation safety context. My two main points to back up that statement were:

  • the potential for a catastrophic event is persistent in the primary aviation activity of flying from A to B; and

  • that, given aviation is a complex socio-technical system, risk conditions (call them hazards, events, or even just risks) upstream of the ultimate condition (death by aviation) cannot be categorised effectively.

I tried a few of these arguments out on some colleagues and they seemed unconvinced. So, I'm going to work on them a bit more here - this blogging thing is much more for my benefit than yours but thanks for stopping by anyway ;).


Vulnerability

I raised two objections to my vulnerability argument: that a common risk can flow to a wide variety of outcomes, and that the outcome of a risk may vary with aircraft size/occupancy. My responses to these points were brief - probably too brief, but this is meant to be a blog, not a dissertation. Let's go over them again.

I don't want to simply re-state my last post, but my best point is this: catastrophe could always have occurred, because nothing sets an inherent limit on the consequence below that level. Let's look into it a bit further with an example: a runway overrun.

The vast majority of runway overruns do not end in death, but was this because some recovery measure set an absolute maximum on the consequence? I don't think so; in fact, I think it was simply a further reduction in the likelihood of a completely catastrophic outcome - and now we have introduced likelihood into the consequence side of the equation. Is this complexity of my own making? Am I over-thinking this? Probably, but bear with me, please.

We seem to be back to an argument I put up in my first post on this issue. Risk, in an aviation safety sense at least, is not a discrete score - it is a continuum. At the very end of that continuum, always, is the most final of all outcomes. It may have a very small likelihood attached, but it is always there - persistent vulnerability.
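To make that concrete, here's a toy sketch (my own construction - the outcome categories and every probability are invented for illustration, not drawn from any accident data) of treating a runway overrun as a distribution over outcomes rather than a single consequence score. The catastrophic outcome never disappears from the distribution; it just carries a small likelihood.

```python
# Toy model: a risk condition as a distribution over outcomes, not a single
# consequence score. All outcome labels and probabilities are invented.

runway_overrun = {
    "stops safely beyond the runway end": 0.80,
    "minor aircraft damage": 0.15,
    "hull loss, no fatalities": 0.045,
    "fatal accident": 0.005,  # small, but never zero: persistent vulnerability
}

# The outcomes form a continuum with likelihoods attached...
assert abs(sum(runway_overrun.values()) - 1.0) < 1e-9

# ...and the most final outcome is always present with non-zero likelihood.
assert runway_overrun["fatal accident"] > 0
```

A "recovery measure" like an arrestor bed, in this picture, doesn't cap the consequence - it just shifts probability mass away from the bottom of the dictionary.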

Now again, I hear you saying (or it might be the voices in my head): but aircraft occupancy varies. Yes, you could construct a matrix with the consequence dimension graduated from one death to 500 deaths as required, and such a matrix would have its uses. It could be used to distinguish between the risks posed by individual operators or sectors of the industry for purposes such as surveillance planning, high-level regulatory standards development or safety performance measurement.

But it would not be useful within operational safety risk management - by that I mean that, once you are in the operational sphere of stuff happening, this type of matrix doesn't assist the decision-making process when one designs and implements safety measures. (I don't want to just drop this dimension - it is important and will pop up again later.)

The matrix in the above case only tells you about the risk associated with the final outcome. It does not assist in assessing risk conditions upstream.
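Here's a hypothetical illustration of that limitation (the bands and scoring rule are my invention, not any real regulator's matrix): a consequence scale graduated by fatalities only works when the thing being assessed has a fatality count of its own.

```python
# Hypothetical graduated matrix: consequence banded by number of deaths.
# Bands and the multiplicative scoring rule are invented for illustration.

def consequence_band(fatalities):
    """Map a final outcome (a fatality count) to a consequence band."""
    if fatalities == 0:
        return 1
    elif fatalities == 1:
        return 2
    elif fatalities <= 50:
        return 3
    else:
        return 4  # up to the full occupancy of a large aircraft

def matrix_score(likelihood, fatalities):
    """Risk score for a final outcome - fine for comparing operators."""
    return likelihood * consequence_band(fatalities)

# Works for the ultimate outcome of a large-aircraft accident:
print(matrix_score(likelihood=1, fatalities=200))  # -> 4

# But what is the fatality count of a cut maintenance budget? The input the
# matrix needs simply doesn't exist for an upstream risk condition.
```

The function has nothing to say about a latent condition because the consequence dimension is defined only at the very end of the continuum.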

So what do I mean when I say "upstream"?

Proximity

Aviation has a plethora of accident causation models. They have their differences, their pluses, their minuses and, of course, their similarities. I think I can say that the one thing all modern accident causation theories agree on is that accidents are never caused by a single act. They are the coming together of many acts with some being quite remote from the accident in terms of both space and time.

For this post, I'm going to run with the ol' tried & true Swiss-cheese model (SCM) [1]. It's not my favourite, but it is well-known and serves my purposes here.

What the SCM brought to the world was the awareness that decisions made at the top of an organisation have an impact on frontline safety. When combined with the knock-on and discrete effects from all other levels of the organisation, one could say that, in some circumstances, the frontline operators were doomed from the beginning of their operation.

Swiss-cheese Model

Examples of these latent conditions include decisions to reduce the maintenance budget, to outsource certain functions, and even more obscure concepts such as failing to inculcate a strong and positive safety culture. How does one risk-assess conditions like these? The link to any tangible consequential outcome is extremely tenuous, even with all the accident reports that cite contributory factors such as these.

So now it's time to think of solutions. Last time, I said I thought there were a couple; I'm still working on those ideas, but they will have to wait until next time - I'm already way past my target word count.

More to come...

[1] This paper is a critique of the model by a Eurocontrol team which included the inventor, Dr James Reason. It is a good read.

Dan Parsons

Dan is an airport operations manager currently working at Queenstown Airport in beautiful New Zealand. His previous roles have included airport and non-process infrastructure operation manager in the mining industry, government inspector with the Civil Aviation Safety Authority and airport trainer. Dan’s special interests include risk management, leadership and process hacks to make running airports easier. 

http://therunwaycentreline.com