Posts

Creating a Free Airport Safety Reporting & Management System

Safety reporting is the life-blood of a modern safety management system. In the early days of implementation, a great deal of effort was (and still is) expended in increasing the reporting of safety events (incidents and other occurrences) and hazards. As an industry, we’ve discussed and debated no-blame and just cultures. We’ve promulgated policies and waved flags, telling our team members that we can’t manage what we don’t measure. And we’ve implemented safety occurrence reporting systems to capture all this information.

If we’ve been successful in these endeavours, we’ve then faced a new problem - what do we do with all these reports? A classic case of be careful what you wish for!

Safety Governance Systems

Once upon a time, I went around the countryside auditing aerodrome safety management systems and dutifully asking SMS-related questions of all and sundry. It didn't matter who they were: I asked them what they knew about the aerodrome's SMS, how they managed risks, and what they did to make sure everything was being well managed. I didn't ask everyone the exact same questions, like asking the guy mowing the grass how he ensured enough resources were available to manage safety, but I did bang the SMS gong at anyone who was around or would listen. I'm not so sure that was the right approach.

Noun-based Regulation

The modern world is definitely in love with its noun-based activities. Each week, a paradigm-shifting approach to some human endeavour is announced with a title like value-based health care or outcome-based education. When I delve into the details, I am generally left confused as to either what they are selling or how it is different at all. Regulation is no different. Just plugging "based regulation" into Google yields, on the first page alone, principle-based, results-based, performance-based, outcomes-based and output-based regulatory approaches.

A World without Reason

Recently, I have felt like I'm in danger of becoming complacent with the bedrock of my chosen field. I'll admit that in the past, I've been fairly vocal about this bedrock's limitations and its mantra-like recitation by aviation safety professionals the world over. But the recent apparent abandonment of this concept by one of the first Australian organisations to go "all-in" on it gave me cause for reflection. I am, if you haven't guessed it, talking about the "Reason Model" or "Swiss Cheese Model".

Unnecessary Segregation or Pragmatic Isolation?

I've been out in the "real" world for the past six months or so and in that time, my thinking on risk management has changed a little bit. So here it comes, a confession... I have been using a PIG recently and I feel its use has probably helped with effective management of overall risk.

No Man is an Island

I've been a bit out of the loop over the past couple of months as I try to get a handle on my new job and the (almost overwhelming) responsibility that goes along with it. But I can't ignore the action over at the Federal Senate's Rural and Regional Affairs and Transport References Committee's inquiry into Aviation Accident Investigations.

BTIII: Assessing Uncertainty

I can't lie to you. I have been turning myself inside out trying to get a handle on risk evaluation in the aviation safety sphere for close to five years now and I still don't feel any closer to an answer. And I say "an" answer and not "the" answer. Since you are always assessing risk in terms of your objectives, there can and will be multiple approaches to assessing the risk of the same scenario depending on whether you are considering your safety, financial or legal objectives.

Systems Modelling

When I joined the aviation safety regulator I was introduced to the concept of systems-based auditing (SBA). Before this I had been carrying out aerodrome inspections and I thought becoming an Aerodrome Inspector for the government was going to be more of the same. How wrong I was! Even after four years, my concept of systems-based auditing is still evolving. I am coming to discover, and everything I read seems to attest, that most things in life tend to be more complex than we initially think - SBA is no different.

Regulation, The Final Frontier?

The week before last, I finished a 4-year stint with the aviation safety regulator. Even though I'm heading back to industry, I'm not going to stop writing this blog. I believe that the role of the national regulator is the next safety frontier (not the last ;)) and I like the idea of exploring new territory. As the industry continues to explore concepts like safety management, systems-based this, risk-based that and outcome-based whatchamacallit as well as safety culture, we are all going to come to the realisation that safety can be greatly affected (more than we ever imagined) by the approach and actions taken by a national regulator.

BTII: Control-freak*

As a follow-on to my first post on the Bow-Tie risk assessment method, I thought I'd concentrate on controls (or barriers or whatever else you would like to call them). This is, after all, where all the action happens. Risk controls are where we spend most of our time - they are the practical aspect of managing risk.

Lessons from Taleb's Black Swan

Having just finished reading Nassim Taleb's The Black Swan, I initially thought about writing a not-so-in-depth assessment of the book's positive and negative points - but I'm not much of a book reviewer and a comprehensive critique is probably beyond my capabilities (at this stage). So, instead I thought I would focus on just a couple of the book's significant concepts and explore how they may apply in the aviation context.

Background

The crux of the book, if it can be boiled down to a single paragraph, is that in this modern, complex world we are unable to predict the future when that future involves Black Swan events. Black Swans are those events previously thought extremely rare, if not impossible. The term comes from the standard assertion, made prior to the discovery of black swans in Australia, that all swans are white.

Taleb's specific definition for a Black Swan has three attributes: it lies outside of regular expectations, it carries an extreme impact and it is subject to post-hoc explanation, making it appear predictable.

This third attribute is the first significant talking point that I'd like to address.

Retrodiction

When humans look back at a past event, the tendency to create a narrative is strong. It helps us make sense of the world and assists with recall. But in doing so, especially in a complex world, we are likely to introduce a few errors and fall into a few bear-traps.

The big one is over-simplification. The complexity of the typical human's operating environment is growing. Even aviation, which was pretty complex to begin with, has become a close-coupled, global transport system practically unfathomable to the individual. In dealing with this complexity, people tend to identify a limited number of factors and over-attribute their causal influence. Often, this over-emphasis comes at the cost of environmental influences which are outside the control of the event's main players.

Taleb, coming from the world of finance, cites examples from that field, but I couldn't help thinking of accident investigation while reading this. Generally, I felt rather positive about the aviation industry's approach to post-hoc analysis of aircraft accidents - a type of Black Swan event.

While the development of a narrative is typical, most accident investigation bodies do go beyond the basic "what happened in the cockpit" and look at the latent conditions which contributed to the operational environment. We have the widespread use of the Reason model to thank for this. Some accident investigation bodies, like the ATSB, shy away from the use of the word cause and instead opt for contributory factor or something similar. This recognises that direct causal relationships between identified precursors and the accident can rarely, if ever, be proven in a post-hoc investigation.

Prediction, Shmidiction

Taleb has a real problem with prediction and he puts up quite a few arguments against it. One of my favourites is the "nth billiard ball" - so let me butcher it for you.

The level of accuracy required to make predictions increases significantly with only small increases in system complexity.

For example, let's say you want to calculate the movement of billiard balls. The first couple of collisions aren't too much of a problem but it gets really complicated, very quickly. I won't profess to understand the maths behind these calculations but Michael Berry has apparently shown that:

  • in order to calculate the ninth collision, you need to include the gravitational pull of the man standing at the next table, and
  • in order to calculate the fifty-sixth collision, you need to consider every single particle in the universe in your calculation.

And this is a simple problem! Now consider the dynamic and socio-technical aspects of aviation to really make your head hurt.
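
To make the compounding concrete, here's a toy simulation of that error growth. To be clear, this is my own sketch, not Berry's actual physics: I've assumed each collision multiplies a tiny angular measurement error by a constant factor of about 20, and both that factor and the starting error are invented for illustration.

    # Toy illustration of how measurement error compounds across
    # billiard-ball collisions. Assumptions (mine, not Berry's):
    # each collision amplifies the angular error by a constant ~20x.
    AMPLIFICATION = 20.0   # assumed error growth per collision
    error = 1e-35          # invented initial angular error (radians)

    for collision in range(1, 57):
        error *= AMPLIFICATION
        if collision in (9, 27, 56):
            print(f"after collision {collision:2d}: error ~ {error:.1e} rad")

    # By roughly the 27th collision the error exceeds 1 radian --
    # the ball's direction is effectively unknowable, however
    # precisely you measured the opening break.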

Scalability

The third significant concept I wanted to touch on was scalability. I'll probably also murder this nuanced concept like those above but here goes.

A scalable is something in which the scale of the outcome is not limited by the nature of the act.

The concept was introduced to Taleb in terms of employment so let's start there. A non-scalable job is one where you are paid by the hour or according to some other unit of work. For example, a barber gets paid per haircut. There is no way for him or her to be paid more than the physical limits of performing the required service allow. A scalable job is one where pay is not directly linked to the unit of work performed. Consider an author: he or she writes a book and may receive $1 in return, or $1,000,000.
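
To make the contrast concrete, here's a hedged sketch (every figure below is invented for illustration) comparing a fixed per-unit income with a heavy-tailed, scalable one:

    import random

    random.seed(1)  # reproducible illustration

    # Non-scalable: a barber paid per haircut. Income is capped by
    # the number of haircuts one person can physically perform.
    haircuts_per_year = 2000                  # invented figure
    barber_income = haircuts_per_year * 30    # assumed $30 per cut

    # Scalable: authors whose book earnings follow a heavy-tailed
    # Pareto distribution -- most earn little, a rare few a fortune.
    author_incomes = sorted(1000 * random.paretovariate(1.1)
                            for _ in range(10_000))

    print(f"barber earns exactly  ${barber_income:,}")
    print(f"median author earns   ${author_incomes[5_000]:,.0f}")
    print(f"luckiest author earns ${author_incomes[-1]:,.0f}")

The barber's number never changes; the authors' median is modest but the tail is enormous - and that tail is where the scalable action is.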

It took me a while but I started to see aviation accident contributory factors in the same light. Some acts, errors, mistakes, etc. will only impact on the single activity being undertaken at the time - a pilot forgetting to put the landing gear down will only contribute to his or her own accident. But others may have a scalable impact and could contribute to many - a poor policy decision relating to training may result in all crew carrying the same deficient knowledge which, in the right circumstances, could contribute to many accidents.

Pulling it Together

Taleb brings together these and numerous other concepts and outlines his approach to financial investment - he calls it the Barbell Strategy. Recognising the problems with predicting outcomes in complex, dynamic socio-technical systems, he takes a hyper-conservative and a hyper-aggressive approach at the same time. He invests significantly in low-risk investments and then places numerous small bets on extremely speculative opportunities that carry a significant pay-off - he tries to catch as many positive Black Swan events as possible while minimising his exposure to negative ones.
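
In rough numbers, the barbell might look something like this sketch (all figures are invented; this is neither Taleb's actual allocation nor financial advice):

    # A minimal barbell sketch: most capital in near-riskless assets,
    # the remainder spread thinly across many speculative bets.
    capital = 100_000
    safe_fraction = 0.90    # assumed hyper-conservative share
    n_bets = 20             # assumed number of small speculative bets

    safe = capital * safe_fraction
    per_bet = capital * (1 - safe_fraction) / n_bets

    floor = safe * 1.02              # assumed ~2% low-risk return
    upside = floor + per_bet * 100   # one bet catches a 100x Black Swan

    print(f"worst case (every bet lost): ${floor:,.0f}")
    print(f"one 100x winner:             ${upside:,.0f}")

The downside is capped at the safe side's return; the upside is open-ended.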

So what's our Barbell Strategy for aviation safety?

We need to invest in things that we know are closely related to bad stuff happening - say, runway safety, CFIT, etc. - and we need to invest in things that can have a scalable impact on safety - e.g. poor training standards, inappropriate regulations, etc.

How much we should invest in each is an open question but the basic concept sounded pretty good to me. Actually, it almost sounded familiar...

Confirmation Bias? You Betcha!

The more I thought about Taleb's strategy in the aviation safety context, the more it sounded like scoring risk according to proximity and pathways. My still-incomplete concept of risk evaluation sought to identify the more critical risk conditions according to either their proximity to the ultimate outcome of death and destruction or the number of pathways by which the risk condition could result in catastrophe.

Proximity applies to the non-scalable conditions that contribute to accident occurrence, ranking them higher the closer they are to that ultimate outcome. This avoids those nasty prediction problems Taleb keeps talking about. Pathways covers the scalable conditions that may contribute to accident occurrence but where prediction of direct causal relationships is impossible. Instead, you simply consider the scale of potential contributory pathways as a measure of criticality.
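
Since the concept is admittedly incomplete, here's one speculative way it could be encoded - the scoring rules, the normalising constant and the example conditions below are all invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class RiskCondition:
        name: str
        steps_to_accident: int   # proximity: fewer steps = more critical
        pathways: int            # scalability: more pathways = more critical

    def criticality(c: RiskCondition) -> float:
        # Hypothetical scoring: take whichever lens rates the
        # condition higher. The /10 normalising constant is a guess.
        return max(1.0 / c.steps_to_accident, c.pathways / 10.0)

    conditions = [
        RiskCondition("gear-up landing error", steps_to_accident=1, pathways=1),
        RiskCondition("deficient training syllabus", steps_to_accident=4, pathways=8),
        RiskCondition("worn runway markings", steps_to_accident=3, pathways=2),
    ]

    for c in sorted(conditions, key=criticality, reverse=True):
        print(f"{c.name}: criticality {criticality(c):.2f}")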

I have a few threads of thought coming together at the moment in this area. I'm excited to find out how they all tie together and whether I can get them out of my head and on to this blog.

BTI: Dressing up for Risk Assessments

I've been doing a lot of pondering on the Bow-Tie method of risk assessment for a project at work. Bow-Tie is a tool used by many, especially in the oil & gas industry, to create a picture of risk surrounding a central event. It has a few positives and a few negatives, but the negatives can be overcome if you understand the limitations of the model being used.