Having just finished reading Nassim Taleb's The Black Swan, I initially thought about writing a not-so-in-depth assessment of the book's positive and negative points - but I'm not much of a book reviewer and a comprehensive critique is probably beyond my capabilities (at this stage). So instead, I thought I would focus on just a few of the book's significant concepts and explore how they might apply in the aviation context.
The crux of the book, if it can be boiled down to a single paragraph, is that in this modern, complex world we are unable to predict the future when that future involves Black Swan events. Black Swans are events previously thought extremely rare, if not impossible. The term comes from the once-standard assertion that all swans are white, made before black swans were discovered in Australia.
Taleb's specific definition of a Black Swan has three attributes: it lies outside regular expectations, it carries an extreme impact, and it is subject to post-hoc explanation that makes it appear predictable.
This third attribute is the first significant talking point that I'd like to address.
When humans look back at a past event, the tendency to create a narrative is strong. It helps us make sense of the world and assists with recall. But in doing so, especially in a complex world, we are likely to introduce a few errors and fall into a few bear-traps.
The big one is over-simplification. The complexity of the typical human's operating environment keeps growing. Even aviation, which was pretty complex to begin with, has become a tightly coupled, global transport system practically unfathomable to any individual. In dealing with this complexity, people tend to identify a limited number of factors and over-attribute causal influence to them. Often, this over-emphasis comes at the cost of overlooking environmental influences that lie outside the control of the event's main players.
Taleb, coming from the world of finance, cites examples from that world, but I couldn't help thinking of accident investigation while reading this. Generally, I felt rather positive about the aviation industry's approach to post-hoc analysis of aircraft accidents - a type of Black Swan event.
While the development of a narrative is typical, most accident investigation bodies do go beyond the basic "what happened in the cockpit" and look at the latent conditions which contributed to the operational environment. We have the widespread use of the Reason model to thank for this. Some accident investigation bodies, like the ATSB, shy away from the word "cause" and instead opt for "contributory factor" or something similar. This recognises that direct causal relationships between identified precursors and the accident can rarely, if ever, be proven in a post-hoc investigation.
Taleb has a real problem with prediction and he puts up quite a few arguments against it. One of my favourites is the "nth billiard ball" - so let me butcher it for you.
The level of accuracy required to make predictions increases significantly with only small increases in system complexity.
For example, let's say you want to calculate the movement of billiard balls. The first couple of collisions aren't too much of a problem, but it gets really complicated, very quickly. I won't profess to understand the maths behind these calculations, but Michael Berry has apparently shown that:
- in order to calculate the ninth collision, you need to include the gravitational pull of the man standing at the next table, and
- in order to calculate the fifty-sixth collision, you need to consider every single particle in the universe in your calculation.
And this is a simple problem! Now consider the dynamic and socio-technical aspects of aviation to really make your head hurt.
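I can't reproduce Berry's maths, but the phenomenon underneath it - sensitive dependence on initial conditions - is easy to demonstrate. Here is a minimal sketch (my own toy model, not the billiard calculation) using the chaotic "doubling map": two starting points that differ by one part in ten billion become completely uncorrelated within a few dozen steps.

```python
def doubling_map(x, steps):
    """Iterate the chaotic doubling map x -> 2x mod 1."""
    trajectory = [x]
    for _ in range(steps):
        x = (2.0 * x) % 1.0
        trajectory.append(x)
    return trajectory

a = doubling_map(0.1, 40)
b = doubling_map(0.1 + 1e-10, 40)  # perturb by one part in ten billion

gaps = [abs(p - q) for p, q in zip(a, b)]
# The gap roughly doubles every step: a measurement error in the
# tenth decimal place dominates the prediction within ~35 steps.
```

The point isn't the map itself - it's that each extra step demands another order of magnitude of measurement precision, which is exactly the billiard-ball problem in miniature.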
The third significant concept I wanted to touch on is scalability. I'll probably also murder this nuanced concept like those above, but here goes.
Something is scalable when the scale of the outcome is not limited by the nature of the act itself.
The concept was introduced to Taleb in terms of employment, so let's start there. A non-scalable job is one where you are paid by the hour or according to some other unit of work. A barber, for example, gets paid per haircut; there is no way for him or her to be paid more than the physical limits of performing the service allow. A scalable job is one where pay is not directly linked to the unit of work performed. Consider an author: he or she writes a book and may receive $1 in return, or may make $1,000,000.
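The distinction is simple enough to put in code. A quick sketch (the fee and working-year figures are invented for illustration; the $1 and $1,000,000 come from the example above):

```python
def barber_income(haircuts, fee_per_cut=30):
    """Non-scalable: total pay is a linear function of units of work,
    so it is capped by the physical limit on haircuts performed."""
    return haircuts * fee_per_cut

# Scalable: the same unit of work (writing one book) can return
# wildly different amounts - pay is decoupled from the work itself.
book_royalties = [1, 1_000_000]

# 8 cuts a day, 250 working days: a hard ceiling of $60,000.
yearly_ceiling = barber_income(8 * 250)
```

No amount of extra effort moves the barber's income by orders of magnitude; the author's outcome is limited only by how the book happens to sell.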
It took me a while, but I started to see aviation accident contributory factors in the same light. Some acts, errors, mistakes, etc. will only affect the single activity being undertaken at the time - a pilot forgetting to put the landing gear down will only contribute to his or her own accident. But others may have a scalable impact - a poor policy decision relating to training may result in all crew carrying the same deficient knowledge, which, in the right circumstances, could contribute to many accidents.
Pulling It Together
Taleb brings together these and numerous other concepts and outlines his approach to financial investment - he calls it the Barbell Strategy. Recognising the problems with predicting outcomes in complex, dynamic socio-technical systems, he takes an approach that is simultaneously hyper-conservative and hyper-aggressive. He invests significantly in low-risk investments and then places numerous small bets on extremely speculative opportunities that carry a significant pay-off - he tries to catch as many positive Black Swans as possible while minimising his exposure to negative ones.
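As a toy calculation (my numbers, not Taleb's), the appeal of the strategy is that the downside is known in advance while the upside stays open:

```python
def barbell_value(capital, safe_frac=0.90, safe_return=0.03,
                  n_bets=20, payoffs=None):
    """Toy barbell portfolio (illustrative figures): safe_frac sits in
    low-risk assets earning safe_return, the remainder is split evenly
    across n_bets speculative bets. payoffs[i] is the multiple
    returned by bet i (0.0 = total loss)."""
    payoffs = payoffs or [0.0] * n_bets
    stake = capital * (1 - safe_frac) / n_bets
    safe = capital * safe_frac * (1 + safe_return)
    return safe + sum(stake * p for p in payoffs)

worst = barbell_value(100_000)                           # every bet fails
lucky = barbell_value(100_000, payoffs=[50.0] + [0.0] * 19)
# Worst case keeps ~93% of capital; a single 50x winner on a
# $500 stake adds $25,000 on top of that.
```

The hyper-conservative slice caps the damage a negative Black Swan can do; the scatter of small bets buys exposure to positive ones.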
So what's our Barbell Strategy for aviation safety?
We need to invest in the things we know are closely related to bad outcomes - say, runway safety, CFIT, etc. - and we need to invest in addressing the conditions that can have a scalable impact on safety - e.g. poor training standards, inappropriate regulations, etc.
How much we should invest in each is an open question but the basic concept sounded pretty good to me. Actually, it almost sounded familiar...
Confirmation Bias? You Betcha!
The more I thought about Taleb's strategy in the aviation safety context, the more it sounded like scoring risk according to proximity and pathways. My still-incomplete concept of risk evaluation sought to identify the more critical risk conditions according to either their proximity to the ultimate outcome of death and destruction, or the number of pathways by which the risk condition could result in catastrophe.
Proximity applies to the non-scalable conditions that contribute to accident occurrence, ranking them higher the closer they sit to that ultimate outcome. This avoids those nasty prediction problems Taleb keeps talking about. Pathways covers the scalable conditions that may contribute to accident occurrence but where prediction of direct causal relationships is impossible; instead, you take the number of potential contributory pathways as a measure of criticality.
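To make that concrete, here is a purely hypothetical sketch of how such a scoring might look. The conditions and numbers are invented for illustration, and the method itself is, as I said, still half-formed:

```python
# (name, scalable?, proximity score 1-5 with 5 closest to the
#  accident, number of potential contributory pathways)
conditions = [
    ("landing gear not lowered",    False, 5,  1),
    ("unstable approach continued", False, 4,  1),
    ("deficient training policy",   True,  1, 40),
    ("ambiguous regulation",        True,  1, 25),
]

def criticality(name, is_scalable, proximity, pathways):
    # Non-scalable conditions are ranked by proximity to the outcome;
    # scalable ones by the number of pathways they could feed.
    return pathways if is_scalable else proximity

ranked = sorted(conditions, key=lambda c: criticality(*c), reverse=True)
# The training policy tops the list: it sits far from any single
# cockpit but could feed dozens of potential accidents.
```

The interesting part is that the two scales never need to be reconciled into a single predicted probability - which is exactly the kind of prediction Taleb argues we can't make.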
I have a few threads of thought coming together at the moment in this area. I'm excited to find out how they all tie together and whether I can get them out of my head and on to this blog.