
Thursday, 26 March 2015

The risk of flying


Norman Fenton, 26 March 2015

I have just done an interview on BBC Radio Scotland about aircraft safety in the light of the Germanwings crash - which now appears to have been a deliberate act of sabotage by the co-pilot*. I have uploaded a (not very good) recording of it here (mp3 file - just under 4 minutes) or here (a more compact m4a file).

Because this type of event is so rare, classical frequentist statistics provides no real help when it comes to risk assessment. In fact, it is exactly the kind of risk assessment problem for which you need causal models and expert judgement (as explained in our book) if you want any kind of risk insight.

Irrespective of this particular incident, the interview gave me the opportunity to highlight a very common myth, namely that “flying is the safest form of travel”. If you look at deaths per million travellers then, indeed, there are 50 times as many car deaths as plane deaths. However, this is a silly measure because there are so many more car travellers than plane travellers. So, typically, analysts use deaths per million miles travelled; on this measure car travel is still 'riskier' than air travel, but the car death rate is only about twice that for planes. But this measure is also biased in favour of planes, because the average plane journey is much longer than the average car journey.

So a much fairer measure is the number of deaths per passenger journey. And on this measure the rate of plane deaths is actually three times higher than the rate of car deaths; in fact only bikes and motorbikes are worse than planes.
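To make the point about denominators concrete, here is a minimal sketch (in Python) of how these measures are computed. All of the numbers are invented round figures, not real accident statistics; they are there only to show how the choice of denominator can change the comparison.

    # Minimal sketch: how the ranking of transport risks depends on the denominator.
    # All numbers are made-up illustrative values, NOT real accident statistics.
    modes = {
        # deaths per year; travellers, passenger-miles and passenger journeys in millions
        "car":   {"deaths": 1500, "travellers": 40, "miles": 250_000, "journeys": 25_000},
        "plane": {"deaths": 30,   "travellers": 20, "miles": 200_000, "journeys": 200},
    }

    for mode, d in modes.items():
        print(f"{mode:5s}: "
              f"{d['deaths'] / d['travellers']:8.3f} deaths per million travellers, "
              f"{d['deaths'] / d['miles']:10.6f} per million passenger-miles, "
              f"{d['deaths'] / d['journeys']:8.4f} per million passenger journeys")

With these invented figures, air travel looks better per passenger-mile but worse per passenger journey, which is exactly the kind of reversal the choice of measure can produce.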

Despite all this, there is still a very low probability of a plane journey resulting in fatalities - about 1 in half a million (and much lower for commercial flights in Western Europe). However, if we have reason to believe that, say, recent converts to a terrorist ideology have been training and becoming pilots, then the probability of the next plane journey resulting in fatalities becomes much higher, despite the past data.

*I had an hour’s notice of the interview and was told what I would be asked.  I was actually not expecting to be asked about how to assess the risk of this specific type of incident;  I was assuming I would only be asked about aircraft safety risk in general and about the safety record of the A320.

Postscript: Following the interview a colleague asked:
"Did you have the mental issues of the co-pilot on the radar when you replied? "
My response: Interesting question. A few years back we were involved extensively in work with NATS (National Air Traffic Services) to model/predict the risk of mid-air collision over UK airspace. In particular, NATS wanted to know how the probability of a mid-air collision might change given different proposals for changes to the ATM architecture (e.g. ‘adding new ground radar stations’ versus ‘adding new on-board collision alert systems’). Now - apart from three incidents in the late 1940s, which all involved at least one military jet - there have been no actual mid-air collisions over UK airspace (so negligible data there), and the proposed technology was ‘new’ (so no directly relevant data there), but there was a LOT of data on "near misses" of different degrees of seriousness and a LOT of expert judgment about the causes and circumstances of the near misses. Hence, we were able, with NATS experts, to build a very detailed model that could be ‘validated’ against the actual near-miss data.

What is very interesting is which factors NATS needed in the model. The psychological state and stress of air traffic controllers was included in the model, as were certain psychological traits of pilots. It turns out that certain airlines were more likely to be involved in near misses primarily because of traits of their pilots.
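To give a flavour of what such a model does (this is a toy sketch with invented probabilities, not the actual NATS model), the fragment below shows how factors such as controller stress and pilot traits can feed a conditional probability of a serious near miss, which is then combined with an expert-judged probability of escalation to produce a collision probability, even though there is no collision data to learn from.

    # Toy illustration (NOT the actual NATS model): controller stress and pilot
    # profile influence the chance of a serious near miss, which in turn
    # influences the (tiny) chance of a mid-air collision.
    # All probabilities below are invented for illustration only.
    from itertools import product

    p_stress_high = 0.2            # P(controller stress = high)   - hypothetical
    p_risky_pilot = 0.1            # P(pilot profile = risky)      - hypothetical

    # P(serious near miss | stress, pilot profile) - hypothetical table
    p_near_miss = {
        (True,  True):  0.0050,
        (True,  False): 0.0020,
        (False, True):  0.0015,
        (False, False): 0.0005,
    }

    # P(mid-air collision | near miss occurred?) - hypothetical expert judgement
    p_collision = {True: 0.001, False: 0.0000001}

    total = 0.0
    for stress, risky in product([True, False], repeat=2):
        p_parents = ((p_stress_high if stress else 1 - p_stress_high)
                     * (p_risky_pilot if risky else 1 - p_risky_pilot))
        p_nm = p_near_miss[(stress, risky)]
        # marginalise over whether a serious near miss occurs
        total += p_parents * (p_nm * p_collision[True] + (1 - p_nm) * p_collision[False])

    print(f"P(mid-air collision per flight) = {total:.3e}")

The structure, rather than the numbers, is the point: the near-miss layer is where the plentiful data and expert judgment enter, and the collision probability is inferred from it.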

Tuesday, 24 March 2015

The problem with big data and machine learning


The advent of ‘big data’, coupled with fancy statistical machine learning techniques, is increasingly seducing people to believe that new insights and better predictions can be achieved in a wide range of important applications, without relying on the input of domain experts. The applications range from learning how to retain customers through to learning what makes people susceptible to particular diseases. I have written before about the dangers of this kind of 'learning' from data alone (no matter how 'big' the data is).

Contrary to the narrative being sold by the big data community, if you want accurate predictions and improved decision-making then, invariably, you need to incorporate human knowledge and judgment. This enables you to build rational causal models based on 'smart' data. The main objections to using human knowledge - that it is subjective and difficult to acquire - are, of course, key drivers of the big data movement. But this movement underestimates the typically very high costs of collecting, managing and analysing big data. So the sub-optimal outputs you get from pure machine learning do not even come cheap.

To clarify the dangers of relying on big data and machine learning, and to show how smart data and causal modelling (using Bayesian networks) give you better results, we have collected together a set of short stories and examples.
The whole subject of 'smart data' rather than 'big data' is also the focus of the BAYES-KNOWLEDGE project.

Tuesday, 3 March 2015

The Statistics of Climate Change


From left to right: Norman Fenton, Hannah Fry, David Spiegelhalter. Link to the Programme's BBC website
Norman Fenton, 3 March 2015 (This is a cross posting of the article here)

I had the pleasure of being one of the three presenters of the BBC documentary called “Climate Change by Numbers”  (first) screened on BBC4 on 2 March 2015.

The motivation for the programme was to take a new look at the climate change debate by focusing on three key numbers that all come from the most recent IPCC report. The numbers were:
  • 0.85 degrees - the amount of warming the planet has undergone since 1880
  • 95% - the degree of certainty climate scientists have that at least half the warming in the last 60 years is man-made
  • one trillion tonnes - the cumulative amount of carbon that can be burnt, ever, if the planet is to stay below ‘dangerous levels’ of climate change
The idea was to get mathematicians/statisticians who had not been involved in the climate change debate to explain in lay terms how and why climate scientists had arrived at these three numbers. The other two presenters were Dr Hannah Fry (UCL) and Prof Sir David Spiegelhalter (Cambridge) and we were each assigned approximately 25 minutes on one of the numbers. My number was 95%.

Being neither a climate scientist nor a classical statistician (my research uses Bayesian probability rather than classical statistics to reason about uncertainty) I have to say that I found the complexity of the climate models and their underlying assumptions to be daunting. The relevant sections in the IPCC report are extremely difficult to understand and they use assumptions and techniques that are very different to the Bayesian approach I am used to. In our Bayesian approach we build causal models that combine prior expert knowledge with data. 
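As a minimal illustration of what 'combining prior expert knowledge with data' means in practice (a generic textbook-style beta-binomial example, nothing to do with the IPCC models), the following sketch shows an expert prior and observed data being combined into a posterior estimate.

    # Minimal sketch of the Bayesian idea of combining a prior with data
    # (a generic beta-binomial example; the numbers are invented).

    # Expert prior belief about some unknown proportion, expressed as a Beta(a, b)
    a_prior, b_prior = 4, 6          # hypothetical prior with mean 0.4

    # Observed data: counts of successes and failures
    successes, failures = 30, 20

    # Conjugate update: the posterior is again a Beta distribution
    a_post, b_post = a_prior + successes, b_prior + failures

    prior_mean = a_prior / (a_prior + b_prior)
    data_mean = successes / (successes + failures)
    post_mean = a_post / (a_post + b_post)

    print(f"prior mean = {prior_mean:.3f}, data-only estimate = {data_mean:.3f}, "
          f"posterior mean = {post_mean:.3f}")

The posterior mean sits between the prior mean and the data-only estimate, with the data increasingly dominating as more observations arrive.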

In attempting to understand and explain how the climate scientists had arrived at their 95% figure I used a football analogy – both because of my lifetime interest in football and because, along with my colleagues Anthony Constantinou and Martin Neil, we have worked extensively on models for football prediction. The climate scientists had performed what is called an “attribution study” to understand the extent to which different factors – such as human CO2 emissions – contributed to changing temperatures. The football analogy was to understand the extent to which different factors contributed to the changing success of premiership football teams, as measured by the total number of points they achieved season by season. In contrast to our normal Bayesian approach – but consistent with what the climate scientists did – we used data and classical statistical methods to generate a model of success in terms of the various factors. Unlike the climate models, which involve thousands of variables, we had to restrict ourselves to a very small number of variables (due to a combination of time limitations and lack of data). Specifically, for each team and each year we considered:
  • Wages (this was the single financial figure we used)
  • Total days of player injuries
  • Manager experience
  • Squad experience
  • Number of new players
The statistical model generated from these factors produced, for most teams, a good fit to success over the years for which we had the data. Our ‘attribution study’ showed that wages was by far the major influence. When wages was removed from the study, the resulting statistical model was not a good fit. This was analogous to what the climate scientists’ models were showing when the human CO2 emissions factor was removed from their models; the previously good fit to temperature was no longer evident. And, analogous to the climate scientists’ 95% derived from their models, we were able to conclude there was a 95% chance that an increase in wages of 10 per cent would result in at least one extra premiership point. (Update: note that this was a massive simplification made for the analogy. I am certainly not claiming that increasing wages causes an increase in points. If I had had the time I would have explained that, in a proper model - like the Bayesian networks we have previously built - the wages offered are one of many factors influencing the quality of players that can be bought, which, in turn, along with other factors, influences performance.) A minimal sketch of this kind of attribution exercise is given below.
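The sketch uses synthetic data and ordinary least squares (via numpy); the data and coefficients are invented, and this is not the model built for the programme. It only illustrates the comparison of the fit with all factors against the fit with the dominant factor removed.

    # A minimal sketch of an 'attribution study' in the spirit described above.
    # The data are synthetic; this is NOT the data or model used for the programme.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200  # hypothetical team-seasons

    wages       = rng.normal(50, 15, n)     # hypothetical units
    injuries    = rng.normal(300, 80, n)
    manager_exp = rng.normal(5, 3, n)
    squad_exp   = rng.normal(27, 2, n)
    new_players = rng.poisson(6, n)

    # Synthetic 'truth': wages dominate, plus noise
    points = (20 + 0.9 * wages - 0.02 * injuries + 1.0 * manager_exp
              + 0.5 * squad_exp - 0.3 * new_players + rng.normal(0, 5, n))

    def r_squared(X, y):
        """Fit y = X b (with intercept) by least squares and return the R^2."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1 - resid.var() / y.var()

    full = np.column_stack([wages, injuries, manager_exp, squad_exp, new_players])
    no_wages = np.column_stack([injuries, manager_exp, squad_exp, new_players])

    print(f"R^2 with all factors:   {r_squared(full, points):.3f}")
    print(f"R^2 with wages removed: {r_squared(no_wages, points):.3f}")

Because the synthetic 'truth' makes wages the dominant driver, the fit drops sharply when wages is excluded - the same qualitative pattern described above.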

Obviously there was no time in the programme to explain either the details or the limitations of my hastily put-together football attribution study, and I will no doubt receive criticism for it (I am preparing a detailed analysis). But the programme also did not have the time or scope to address the complexity of some of the broader statistical issues involved in the climate debate (including issues that lead some climate scientists to claim the 95% figure is underestimated and others to believe it is overestimated). In particular, the issues that were not covered were:
  • The real probabilistic meaning of the 95% figure. In fact it comes from a classical hypothesis test in which observed data is used to test the credibility of the ‘null hypothesis’. The null hypothesis is the ‘opposite’ statement to the one believed to be true, i.e. ‘Less than half the warming in the last 60 years is man-made’. If, as in this case, there is only a 5% probability of observing the data if the null hypothesis is true, statisticians equate this figure (called a p-value) to a 95% confidence that we can reject the null hypothesis. But the probability here is a statement about the data given the hypothesis. It is not generally the same as the probability of the hypothesis given the data (in fact equating the two is often referred to as the ‘prosecutor’s fallacy’, since it is an error often made by lawyers when interpreting statistical evidence). A small numerical illustration of the difference is given after this list. See here and here for more on the limitations of p-values and confidence intervals.
  • Any real details of the underlying statistical methods and assumptions. For example, there has been controversy about the way a method called principal component analysis was used to create the famous hockey stick graph that appeared in previous IPCC reports. Although the problems with that method were recognised, it is not obvious how, or whether, they have been avoided in the most recent analyses.
  • Assumptions about the accuracy of historical temperatures. Much of the climate debate (such as that concerning whether the recent rate of temperature increase is exceptional) depends on assumptions about historical temperatures dating back thousands of years. There has been some debate about whether sufficiently wide uncertainty ranges were used for these historical estimates.
  • Variety and choice of models. There are many common assumptions in all of the climate models used by the IPCC and it has been argued that there are alternative models not considered by the IPCC which provide an equally good fit to climate data, but which do not support the same conclusions.
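As promised above, here is a toy calculation (with invented numbers, unrelated to the climate analysis) of why P(data | hypothesis) is not the same as P(hypothesis | data): turning a p-value-style 5% into a probability for the hypothesis itself requires, via Bayes' theorem, assumptions about the prior and about how likely the data would be under the alternative.

    # Toy illustration of why P(data | null hypothesis) differs from P(null | data).
    # All numbers are invented for the example; nothing relates to the climate analysis.

    p_data_given_null = 0.05   # p-value-style quantity: P(data this extreme | H0)
    p_data_given_alt  = 0.60   # how likely the same data would be if H0 were false (assumed)
    prior_null        = 0.50   # prior probability of H0 before seeing the data (assumed)

    # Bayes' theorem: P(H0 | data)
    posterior_null = (p_data_given_null * prior_null) / (
        p_data_given_null * prior_null + p_data_given_alt * (1 - prior_null)
    )

    print(f"P(data | H0) = {p_data_given_null:.3f}")
    print(f"P(H0 | data) = {posterior_null:.3f}  # not 0.05, and it depends on the prior")

With these particular (assumed) inputs the posterior probability of the null hypothesis is about 0.077, not 0.05, and different priors would give different answers.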
Although I obviously have a bias, my enduring impression from working on the programme is that the scientific discussion about the statistics of climate change would benefit from a more extensive Bayesian approach. Recently some researchers have started to do this, but it is an area where I feel causal Bayesian network models could shed further light and this is something that I would strongly recommend.

Acknowledgements: I would like to thank the BBC team (especially Jonathan Renouf, Alex Freeman, Eileen Inkson, and Gwenan Edwards) for their professionalism, support, encouragement, and training; and my colleagues Martin Neil and Anthony Constantinou for their technical support and advice. 

My fee for presenting the programme has been donated to the charity Magen David Adom.
Watching the programme as it is screened