
Sunday, 26 July 2015

Winchester Science Festival: Fallacies of Probability and Risk (a report by Norman Fenton)




 
Norman Fenton at the Winchester Science Festival

I had the privilege of being an invited speaker today at this weekend's annual Winchester Science Festival, presenting a talk on "Fallacies of Probability and Risk" (the slides of which can be downloaded from here).

It's the first time I have spoken at one of these big 'popular science' events and I was very impressed by it. There were people of all age groups attending both the talks and the various activities in the reception area, and the audience was really enthusiastic and responsive.

Norman Fenton during his talk
Judging by the other talk I managed to attend before mine (coincidentally given by a former Queen Mary student, Marcus Chown**, and sharing the title of his latest book) and the event I attended last night (see below), it was clear that this festival was a high-quality, well-organised event.

Festival of the Spoken Nerd: Matt Parker of QM in the centre
I arrived in time last night for a performance of the Festival of the Spoken Nerd. This amazing show consists of three scientists/entertainers: stand-up mathematician Matt Parker (who happens to be Maths Outreach Coordinator at Queen Mary), experiments maestro Steve Mould and geek songstress/physicist Helen Arney. They manage to provide 90 minutes of quality humour and mathematics education at the same time. They were actually previewing their new show, which they will be performing at the Edinburgh Festival and on a UK tour. I strongly recommend you go and see it.

**Marcus's website is here.

Friday, 17 July 2015

The use of Bayes in the Netherlands Appeal Court


Henry Prakken
Norman Fenton, 17 July 2015

There has been an important development in the use of Bayes in the Law in the Netherlands, with what is possibly the first full Bayesian analysis of a major crime in an appeal court there.

The case, referred to as the “Breda 6”, concerned the 1993 murder of a Chinese woman in Breda in her son’s restaurant. Six young people were convicted of the crime and sentenced to up to 10 years in jail (all have since completed their sentences). In 2012 the advocate general recommended that the case be looked at again because the convictions centred on confessions which may have been false.

In the review of the case a Bayesian argument supporting the prosecution case was presented by Frans Alkemade (Update: see below about some concerns Frans has about this article). Frans is actually a physicist who previously used Bayes to analyse a drugs trafficking case, concluding, in a report commissioned by the prosecution, that there was not enough evidence for a conviction (the suspect was acquitted). The court requested that Henry Prakken (professor in Legal Informatics and Legal Argumentation at the University of Groningen) respond to the Bayesian argument.

In June, while he was preparing his response, I met Henry at the International Conference on AI and the Law in San Diego. Henry told me about the case and Alkemade's analysis, in which the guilt hypothesis was "At least some of the six suspects were involved in the crime, which took place after 4:30 on the night of 3 and 4 July 1993, and which included the luring of the victim to the restaurant by at least some of the female suspects". Alkemade interpreted "involved" in a weak way, and Henry said:
"..it could even be no more than just knowing about the crime. In fact, one of my points of criticism was that this guilt hypothesis is not useful for the court, since it is consistent with the innocence of any of the individual suspects (and even with the collective innocence of all three male suspects)."
Among other things, Alkemade focused on two pieces of evidence:
  1. A report by the Criminal Intelligence Unit (CID) of the Dutch police, saying that they had received information from "usually reliable" sources identifying two of the male defendants and one of the female defendants as being involved in the murder**.
  2. The subsequent discovery that two of the female defendants (not mentioned in the CID report and who supposedly knew the three defendants mentioned in the CID report) worked next door to the murder scene.  
From Henry's description of the analysis, it seemed that Alkemade did not account for all relevant unknown variables and dependencies*** (also see the update), such as the possibility that the anonymous tip-off was both malicious and prompted by the fact that the caller knew the defendants worked next door to the murder scene (which would make the tip-off more believable). If so, the combination of the two pieces of evidence would not have been such an incredible coincidence if the defendants were innocent, and in that sense the Bayesian argument was over-simplistic. On the other hand, it was also too complex for lawyers to understand, since it was presented 'from first principles', with all of the detailed Bayesian inference calculations spelled out. For the reasons we have explained in detail here, it seemed that a Bayesian network (BN) model would be far more suitable. I therefore produced - in discussions with Henry - a generic BN model to reason about the impact of anonymous evidence when combined with other evidence that can influence the anonymous tip-off (the model is here and can be run using the free AgenaRisk software).

The intention was not to replicate all of the features of the case but rather to demonstrate the impact of missing dependencies in Alkemade's argument. Indeed, with a range of reasonable assumptions, the BN model pointed to a much lower probability of guilt than suggested by Alkemade's calculations.
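To make the effect of the missing dependency concrete, here is a minimal sketch (in Python, with purely illustrative probabilities - it is not the AgenaRisk model linked above) of a three-node version of the problem: guilt G, the 'worked next door' evidence W, and the anonymous tip-off T. The only difference between the two runs is whether the probability of an innocent (possibly malicious) tip-off is allowed to depend on the informant knowing that the defendants worked next door.

```python
# Minimal illustrative sketch - NOT the actual case model. All numbers are
# made-up assumptions chosen only to show the direction of the effect.
# G = guilt hypothesis, W = defendants worked next door (observed),
# T = anonymous tip-off naming the defendants (observed).

def posterior_guilt(p_g, p_w_given_g, p_t_given_gw):
    """P(G=True | W=True, T=True) by direct enumeration."""
    joint = {}
    for g in (True, False):
        prior = p_g if g else 1 - p_g
        # condition on the observed evidence W=True, T=True
        joint[g] = prior * p_w_given_g[g] * p_t_given_gw[(g, True)]
    return joint[True] / (joint[True] + joint[False])

p_g = 0.01                              # illustrative prior probability of guilt
p_w_given_g = {True: 0.5, False: 0.05}  # P(worked next door | guilty / innocent)

# (a) Tip-off treated as independent of W given guilt or innocence
t_independent = {(True, True): 0.3, (True, False): 0.3,
                 (False, True): 0.005, (False, False): 0.005}

# (b) Dependency modelled: a malicious tip-off under innocence is much more
#     likely if the informant knows the defendants work next door
t_dependent = {(True, True): 0.3, (True, False): 0.3,
               (False, True): 0.05, (False, False): 0.005}

print("P(guilt | both items), independence assumed:",
      round(posterior_guilt(p_g, p_w_given_g, t_independent), 3))
print("P(guilt | both items), dependency modelled :",
      round(posterior_guilt(p_g, p_w_given_g, t_dependent), 3))
```

With these particular made-up numbers the posterior drops from about 0.86 to about 0.38, but the precise values are not the point: the 'incredible coincidence' of the two items occurring together largely evaporates once the dependency is modelled, which is the qualitative behaviour the BN model exhibited.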

Henry presented his response in court last week. He said:
"The court session was sometimes frustrating, since the discussion was fragmentary and sometimes I had the impression that the court felt it was confronted with a battle of the experts without the means to understand who was right."
In my view this case confirms our claim that presenting a Bayesian legal argument from first principles (as Alkemade did) is not a good idea. Ironically, the very assumption that this is necessary for Bayes to be accepted is itself the reason there will continue to be very strong resistance to accepting Bayes in the courtroom. Why? Because it restricts you to ludicrously over-simplistic (and normally flawed) models of the case (three unknowns at most), since that is the limit of the Bayesian calculations you can do by hand and explain clearly. Our proposed solution is to model (and run) the case properly using a BN and report back on the results, stating in lay terms what the model assumptions were and how sensitive the conclusions are to different prior assumptions.

**Henry told me that there was something funny with the CID report, as also noted by the advocate general. According to the CID report, the anonymous informant had also accused the defendants mentioned in the report of committing several other crimes, but in the investigations preceding the revision case the police investigators had not been able to find any confirmation of these other crimes, not even reports of these supposed crimes to the police by the supposed victims. As the advocate general stated in 2012, this casts doubt on the reliability of the CID informants.

***My colleague Richard Gill - who knows Alkemade - says that Alkemade was careful to define his pieces of evidence in such a way that, he believes, the independence assumptions he needs can be justified, allowing him at least to conservatively bound the likelihood ratio coming from each piece of evidence in turn.
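For readers unfamiliar with how likelihood ratios combine, a small illustration with entirely made-up numbers may help: under the independence assumption the likelihood ratios of the separate items of evidence simply multiply, which is exactly why that assumption carries so much weight.

```python
# Made-up illustration: combining likelihood ratios (LRs) under independence.
# If the items are positively dependent under innocence, the true combined LR
# (and hence the posterior) is lower than this calculation suggests.

prior_odds = 1 / 99        # illustrative prior odds of guilt (1:99)
lr_tip_off = 20            # assumed LR for the anonymous tip-off
lr_next_door = 10          # assumed LR for the "worked next door" evidence

posterior_odds = prior_odds * lr_tip_off * lr_next_door  # valid only if independent
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"Posterior probability of guilt (independence assumed): {posterior_prob:.2f}")
```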

UPDATE 21 July 2015: Frans has contacted me with a number of concerns about the above narrative, and he has provided several technical insights that I was not aware of. As an expert witness in a case that is still under trial, he does not feel free to discuss any details in public, but once the trial has finished I will provide an updated report that incorporates his comments. What I can confirm, however, is that in order to do the calculations manually Frans could not model dependencies between different pieces of evidence - a major limitation - although he did make clear the limitations and pitfalls of this in his report.


Thursday, 9 July 2015

Why target setting leads to poor decision-making


Norman Fenton is the co-author of an article in Nature published today that addresses the issue of improved decision-making in the context of international sustainable development goals. The article pushes for a Bayesian, smart-data approach:

We contend that target-setting is flawed, costly and could have little — or even negative — impact. First, targets may have unintended consequences. For example, education quality as a whole suffered in some countries that diverted resources to early schooling to meet the target of the Millennium Development Goal (MDG) of achieving universal primary education.

Second, target-setting inhibits learning by focusing efforts on meeting the target rather than solving the problem. The milestones are easily manipulated — aims such as halving deaths from road-traffic accidents can trigger misreporting if the performance falls short or encourage underperformance if the goal can be exceeded.

Third, it is costly: development partners will have to reallocate scant resources for a 'data revolution' that will cost an estimated US$1 billion a year.

We advocate a different approach. Governments and the development community need to embrace decision-analysis concepts and tools that have been used for decades in mining, oil, cybersecurity, insurance, environmental policy and drug development.
The approach is based on five principles:
  1. Replace targets with measures of investment return
  2. Model intervention decisions
  3. Integrate expert knowledge
  4. Include uncertainty in predictive models
  5. Measure the most informative variables
Recommendations include the following:
It is a common mistake to assume that 'evidence' is the same as 'data' or that 'subjective' means 'uninformative'. Decision-making should draw on all appropriate sources of evidence. In developing countries where data are sparse, expert knowledge can fill the gaps. For instance, in our assessment of the viability of agroforestry projects in Africa, we used our experience to set ranges on tree-survival rates, costs of raising tree seedlings and farm prices of tree products.
 ....
Decision theorists and local experts will have to work together to identify relevant variables, causal associations and uncertainties. The most widely accepted method of incorporating knowledge for probability assessment is Bayes' theorem. This updates the likelihood of a belief in some event (such as whether an intervention will reduce poverty) when observing new evidence about the event (such as the occurrence of drought). Bayesian analyses — incorporating historical data and expert judgement — are used in transport and systems-safety assessments, medical diagnosis, operational risk assessment in finance and in forensics, but seldom in development. They should be used, for example, to evaluate the relative risks of competing development interventions. 
 ....
Decision-makers .. should employ probabilistic decision analysis, for example Monte Carlo simulations or Bayesian network models. Provided that such models are developed using properly calibrated expert judgement and decision-focused data, they can incorporate the key factors and outcomes and the causal relationships between them. For instance, simulations for evaluating options for building a water pipeline could take into account rare 'what-if' scenarios, such as a hurricane during development, and predict (with probabilities) the time and cost of implementation and the benefits of improved water supply.
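To give a flavour of what this kind of probabilistic decision analysis looks like in practice, here is a minimal Monte Carlo sketch for the water-pipeline example quoted above. Everything in it - the distributions, the 5% hurricane probability, the time and cost figures - is a made-up assumption for illustration; the article itself does not specify any numbers.

```python
# Minimal Monte Carlo sketch for the water-pipeline example quoted above.
# All distributions and numbers are made-up assumptions for illustration only.
import random

def simulate_pipeline(n_runs=100_000, seed=42):
    random.seed(seed)
    times, costs = [], []
    for _ in range(n_runs):
        months = random.gauss(mu=24, sigma=4)   # baseline build time (months)
        cost = random.gauss(mu=50, sigma=8)     # baseline cost ($ million)
        if random.random() < 0.05:              # rare what-if: hurricane during development
            months += random.uniform(3, 12)     # additional delay
            cost += random.uniform(5, 20)       # additional cost
        times.append(months)
        costs.append(cost)
    times.sort()
    costs.sort()
    pct = lambda xs, p: xs[int(p * len(xs))]
    print(f"Time: median {pct(times, 0.5):.1f} months, "
          f"95th percentile {pct(times, 0.95):.1f} months")
    print(f"Cost: median ${pct(costs, 0.5):.1f}m, "
          f"95th percentile ${pct(costs, 0.95):.1f}m")
    print(f"P(time exceeds 30 months): {sum(t > 30 for t in times) / n_runs:.1%}")

simulate_pipeline()
```

A real analysis would also model the benefits of the improved water supply and feed properly calibrated expert judgement into the distributions; the point of the sketch is simply that rare 'what-if' scenarios are carried through to the outputs as probabilities rather than being hidden behind a single point target.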