
Tuesday, 4 September 2018

It's finally arrived...

Still waiting to get our own copies of the second edition of the book, but one of our PhD students just received his copy, so it is real! Note that sample chapters and lots of other resources are available on the book's blog. The first edition (published Dec 2012) has 437 Google Scholar citations and many dozens of 5-star reviews on Amazon.

Friday, 24 August 2018

Second Edition of our book to be published 28 August 2018




From the back cover of the Second Edition:

************************************************
"The single most important book on Bayesian methods for decision analysts" —Doug Hubbard (author in decision sciences and actuarial science)  
"The book provides sufficient motivation and examples (as well as the mathematics and probability where needed from scratch) to enable readers to understand the core principles and power of Bayesian networks." —Judea Pearl (Turing award winner)  
"The lovely thing about Risk Assessment and Decision Analysis with Bayesian Networks is that it holds your hand while it guides you through this maze of statistical fallacies, p-values, randomness and subjectivity, eventually explaining how Bayesian networks work and how they can help to avoid mistakes.” —Angela Saini (award-winning science journalist, author & broadcaster)
Since the first edition of this book was published, Bayesian networks have become even more important for applications in a vast array of fields. This second edition includes new material on influence diagrams, learning from data, value of information, cybersecurity, debunking bad statistics, and much more. Focusing on practical real-world problem-solving and model building, as opposed to algorithms and theory, it explains how to incorporate knowledge with data to develop and use (Bayesian) causal models of risk that provide more powerful insights and better decision making than is possible from purely data-driven solutions.

Features
  • Provides all tools necessary to build and run realistic Bayesian network models
  • Supplies extensive example models based on real risk assessment problems in a wide range of application domains; for example, finance, safety, systems reliability, law, forensics, cybersecurity and more
  • Introduces all necessary mathematics, probability, and statistics as needed
  • Establishes the basics of probability, risk, and building and using Bayesian network models, before going into the detailed applications
A dedicated website contains exercises and worked solutions for all chapters along with numerous other resources. The AgenaRisk software contains a model library with executable versions of all of the models in the book. Lecture slides are freely available to accredited academic teachers adopting the book on their course.

************************************************
Sample chapters are available on the book's website

Wednesday, 25 July 2018

Updating Prior Beliefs Based on Ambiguous Evidence


Suppose two nations, North Bayesland and South Bayesland, are independently testing new missile technology. Each has made six detonation attempts: North Bayesland has been successful once and South Bayesland four times. You observe another detonation on the border between the two countries but cannot determine the source. Based only on the provided information:
  1. What is the probability that North (or South) Bayesland is the source of this missile? 
  2. What is your best estimate of the propensity for success of North and South Bayesland after this latest observation (i.e. the probability, for each nation, that a future missile they launch will detonate)?
The general form of this problem arises in many areas of life. But how well do people answer such questions?

Our paper "Updating Prior Beliefs Based on Ambiguous Evidence", which was accepted at the prestigious 40th Annual Meeting of the Cognitive Science Society (CogSci 2018) in Madison, Wisconsin, addresses this problem. Stephen Dewitt (former QMUL PhD student) is presenting the paper on 27 July. 

First of all, the normative answer to Question 1 - based on simple Bayesian reasoning - is 20% for North Bayesland and 80% for South Bayesland: if each nation is equally likely a priori to be the source, the posterior is proportional to each nation's observed success rate (1/6 versus 4/6), giving 1/5 and 4/5. But Question 2 is much more complex because we cannot assume the small amount of data on previous detonation attempts represents a 'fixed' propensity of success (the normative Bayesian solution requires a non-trivial Bayesian network that models our uncertainty about the success propensities).
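For readers who want to see the arithmetic, here is a minimal sketch in Python. This is not the Bayesian network from the paper: the uniform priors on the propensities, the grid approximation, and the 50/50 prior on the source are illustrative assumptions.

```python
import numpy as np

# --- Question 1: simple Bayesian reasoning with the observed rates ---
# Assumed 50/50 prior on the source; posterior proportional to
# prior * observed success rate.
prior = np.array([0.5, 0.5])              # [North, South]
rates = np.array([1 / 6, 4 / 6])          # observed success rates
posterior_source = prior * rates / (prior * rates).sum()
print(posterior_source)                   # [0.2 0.8] -> 20% / 80%

# --- Question 2: treat each propensity as uncertain ---
# Uniform prior over each nation's propensity theta, updated on the six
# earlier attempts, then on the ambiguous detonation (unknown source).
theta = np.linspace(0.0005, 0.9995, 1000)  # grid over propensity values
post_n = theta ** 1 * (1 - theta) ** 5     # North: 1 success in 6 (unnormalised)
post_s = theta ** 4 * (1 - theta) ** 2     # South: 4 successes in 6
post_n /= post_n.sum()
post_s /= post_s.sum()

# Probability each nation produced the ambiguous detonation:
w = np.array([0.5 * (post_n * theta).sum(),   # evidence if North fired
              0.5 * (post_s * theta).sum()])  # evidence if South fired
w /= w.sum()
print(w)  # note: under propensity uncertainty this shifts from [0.2, 0.8]

# Each nation's posterior is a mixture: updated if it fired, unchanged if not.
mean_n = (w[0] * (post_n * theta ** 2).sum() / (post_n * theta).sum()
          + w[1] * (post_n * theta).sum())
mean_s = (w[1] * (post_s * theta ** 2).sum() / (post_s * theta).sum()
          + w[0] * (post_s * theta).sum())
print(mean_n, mean_s)  # roughly 0.27 and 0.65
```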

Based on experiments involving 250 paid participants, we discovered two types of errors in the answers.
  1. There was a ‘double updating’ error: individuals appear to first use their prior beliefs to interpret the evidence, then use the interpreted form of the evidence, rather than the raw form, when updating. 
  2. We found an error where individuals convert from a probabilistic representation of the evidence to a categorical one and use this representation when updating. 
Both errors have the effect of exaggerating the evidence in favour of the solver's prior belief and could lead to confirmation bias and polarisation. Given the importance of the class of problems to which the study applies, we believe that greater understanding of the cognitive processes underlying these errors is an important avenue for future study.

Full paper details and pdf (also available here):
Dewitt, S., Lagnado, D., & Fenton, N. E. (2018). "Updating Prior Beliefs Based on Ambiguous Evidence", CogSci 2018, Madison, Wisconsin, 25-28 July 2018.
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), under Contract [2017-16122000003]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Funding was also provided by the ERC project ERC-2013-AdG339182-BAYES_KNOWLEDGE and the Leverhulme Trust project RPG-2016-118 CAUSAL-DYNAMICS.

UPDATE: Stephen Dewitt presenting the paper in Madison:




Saturday, 14 July 2018

How to handle uncertain priors in Bayesian reasoning

In the classic simple Bayesian problem we have:
  • a hypothesis H (such as 'person has specific disease') with a prior probability (say 1 in 1000) and
  • evidence E (such as a test result which may be positive or negative for the disease) for which we know the probability of E given H (for example, the probability of a false positive is 5% and the probability of a false negative is 0%).
With those particular values, Bayes' theorem tells us that a randomly selected person who tests positive has a 1.96% probability of having the disease.
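The arithmetic, as a minimal sketch:

```python
# Worked arithmetic for the example above, via Bayes' theorem.
p_h = 1 / 1000          # prior: P(disease)
p_e_given_h = 1.0       # P(positive | disease): false negative rate is 0%
p_e_given_not_h = 0.05  # P(positive | no disease): false positive rate is 5%

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)  # P(positive test)
p_h_given_e = p_e_given_h * p_h / p_e
print(f"{p_h_given_e:.2%}")  # 1.96%
```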

But what if there is uncertainty about the prior probabilities (i.e. the 1 in 1000, the 5% and the 0%)? Maybe the 5% means 'anywhere between 0% and 10%'. Maybe the 1 in 1000 means we only saw it once in 1000 people. This new technical report explains how to properly incorporate uncertainty about the priors using a Bayesian network.
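The report itself uses a Bayesian network; purely to illustrate the same idea, here is a minimal Monte Carlo sketch in which each point-valued prior is replaced by a distribution expressing our uncertainty about it. The specific distributions below are assumptions made for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000  # Monte Carlo samples

# '1 in 1000' read as '1 case observed in 1000 people': with a uniform
# prior on the prevalence this gives a Beta(2, 1000) posterior.
p_h = rng.beta(2, 1000, n)
# '5% false positive rate' read as 'anywhere between 0% and 10%'.
p_e_given_not_h = rng.uniform(0.0, 0.10, n)
p_e_given_h = 1.0  # false negative rate still taken as exactly 0%

posterior = (p_e_given_h * p_h
             / (p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)))
print(posterior.mean())                       # average P(disease | positive)
print(np.percentile(posterior, [2.5, 97.5]))  # spread induced by uncertain priors
```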


Fenton, N. E. (2018). "Handling Uncertain Priors in Basic Bayesian Reasoning", July 2018, DOI 10.13140/RG.2.2.16066.89280

Friday, 13 July 2018

How much do we trust academic 'experts'?


Queen Mary has released the following press release about our new paper: Osman, M., Fenton, N. E., Pilditch, T., Lagnado, D. A., & Neil, M. (2018). "Who do we trust on social policy interventions", to appear next week in Basic and Applied Social Psychology. The preprint of the paper is here. There are already a number of press reports on it (see below).

People trust scientific experts more than the government even when the evidence is outlandish


Members of the public in the UK and US have far greater trust in scientific experts than the government, according to a new study by Queen Mary University of London. In three large scale experiments, participants were asked to make several judgments about nudges - behavioural interventions designed to improve decisions in our day-to-day lives.

The nudges were introduced either by a group of leading scientific experts or a government working group consisting of special interest groups and policy makers. Some of the nudges were real and had been implemented, such as using catchy pictures in stairwells to encourage people to take the stairs, while others were fictitious and actually implausible, like stirring coffee anti-clockwise for two minutes to avoid any cancerous effects.

The study, published in Basic and Applied Social Psychology, found that trust was higher for scientists than the government working group, even when the scientists were proposing fictitious nudges. Professor Norman Fenton, from Queen Mary’s School of Electronic Engineering and Computer Science, said: “While people judged genuine nudges as more plausible than fictitious nudges, people trusted some fictitious nudges proposed by scientists as more plausible than genuine nudges proposed by government. For example, people were more likely to trust the health benefits of coffee stirring than exercise if the former was recommended by scientists and the latter by government.”

The results also revealed that there was a slight tendency for the US sample to find the nudges more plausible and more ethical overall compared to the UK sample. Lead author Dr Magda Osman from Queen Mary’s School of Biological and Chemical Sciences, said: “In the context of debates regarding the loss of trust in experts, what we show is that in actual fact, when compared to a government working group, the public in the US and UK judge scientists very favourably, so much so that they show greater levels of trust even when the interventions that are being proposed are implausible and most likely ineffective. This means that the public still have a high degree of trust in experts, in particular, in this case, social scientists.” She added: “The evidence suggests that trust in scientists is high, but that the public are sceptical about nudges in which they might be manipulated without them knowing. They consider these as less ethical, and trust the experts proposing them less than with nudges in which they do have an idea of what is going on.”

Nudges have become highly popular decision-support methods used by governments to help in a wide range of areas such as health, personal finances, and general wellbeing. The scientific claim is that, to help people make better decisions regarding their lifestyle choices (and those that improve the welfare of the state), it is potentially effective to subtly change the framing of the decision-making context so that the option which maximises long-term future gains is more prominent. In essence, the position adopted by nudge enthusiasts is that poor social outcomes are often the result of poor decision-making, and in order to address this, behavioural interventions such as nudges can be used to reduce the likelihood of poor decisions being made in the first place.

Dr Osman said: “Overall, the public make pretty sensible judgments, and what this shows is that people will scrutinize the information they are provided by experts, so long as they are given a means to do it. In other words, ask the questions in the right way, and people will show a level of scrutiny that is often not attributed to them. So, before there are strong claims made about public opinion about experts, and knee-jerk policy responses to this, it might be worth being a bit more careful about how the public are surveyed in the first place.”
Press reports:
  • The Daily Record: Stirred by science

Tuesday, 3 July 2018

How Bayesian Networks are pioneering the ‘smart data’ revolution

The July issue of Open Access Government has a 2-page article summarising our recent research and tool developments on Bayesian networks. A high-res pdf of the article can be found here or here.



Thursday, 28 June 2018

Guilty Until Proven Innocent: The Crisis in Our Justice System



As mentioned in my previous posting, I was invited by Jon Robins (the Justice Gap) to speak at the third meeting of the All-Party Parliamentary Group on Miscarriages of Justice, hosted by Barry Sheerman MP, in the House of Commons on 25 June 2018. The meeting was based around the launch of Jon Robins' outstanding new book "Guilty Until Proven Innocent: The Crisis in Our Justice System". Other speakers were: Michael Mansfield QC and lawyer Matt Foot, who have been involved in many of the cases described in the book; Waney Squier, the world-renowned neuropathologist who suffered for being one of the few medical experts to question the mainstream medical guidelines on 'shaken baby syndrome'; Gloria Morrison, who spoke about the problems of Joint Enterprise relevant to some of the cases; and Liam Allan and Eddie Gilfoyle, who spoke about their own experiences (theirs are two of the cases discussed in the book). It was a very powerful and informative meeting which was very well attended (with many having to stand for the full two hours).

I have now written a detailed review of the book, which includes more about the House of Commons meeting. (Note: an updated version, which fixes some errors in the ResearchGate version, is available here.)



See also