
Wednesday, 25 July 2018

Updating Prior Beliefs Based on Ambiguous Evidence


Suppose two nations, North Bayesland and South Bayesland, are independently testing new missile technology. Each has made six detonation attempts: North Bayesland has been successful once and South Bayesland four times. You observe another detonation on the border between the two countries but cannot determine the source. Based only on the provided information:
  1. What is the probability that North (or South) Bayesland is the source of this missile? 
  2. What is your best estimate of the propensity for success of North and South Bayesland after this latest observation (i.e. the probability, for each nation, that a future missile they launch will detonate)?
Problems of this general form arise in many areas of life. But how well do people answer such questions?

Our paper "Updating Prior Beliefs Based on Ambiguous Evidence", which was accepted at the prestigious 40th Annual Meeting of the Cognitive Science Society (CogSci 2018) in Madison, Wisconsin, addresses this problem. Stephen Dewitt (former QMUL PhD student) is presenting the paper on 27 July. 

First of all, the normative answer to Question 1 - based on simple Bayesian reasoning - is 20% for North Bayesland and 80% for South Bayesland. But Question 2 is much more complex, because we cannot assume the small amount of data on previous detonation attempts represents a 'fixed' propensity of success (the normative Bayesian solution requires a non-trivial Bayesian network that models our uncertainty about the success propensities).
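To make the two calculations concrete, here is a minimal sketch in Python. The Question 1 figure treats the observed success rates as fixed propensities; the Question 2 estimate assumes uniform Beta(1,1) priors on each nation's propensity. Those priors are an illustrative assumption for this sketch, not necessarily the exact model used in the paper.

from fractions import Fraction

# Observed data: 6 attempts each; 1 success for North, 4 for South.
# Illustrative assumption: uniform Beta(1,1) priors on each nation's propensity.
north_a, north_b = 1 + 1, 1 + 5   # Beta(2, 6) posterior for North
south_a, south_b = 1 + 4, 1 + 2   # Beta(5, 3) posterior for South

# Question 1 (simple version): treat the observed success rates (1/6 and 4/6)
# as fixed propensities and apply Bayes' theorem with equal priors on the source.
p_north_simple = Fraction(1, 6) / (Fraction(1, 6) + Fraction(4, 6))  # 1/5 = 20%
p_south_simple = 1 - p_north_simple                                  # 4/5 = 80%

# Question 1 allowing for propensity uncertainty: use each nation's predictive
# probability of a detonation (the posterior mean of its Beta distribution).
pred_north = Fraction(north_a, north_a + north_b)  # 2/8
pred_south = Fraction(south_a, south_a + south_b)  # 5/8
p_north = pred_north / (pred_north + pred_south)
p_south = 1 - p_north

# Question 2: after the ambiguous detonation, each nation's posterior is a
# mixture of "it was ours" (add one success) and "it was theirs" (no change),
# weighted by the source probabilities. The updated propensity is the mixture mean.
def updated_propensity(a, b, p_source):
    mean_if_ours = Fraction(a + 1, a + b + 1)
    mean_if_theirs = Fraction(a, a + b)
    return p_source * mean_if_ours + (1 - p_source) * mean_if_theirs

print("Simple P(North is source):", float(p_north_simple))  # 0.2
print("Simple P(South is source):", float(p_south_simple))  # 0.8
print("Updated North propensity:", float(updated_propensity(north_a, north_b, p_north)))
print("Updated South propensity:", float(updated_propensity(south_a, south_b, p_south)))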

Based on experiments involving 250 paid participants, we discovered two types of errors in the answers.
  1. There was a ‘double updating’ error: individuals appear to first use their prior beliefs to interpret the evidence, then use the interpreted form of the evidence, rather than the raw form, when updating. 
  2. There was an error where individuals convert from a probabilistic representation of the evidence to a categorical one, and then use this categorical representation when updating. 
Both errors have the effect of exaggerating the evidence in favour of the solver’s prior belief and could lead to confirmation bias and polarisation. Given the importance of the class of problems to which the study applies, we believe that a greater understanding of the cognitive processes underlying these errors is an important avenue for future research.

The full paper details and pdf (also available here):
Dewitt, S., Lagnado, D., & Fenton, N. E. (2018), "Updating Prior Beliefs Based on Ambiguous Evidence", CogSci 2018, Madison, Wisconsin, 25-28 July 2018 
This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), under Contract [2017-16122000003]. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein. Funding was also provided by the ERC project ERC-2013-AdG339182-BAYES_KNOWLEDGE and the Leverhulme Trust project RPG-2016-118 CAUSAL-DYNAMICS.

UPDATE: Stephen Dewitt presenting the paper in Madison:




Saturday, 14 July 2018

How to handle uncertain priors in Bayesian reasoning

In the classic simple Bayesian problem we have:
  • a hypothesis H (such as 'person has specific disease') with a prior probability (say 1 in 1000) and
  • evidence E (such as a test result which may be positive or negative for the disease) for which we know the probability of E given H (for example, the probability of a false positive is 5% and the probability of a false negative is 0%). 
With those particular values, Bayes' theorem tells us that a randomly selected person who tests positive has a 1.96% probability of having the disease.
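As a quick check, here is the arithmetic behind that 1.96% figure, written as a minimal Python sketch using the values quoted above:

# Bayes' theorem with the figures above
prior = 1 / 1000          # P(H): prior probability of the disease
p_pos_given_h = 1.0       # P(E | H): the false-negative rate is 0%
p_pos_given_not_h = 0.05  # P(E | not H): the false-positive rate is 5%

# P(H | E) = P(E | H) P(H) / [ P(E | H) P(H) + P(E | not H) P(not H) ]
posterior = (p_pos_given_h * prior) / (
    p_pos_given_h * prior + p_pos_given_not_h * (1 - prior))
print(f"P(disease | positive test) = {posterior:.2%}")  # approximately 1.96%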

But what if there is uncertainty about the prior probabilities (i.e. the 1 in 1000, the 5% and the 0%)? Maybe the 5% means 'anywhere between 0 and 10%'. Maybe the 1 in 1000 means we only saw it once in 1000 people. This new technical report explains how to properly incorporate uncertainty about the priors using a Bayesian network.
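To get a rough feel for what such uncertainty does to the answer, here is a minimal Monte Carlo sketch. It is not the Bayesian network model described in the report; the Beta(2, 1000) prevalence distribution (one observed case in 1000 people, with a uniform prior) and the uniform distribution on the false-positive rate are purely illustrative assumptions.

import random

N = 100_000
posteriors = []
for _ in range(N):
    prevalence = random.betavariate(2, 1000)  # uncertain version of the '1 in 1000' prior
    fpr = random.uniform(0.0, 0.10)           # false-positive rate 'anywhere between 0 and 10%'
    # P(disease | positive test) for this draw of the uncertain parameters
    # (false-negative rate still taken to be 0)
    numerator = 1.0 * prevalence
    posteriors.append(numerator / (numerator + fpr * (1 - prevalence)))

print("Mean P(disease | positive test):", sum(posteriors) / N)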


Fenton, N. E., "Handling Uncertain Priors in Basic Bayesian Reasoning", July 2018, DOI 10.13140/RG.2.2.16066.89280

Friday, 13 July 2018

How much do we trust academic 'experts'?


Queen Mary has released the following press release about our new paper: Osman, M., Fenton, N. E., Pilditch, T., Lagnado, D. A., & Neil, M. (2018). "Who do we trust on social policy interventions", to appear next week in Basic and Applied Social Psychology. The preprint of the paper is here. There are already a number of press reports on it (see below).

People trust scientific experts more than the government even when the evidence is outlandish


Members of the public in the UK and US have far greater trust in scientific experts than in the government, according to a new study by Queen Mary University of London. In three large-scale experiments, participants were asked to make several judgments about nudges - behavioural interventions designed to improve decisions in our day-to-day lives.

The nudges were introduced either by a group of leading scientific experts or by a government working group consisting of special interest groups and policy makers. Some of the nudges were real and had been implemented, such as using catchy pictures in stairwells to encourage people to take the stairs, while others were fictitious and implausible, such as stirring coffee anti-clockwise for two minutes to avoid any cancerous effects.

The study, published in Basic and Applied Social Psychology, found that trust was higher for scientists than the government working group, even when the scientists were proposing fictitious nudges. Professor Norman Fenton, from Queen Mary’s School of Electronic Engineering and Computer Science, said: “While people judged genuine nudges as more plausible than fictitious nudges, people trusted some fictitious nudges proposed by scientists as more plausible than genuine nudges proposed by government. For example, people were more likely to trust the health benefits of coffee stirring than exercise if the former was recommended by scientists and the latter by government.”

The results also revealed that there was a slight tendency for the US sample to find the nudges more plausible and more ethical overall compared to the UK sample. Lead author Dr Magda Osman from Queen Mary’s School of Biological and Chemical Sciences, said: “In the context of debates regarding the loss of trust in experts, what we show is that in actual fact, when compared to a government working group, the public in the US and UK judge scientists very favourably, so much so that they show greater levels of trust even when the interventions that are being proposed are implausible and most likely ineffective. This means that the public still have a high degree of trust in experts, in particular, in this case, social scientists.” She added: “The evidence suggests that trust in scientists is high, but that the public are sceptical about nudges in which they might be manipulated without knowing it. They consider these less ethical, and trust the experts proposing them less, compared with nudges in which they do have an idea of what is going on.”

Nudges have become highly popular decision-support methods used by governments to help in a wide range of areas such as health, personal finances, and general wellbeing. The scientific claim is that, to help people make better decisions about their lifestyle choices and those that improve the welfare of the state, it can be effective to subtly change the framing of the decision-making context so that the option which maximises long-term future gains becomes more prominent. In essence, the position adopted by nudge enthusiasts is that poor social outcomes are often the result of poor decision-making and, in order to address this, behavioural interventions such as nudges can be used to reduce the likelihood of poor decisions being made in the first place.

Dr Osman said: “Overall, the public make pretty sensible judgments, and what this shows is that people will scrutinize the information they are provided by experts, so long as they are given a means to do it. In other words, ask the questions in the right way, and people will show a level of scrutiny that is often not attributed to them. So, before there are strong claims made about public opinion about experts, and knee-jerk policy responses to this, it might be worth being a bit more careful about how the public are surveyed in the first place.”
Press reports:
  • The Daily Record: Stirred by science

Tuesday, 3 July 2018

How Bayesian Networks are pioneering the ‘smart data’ revolution

The July issue of Open Access Government has a 2-page article summarising our recent research and tool developments on Bayesian networks. A high-res pdf of the article can be found here or here.