The Processing Cost of Conditional Probabilities

Last updated: 21 February 2020

Essay on

  • Effects of context on the rate of conjunctive responses in the probabilistic truth table task – Jonathan Jubin & Pierre Barrouillet (2019)

Based on

  • Cognitive Psychology – Ken Gilhooly, Fiona Lyddy and Frank Pollick (2014)
  • Cognitive Science Lectures [H02B2A] – Walter Schaeken (2019)

Faculty of Computer Science

05 January 2020


Figure 1 – “The Monty Hall Problem”, in “La Linea” (Osvaldo Cavandoli) style.
Drawing by Marius L.

Probabilistic reasoning involves not only a set of knowledge and skill requirements, but also a certain amount of logic and objectivity. It is part of our education to learn how to reason rationally and how to deal with uncertainty in our everyday lives, but is this enough to make us all equal when facing probabilistic tasks? In practice, probabilistic reasoning is a highly personal and subjective activity that relies on multiple factors. Studying those factors helps us understand why differences are observed between individuals performing a conditional reasoning task that is supposed to lead to a single possible and logical outcome. In Effects of context on the rate of conjunctive responses in the probabilistic truth table task, J. Jubin and P. Barrouillet design an experiment where students have to compute the probability of a conditional “If A then C” claim being true of [a throw of a dice / a card drawn at random from a deck], for example: “If it is a triangle, then it is blue”. Twenty different “If… then” claims are presented after 4 training trials without feedback. All the groups are shown the four combinations of two different shapes (called the categories[1], or “A”, or “p” in the claim) and two different colors (called the conditional[2], or “C”, or “q” in the claim).

Previous studies have suggested that several adult responders facing a conditional claim produce a conjunctive response P(A&C) instead of the conditional P(C|A). This proportion ranges from 12% (Fugard et al., 2011) to 43% (Evans et al., 2003). Several hypotheses have tried to explain such answers. Participants could use an incomplete “procedure known as the Ramsey test” (Evans and Over, 2004), “cutting short the reasoning process”, or the response could “result from a matching effect” (Pfeifer, 2013). These hypotheses respectively fail to explain the assessment of the “probability of false conditionals” or the difference in answers between “truth table tasks” and “probabilistic tasks”. In 2013, Pfeifer also “suggested that the conjunctive response could result from a linguistic ambiguity”: some participants would interpret conditionals as conjunctions, focusing on “cases [where] the conditional is strictly speaking true, [and] exclusively focus[ing] on A & C cases”. However, such an interpretation has “rarely” been reported in previous studies according to the authors. For Jubin and Barrouillet, “reasoning and interpretation of the conditional remains uncertain” for participants giving a conjunctive response.

The intuition that governs this paper is that the “responses [do] not reflect participants’ interpretation of the conditional, but the way they understand the task of assessing the probability itself”. First, the authors manipulated the “framing” of the problem with a dice versus cards context, where participants would perform their computations either with a fixed number of die faces or with a varying number of cards. The cards “framing” would invite the participants to step away from a standardized reasoning mechanism (computing probabilities with “6 as denominator”) and produce a genuine answer. For the dice, P(A&C) must be computed out of 6 sides, whereas P(C|A) must be computed over the number of A occurrences (Card(A)) among the die faces. Second, the authors also manipulated “the way the different truth table cases are displayed”, choosing whether or not to group the different A cases. Playing on “framing” (dice vs cards) and “sets” (grouped/not grouped) led the authors to distribute the 96 participating students in a 2 × 2 factorial design, such that four groups of 24 students were each exposed to one of the experimental conditions described in this paragraph.
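To make the two candidate answers concrete, here is a minimal sketch of both computations on a hypothetical six-sided die (the shapes, colors and counts below are illustrative assumptions, not the paper’s actual stimuli):

```python
from fractions import Fraction

# Hypothetical die: three triangles (two blue, one green) and three circles.
# These counts are assumptions for illustration, not the paper's stimuli.
faces = [
    ("triangle", "blue"), ("triangle", "blue"), ("triangle", "green"),
    ("circle", "blue"), ("circle", "green"), ("circle", "green"),
]

# Claim: "If it is a triangle (A), then it is blue (C)".
a_cases = [f for f in faces if f[0] == "triangle"]    # the A cases
a_and_c = [f for f in a_cases if f[1] == "blue"]      # the A & C cases

conjunctive = Fraction(len(a_and_c), len(faces))      # P(A & C): denominator 6
conditional = Fraction(len(a_and_c), len(a_cases))    # P(C | A): denominator Card(A)

print(f"P(A & C) = {conjunctive}")   # 1/3
print(f"P(C | A) = {conditional}")   # 2/3
```

The only difference between the two responses is the denominator: the whole set of faces for the conjunctive answer, the Card(A) subset for the conditional one.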

As intuited, “the distribution of the different responses strongly varied as a function of the experimental condition”. An analysis of variance reveals that, in general, the dice framing raised more conjunctive responses (about 9 out of 20 on average) than the cards framing (about 4 out of 20). To extend their analysis, the authors examined the number of consistent responders, who give a conjunctive or conditional response “in more than 2/3 of the cases”, and the number of interpretation shifts. The latter could not be interpreted from the 20-question set alone; the 4 preceding training trials had to be considered as well, since shifts “often happened” at early stages. In the cases where the shift goes from conjunctive to conditional, this “early shift” would correspond to the training time respondents need to correctly understand what is expected from the probabilistic task. Since the shift from conjunctive to conditional is particularly frequent in this experiment (between 33% and 54% of participants shifted in the first trials, all conditions considered), this hypothesis is discussed as a viable explanation in the results, described by the authors as a “processing step […] introduc[ing] an undesirable noise in what the task is intended to measure”. Figure 3 shows that 3 responders shifted their interpretation in the late stages of the experiment (steps 20, 21 and 23). These late shifts might not be relevant in the scope of this experiment if the respondents’ consistency cannot be observed after the detected shift event. How can consistency be confirmed over a small remaining number of trials? Neither the heuristics of the shift-detection algorithm nor the interpretation of late shifts is detailed in the paper.
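Since the paper does not describe its detection procedure, here is a minimal sketch of one plausible heuristic, assuming each response has already been labeled as conjunctive or conditional (the labels, window size and stability criterion are all assumptions, not the authors’ method):

```python
def detect_shift(labels, window=3):
    """Return the index of the first trial where the response type changes
    and then stays stable for `window` trials, or None if no shift is found.

    `labels` is a per-trial list such as ["conj", "conj", "cond", ...].
    Hypothetical heuristic: the paper does not specify its own algorithm.
    """
    for i in range(1, len(labels) - window + 1):
        changed = labels[i] != labels[i - 1]
        stable = all(label == labels[i] for label in labels[i:i + window])
        if changed and stable:
            return i
    return None

# A respondent shifting from conjunctive to conditional at trial 3:
print(detect_shift(["conj", "conj", "conj", "cond", "cond", "cond", "cond"]))  # 3
```

Note that such a heuristic mechanically fails on late shifts: a change in the last one or two trials leaves too few observations to satisfy any stability criterion, which is precisely the consistency problem raised above.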

The number of shifts from the conjunctive to the conditional and the overall number of conditional responses can be explained by mental models theory (MMT). When facing the probabilistic task, responders construct mental models of the probabilities directly linked to the statement, adding A and C to “their stock of knowledge”, but the conjunctive responders do not represent, at first glance or for the whole experiment, what is false or irrelevant, such as the ⌐A cases. Previous authors (Barrouillet and Gauffroy) suggested “that the production of the initial A & C models depends on a Type 1 heuristic process” accountable for responses “coming spontaneously and automatically to mind”. The authors state that conditional responders, or conjunctive responders who shifted to a conditional response during the experiment, “flesh out this initial representation”. The intervention of a “Type 2 resource-dependent system” builds a new model where the ⌐A & C possibilities are flushed out. In Figure 2, the conditional response differs from the conjunctive response only by the “flushing out” of the zone delimited by the blue border. In the experiment, the ability to flush out the initial model can be hindered by the problem structure, such as the dice context. Approaching the problem in the dice context triggers the well-learned habit of computing probabilities out of 6 and can introduce prior beliefs that are extremely difficult to suppress. As for the grouped display, a visual separation between the A and ⌐A cases can facilitate the intervention of the analytic process. The authors therefore conclude that the difference between the responders depends on “two different levels of elaboration of a same interpretation”, with responders being more or less able to go beyond the Type 1 heuristic process.

Figure 2: Representation of the difference between conditional responses (left) and conjunctive responses (right). Switching interpretation is represented as deleting the blue zone, which the authors describe as “flushing out”.

The authors shaped the problem “in such a way that two different interpretations of the conditional never resulted in the same response”, in order to unambiguously identify which type of response is given. For example, evaluating the probability of the statement “If it is a triangle, then it is blue” could, according to the authors, also lead to “biconditional” and “material implication” interpretations. If the latter were adopted by respondents, their answer would be computed as P(A&C) + P(⌐A&⌐C) + P(⌐A&C). Given the relative complexity of this computation compared to the simplicity of the problem, it is unlikely that respondents think like: “I should add the probability of having a blue triangle, plus the probability of having a figure that is neither a triangle nor blue, plus the probability of having a figure that is not a triangle but is blue.” Overall, fewer than 0.3% of responses were material implication responses. By comparison, the rate of unclassifiable responses is 6%. Can every computed result be interpreted and categorized correctly? These results could suggest that some respondents produce what merely seems to be the most appropriate response given the problem: “I have x cards, y triangles, z blue triangles, so I must compute something with x, y and z. I remember that probabilities are about dividing by the total number of cases, x. It seems z is appropriate given the statement keywords. The answer must be z/x.” By “intuition”, this leads to what the authors describe as the conjunctive answer. The number of unclassified responses could be interpreted through the so-called ‘age-of-the-captain’ problem [3]: in both cases, respondents are placed in a context where they are expected to produce an answer, even without fully understanding the problem or thinking critically about the answer they give. This argument could be reinforced by the fact that no feedback is given during the trials in the paper’s experiment.
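A small sketch makes it easy to check that the interpretations indeed diverge on a suitable set of cases (the die below is again a hypothetical composition, chosen so that all readings yield different values):

```python
from fractions import Fraction

# Hypothetical die chosen so that each interpretation gives a distinct value.
faces = [
    ("triangle", "blue"), ("triangle", "blue"), ("triangle", "green"),
    ("circle", "blue"), ("circle", "blue"), ("circle", "green"),
]

def p(event):
    """Probability of `event` (a predicate over faces) on a fair die."""
    return Fraction(sum(event(f) for f in faces), len(faces))

A = lambda f: f[0] == "triangle"   # antecedent: "it is a triangle"
C = lambda f: f[1] == "blue"       # consequent: "it is blue"

conjunctive = p(lambda f: A(f) and C(f))          # P(A & C)
conditional = p(lambda f: A(f) and C(f)) / p(A)   # P(C | A)
material    = p(lambda f: not A(f) or C(f))       # P(A&C) + P(⌐A&⌐C) + P(⌐A&C)

print(conjunctive, conditional, material)         # 1/3 2/3 5/6
```

On this composition the three readings give 1/3, 2/3 and 5/6 respectively, so a respondent’s computed answer reveals their interpretation unambiguously.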


This experiment plays with our ability to manipulate inference rules. The “If… then” proposition pattern, as a conditional statement, implies a set of specific rules to reach an appropriate conclusion; modus ponens and modus tollens are among the valid rules stating what can be inferred from the statement. Let us now consider that respondents must assess the probability of the following statement being true in the cards or dice context: “If and only if there is a triangle, then it is blue.” The statement is now a biconditional, and the correct answer differs from the conditional statement (it must be computed as P(A&C) + P(⌐A&⌐C)). However, it is interesting to note that this formulation can be misinterpreted as the simple conditional. The “If and only if there is a triangle…” wording may invite respondents to exclude cases, flushing out what seems false and irrelevant, like the ⌐A cases (no triangles), as discussed in the paragraph about mental models theory and illustrated by Figure 2. It is possible to speculate that most respondents will forget to consider that the “green circles” cases (⌐A&⌐C) also make the biconditional true under the given statement.
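Extending the previous sketch with the biconditional reading shows the expected computation (same hypothetical die and predicates as above):

```python
# Biconditional "A if and only if C": true when A and C have the same truth
# value, i.e. P(A & C) + P(⌐A & ⌐C). Reuses p, A, C and faces from above.
biconditional = p(lambda f: A(f) == C(f))
print(biconditional)   # 1/2: the two blue triangles plus the one green circle, out of 6
```

On this die the biconditional answer (1/2) differs from the conditional one (2/3), but a respondent who flushes out the ⌐A&⌐C cases along with everything else that is “not a triangle” would wrongly land back on 2/3.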

On a thread (Reasoning on dice problem: your opinion matters! [4]) posted on the course’s discussion board in the Forum Reasoning and Decision Making, a test like the one in the paper was presented to students following the Cognitive Science course, who are also expected to participate in the Forum in partial fulfilment of the course requirements. The test includes a figure similar to Figure 1.a in the paper, and respondents are asked to estimate “what is the probability of the claim being true of a throw of a dice” for two claims. Claim 1: “If there is a triangle then it is green”. Claim 2: “If and only if there is a triangle, then it is green”. Conclusions can hardly be drawn from this experiment, since only 3 students (those who did not choose to write their essay on the same paper) participated. Nonetheless, their answers give hints about the intuitions discussed in the two previous paragraphs of this essay. All the respondents stated that they “[were] not very confident with one or both of [their] answers”. Only 2 of the respondents answered Claim 1 correctly (2 out of 3 chances). Only 1 respondent, who also answered Claim 1 correctly, answered Claim 2 correctly (1 out of 2 chances). This micro-experiment suggests that it is not trivial to label conjunctive or conditional thinkers based on doubtful answers. The biconditional Claim 2 not only failed to elicit a correct answer from the respondent who had answered Claim 1 incorrectly; it also proved a great source of confusion among respondents, who gave 3 different answers (2 out of 5 chances, 0 chances, 1 out of 2 chances).

Reasoning under conditional “If, then” claims is intimately related to working with Bayes’ theorem: “the probability of an event, based on prior knowledge of conditions that might be related to the event”. Other research (Gras and Totohasina[5], 1995) suggests that interpretations other than the conjunctive/conditional ones can be made of a conditional statement. In their study, they describe a chronological conception, where event A should always precede event C in P(C|A); a causal conception, where A is the cause and C the consequence; and finally a cardinal conception, which is the ratio of Card(C∩A) to Card(A). The authors conclude that “the origin of the chronological and causal misconceptions is cognitive, while the cardinal conception is induced by teaching.” [6]. Depending on their conception, respondents can easily produce an incorrect answer. Linking this research to the paper’s Figure 1.a experiment, the correct answer can be computed with the cardinal conception, with Card(C∩A) = 2 and Card(A) = 3. Note that this interpretation yields a correct answer only for finite equiprobable sample spaces, which is the case in the experiment.
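A two-line check shows why the cardinal conception agrees with the ratio definition here (using the Figure 1.a counts quoted above; equiprobability of the 6 faces is the key assumption):

```python
from fractions import Fraction

# Figure 1.a counts quoted in the text: 6 faces, Card(A) = 3, Card(C∩A) = 2.
total, card_a, card_ac = 6, 3, 2

# Ratio definition of conditional probability: P(C|A) = P(A & C) / P(A)...
p_ratio = Fraction(card_ac, total) / Fraction(card_a, total)
# ...reduces to the cardinal conception Card(C∩A)/Card(A) for equiprobable faces.
p_cardinal = Fraction(card_ac, card_a)

assert p_ratio == p_cardinal == Fraction(2, 3)
```

With unequal face probabilities the cardinal shortcut would break, which is exactly the caveat about finite equiprobable sample spaces.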


More elaborate problems, like the Monty Hall (MH) problem [7], demonstrate that several factors explain our difficulty in manipulating Bayesian or conditional statements. In this problem, MH, the organizer of a game, has randomly placed a prize behind one of three closed doors. A player must choose the door that could hide the prize. MH, who knows where the prize is hidden, then opens an empty door that was not chosen by the player. The player has to make a second and final choice: keep the door initially chosen or switch to the other closed door. Most people playing the game tend to think that each of the two remaining closed doors hides the prize with a 50% chance, and prefer sticking to their initial choice. In fact, there is twice as much chance of winning by switching (2/3) as by keeping (1/3) [8]. Calculating the conditional probability P(prize behind my door | Monty opened another empty door) is not a trivial task: “Our systematic review revealed various causes (updating, not taking into account conditional information, etc) and misconceptions (e.g., the equiprobability bias), which are known to occur in some other (posterior) probability problems” [8].
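For readers who distrust the 2/3 figure, a quick Monte Carlo sketch (not from the cited paper; the trial count and door encoding are arbitrary choices) reproduces the odds:

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the probability of winning under a fixed keep/switch strategy."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        # Monty opens an empty door the player did not pick.
        opened = random.choice([d for d in doors if d != pick and d != prize])
        if switch:
            # Switch to the only remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print(f"keep:   {monty_hall(switch=False):.3f}")   # ≈ 0.333
print(f"switch: {monty_hall(switch=True):.3f}")    # ≈ 0.667
```

The simulation mirrors the conditional structure of the problem: Monty’s choice depends on both the player’s pick and the prize location, which is exactly the information most players fail to update on.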

In the MH framing, repeating the experiment a high number of times does not necessarily help us go beyond the Type 1 heuristic process and shift our interpretation, as Jubin and Barrouillet suggest it does for a simple probabilistic truth table task. Other studies about the MH problem suggest, for example, that pigeons outperform humans [9] in reaching the optimal strategy after a high number of trials. This problem, studied in the fields of mathematics, psychology, sociology and cognitive science, emphasizes the complexity of decision making when facing a conditional setting. In the MH problem, the probability-computing cost can be expensive, keeping us from winning a rewarding prize.

To conclude, reasoning in the context of a probabilistic truth table task has a high processing cost and can lead to drastically different answers among respondents. The path we must build to compute a correct probability requires going from the automatic, spontaneous model to an analytic, elaborated process. This can be triggered or hindered by experience (the dice context) or by the way the problem is framed. There exists a certain ability to flush out the initial model and go beyond the Type 1 heuristic process, as the shifting interpretation cases have demonstrated. Manipulating conditional statements and playing with Bayes’ theorem is a hard and counterintuitive task directed by our understanding of the problem. If a “modified mental models theory” can account for “the ease with which some individuals can adopt one type of response or the other”, more complex probabilistic tasks like the MH problem demonstrate the inequality and irrationality of our answers. Appropriate teaching and the ability to step away from cognitive biases are other useful tools for navigating the wide jungle of reasoning and decision making when facing conditional probabilities.


Creative Commons License
This work is licensed under the Creative Commons Attribution – NonCommercial 4.0 International License.

The Processing Cost of Conditional Probabilities – Thomas Breniere –
https://breniere.eu/en/the-processing-cost-of-conditionnal-probabilities/


[1] (p.7) “the sides […] were grouped by categories” and (p.11) “the identification of ⌐A categories”

[2] In opposition to the conjunctive case of having A AND C.

[3] “A captain owns 26 sheep and 10 goats. How old is the captain?”.

To this problem, many elementary school children offered the answer 36. A simple but irrelevant calculation (26 + 10) based on the problem data produces a number that is plausible as the age of a ship captain. The focus is on producing an answer rather than on a critical appraisal (level of confidence) of that answer.

Verschaffel, Lieven, Brian Greer, and Erik De Corte. Making sense of word problems. Lisse: Swets & Zeitlinger, 2000.

[4] Thread: Reasoning on dice problem: your opinion matters!, Thomas Breniere, 18 December 2019. (link)

[5] Gras, R., & Totohasina, A. (1995). Chronologie et causalité, conceptions sources d’obstacles épistémologiques à la notion de probabilité conditionnelle. RDM, 15, 1.

[6] Jones, G. A. (Ed.). (2006). Exploring probability in school: Challenges for teaching and learning (Vol. 40). Springer Science & Business Media.

[7] Monty Hall Problem. Brilliant.org. Retrieved 11:09, January 5, 2020 (link)

[8] Saenen, Lore, et al. “Why Humans Fail in Solving the Monty Hall Dilemma: A Systematic Review.” Psychologica Belgica, 58(1), 128–158, 1 June 2018. doi:10.5334/pb.274

[9] Herbranson, W. T. (2012). Pigeons, Humans, and the Monty Hall Dilemma. Current Directions in Psychological Science, 21(5), 297–301.