P. Dawid, On Individual Risk

Originally published in The Reasoner, Volume 10, Number 6, June 2016

[Image: Mousetraps usually kill. But is this one going to kill the cautious rodent?]

In Mathematics and Plausible Reasoning: Patterns of Plausible Inference, G. Polya introduces random mass phenomena along the following lines. Consider raindrops falling on an ideally squared pavement, and focus on just two otherwise identical stones, called Left and Right. It starts raining (conveniently, one drop at a time) and we record a sequence of Lefts and Rights according to which stone each raindrop hits. In this situation we are (reasonably) unable to predict where the next raindrop will fall, but we can easily predict that in the long run both stones will be wet. This, Polya suggests, is typical of random mass phenomena: “unpredictable in certain details, predictable in certain numerical proportions to the whole”.
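Polya's example is easy to reproduce in a few lines of code. The sketch below (Python; assuming, purely for illustration, that each drop hits either stone with equal probability) shows both halves of his observation: each individual outcome is a fresh, unpredictable draw, yet the long-run proportion of Lefts is as predictable as anything gets.

```python
import random

def raindrops(n_drops: int, p_left: float = 0.5) -> list[str]:
    """Simulate n_drops raindrops, each independently hitting Left or Right."""
    return ["Left" if random.random() < p_left else "Right" for _ in range(n_drops)]

sequence = raindrops(10_000)

# Unpredictable in detail: each outcome is a fresh 50/50 draw.
print(sequence[:10])

# Predictable in the aggregate: the proportion of Lefts settles near 0.5.
print(sequence.count("Left") / len(sequence))
```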

The fact that we can often make reliable predictions about some aggregate, but fail to draw from them obvious conclusions about its individuals, has profound implications not only for the foundations of probability, but also for its practical applications. In medicine, for instance, this is the norm rather than the exception. In the absence of further information, what does the fact that a certain side effect of, say, statins is known to affect 1 in 100 patients tell you about your own chance of suffering from it? Problems like this raise the more general question: to what extent can forecasts about some aggregate reliably inform us about its individuals? This question, and its philosophical underpinnings, are tackled by P. Dawid (2016: On Individual Risk, Synthese, First Online, Open Access).
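To see why the aggregate figure underdetermines the individual one, here is a toy sketch (all numbers invented for illustration, not taken from Dawid's paper): two hypothetical populations with the very same 1-in-100 aggregate rate, in which that figure means quite different things for a given individual.

```python
# Two hypothetical populations, each with an aggregate side-effect rate of ~1%.
# All numbers are invented for illustration.

# Population A: everyone carries the same individual risk.
uniform_risks = [0.01] * 100

# Population B: a small susceptible subgroup carries most of the risk.
mixed_risks = [0.10] * 9 + [0.001] * 91   # average: (0.9 + 0.091) / 100 ≈ 0.01

for name, risks in [("uniform", uniform_risks), ("mixed", mixed_risks)]:
    print(name, sum(risks) / len(risks))   # both print (about) 0.01

# The population statistic is the same, but what it says about *you* differs
# wildly: 1% in population A; either 10% or 0.1% in population B, depending
# on which subgroup you happen to belong to.
```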

Here’s a motivating example from the paper:

Continue reading →

Not Quite True: The logic, philosophy and mathematics of vagueness


13 May 2016, 2:00 pm, Aula Enzo Paci, Department of Philosophy, University of Milan

The purpose of this workshop is to provide a multidisciplinary perspective on the fascinating yet elusive notion of vagueness. Logical, philosophical and mathematical concepts and techniques will be brought to bear on the topic.

Attendance is free, all welcome!

Speakers


Titles, Abstracts and practical information

Hilbert’s interpretations of probability

Originally published in The Reasoner, Volume 10, Number 5, May 2016


The concept of probability is interesting, among other reasons, for the variety of ways in which we may be talking about distinct things and yet, in the end, still be talking about probability. From the philosophy-of-mathematics point of view, this is vividly illustrated by the fact that, except possibly for one's views on 'finite vs. countable additivity', a single axiomatisation serves a great number of largely incompatible interpretations of the concept being axiomatised. Chapters 1-3 of J. Williamson (2010: In Defence of Objective Bayesianism, Oxford University Press) offer a wide-angle picture, which I recommend to those who are unfamiliar with the landscape of probability interpretations.
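For reference, the shared axiomatisation is Kolmogorov's: a probability measure P on a σ-algebra F of subsets of a sample space Ω satisfying

```latex
\begin{align*}
&\text{(K1)} && P(A) \ge 0 \quad \text{for all } A \in \mathcal{F},\\
&\text{(K2)} && P(\Omega) = 1,\\
&\text{(K3)} && P\Bigl(\bigcup_{i=1}^{\infty} A_i\Bigr) = \sum_{i=1}^{\infty} P(A_i)
  \quad \text{for pairwise disjoint } A_1, A_2, \ldots \in \mathcal{F}.
\end{align*}
```

The 'finite vs. countable additivity' dispute mentioned above concerns exactly (K3): whether additivity should be required of countable families of disjoint events, as here, or only of finite ones.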

Viewed at a relatively coarse grain, the axiomatisation of probability followed a path similar to that of other mathematical concepts, until at the turn of the twentieth century the key motivation became that of securing its applications against the threat of paradoxical consequences. Needless to say, David Hilbert played an important role in this. The explicit question appears as number six in the list of problems Hilbert posed to the audience of the Second International Congress of Mathematicians, in Paris on 8 August 1900:

6. Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part; in the first rank are the theory of probabilities and mechanics. […] As to the axioms of the theory of probabilities, it seems to me desirable that their logical investigation should be accompanied by a rigorous and satisfactory development of the method of mean values in mathematical physics, and in particular in the kinetic theory of gases.

Continue reading →

Probabilistic mistakes kill (possibly many innocents)

Originally published in The Reasoner, Volume 10, Number 4, March 2016


The Oscar-winning documentary Citizenfour brought the concept of metadata to the attention of general audiences. As one scene of the film explains, we leave, mostly unwillingly, many digital traces of our daily activities. Most Londoners, for instance, use an Oyster card to travel across the city. When they top up their Oyster online, or opt in for the convenient auto top-up, they effectively allow whoever has access to the data to track their routine. (And the recent introduction of contactless payment on the London transport system has clearly made this even simpler.) This can then be linked to what people buy, what they read on the internet, what they post on social networks and, indeed, to what other people do. That's metadata.

It goes without saying that metadata is syntax with no semantics. There are many reasons why people do what they do, and many people travel independently on the same journey. Quite obviously, then, the dots representing their digital traces can be joined in a number of distinct ways, and may well be found to draw specific but wrong pictures. That's why the Orwellian idea that someone possesses a wealth of metadata about us is indeed frightening. But knowing that governments may kill based on it is rather hard to accept.

The opening of this recent piece by C. Grothoff and J.M. Porup on Ars Technica UK is chilling:

In 2014, the former director of both the CIA and NSA proclaimed that “we kill people based on metadata.” Now, a new examination of previously published Snowden documents suggests that many of those people may have been innocent.
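The statistical mechanism behind such claims is the familiar base-rate problem: when the trait being hunted is rare, even a highly accurate classifier flags mostly innocents. A toy calculation (the numbers are purely illustrative, not those of any actual programme) makes the point:

```python
# Toy base-rate calculation; all numbers are purely illustrative.
population = 1_000_000        # people whose metadata is analysed
true_targets = 100            # actual members of the rare class
sensitivity = 0.99            # P(flagged | target)
false_positive_rate = 0.001   # P(flagged | innocent)

flagged_targets = true_targets * sensitivity                            # 99
flagged_innocents = (population - true_targets) * false_positive_rate  # ~1000

precision = flagged_targets / (flagged_targets + flagged_innocents)
print(f"Fraction of flagged people who are genuine targets: {precision:.1%}")
# ≈ 9%: roughly ten innocents flagged for every genuine target.
```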

Continue reading →