
Evaluating credal classifiers by utility-discounted predictive accuracy

Type of publication Peer-reviewed
Publication form Original article (peer-reviewed)
Authors Zaffalon Marco, Corani Giorgio, Mauá Denis
Project Multi Model Inference for dealing with uncertainty in environmental models


Journal International Journal of Approximate Reasoning
Volume (Issue) 53(8)
Page(s) 1282 - 1301
DOI 10.1016/j.ijar.2012.06.022


Predictions made by imprecise-probability models are often indeterminate (that is, set-valued). Measuring the quality of an indeterminate prediction by a single number is important in order to fairly compare different models, but a principled approach to this problem is currently missing. In this paper we derive, from a set of assumptions, a metric to evaluate the predictions of credal classifiers. These are supervised learning models that issue set-valued predictions. The metric turns out to be made of an objective component, and another that is related to the decision-maker's degree of risk aversion to the variability of predictions. We discuss when the measure can be rendered independent of such a degree, and provide insights as to how the comparison of classifiers based on the new measure changes with the number of predictions to be made. Finally, we perform extensive empirical tests of credal, as well as precise, classifiers by using the new metric. This shows the practical usefulness of the metric, while yielding a first insightful and extensive comparison of credal classifiers.
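The abstract's notion of a metric with an objective component plus a risk-aversion component can be sketched as follows. This is a minimal illustration, assuming the common convention that a set-valued prediction Z scores a "discounted accuracy" of 1/|Z| when it contains the true class (0 otherwise), which is then passed through a concave utility function encoding risk aversion; the specific quadratic coefficients below (often labeled u65 and u80 in this literature) are assumptions of this sketch, not details taken from the page itself.

```python
def discounted_accuracy(prediction_set, true_class):
    """Objective component: 1/|Z| if the true class is in the set Z, else 0."""
    if true_class in prediction_set:
        return 1.0 / len(prediction_set)
    return 0.0


def u65(x):
    # Assumed quadratic utility: concave, with u(0)=0, u(1)=1,
    # and u(0.5)=0.65 (mild risk aversion toward indeterminate output).
    return -0.6 * x**2 + 1.6 * x


def u80(x):
    # Assumed alternative utility: u(0.5)=0.80 (stronger reward for
    # indeterminate-but-correct predictions).
    return -1.2 * x**2 + 2.2 * x


def utility_score(predictions, truths, utility=u65):
    """Average utility-discounted accuracy over a test set."""
    return sum(
        utility(discounted_accuracy(Z, y)) for Z, y in zip(predictions, truths)
    ) / len(truths)
```

For example, a classifier that answers {"a", "b"} and then {"a"} against true labels "a", "a" would score (u65(0.5) + u65(1.0)) / 2 = 0.825 under this sketch, whereas plain discounted accuracy would give 0.75; the gap is exactly the risk-aversion component described in the abstract.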