This blog post is written by CDT Students Mauro Comi and Matt Clifford.
For the Bristol summer AI day we were lucky enough to hear from an outstanding group of internationally renowned speakers. The talks centred on the evaluation of machine learning models, and over the day we touched upon a variety of interesting concepts such as multilabel calibration, visual perception, meta-learning, uncertainty awareness and the evaluation of calibration. It was an enjoyable and inspiring day, and we give huge thanks to all of the organisers and speakers.
Capability Oriented Evaluation of Models
The day’s talks opened with Prof. José Hernández-Orallo, who presented his work on evaluating the capabilities of models rather than their aggregated performance. Capability and performance are two words in machine learning evaluation that are often mistakenly used interchangeably.
Capabilities give a more concrete evaluation of a model: they let us predict the model’s success at the level of individual instances. This is crucial and reassuring for safety-critical applications, where knowing the limits within which a model can be used is essential.
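To make the idea of instance-level success prediction concrete, here is a minimal sketch (our own illustration, not the speaker’s setup): a base classifier is trained as usual, and a second ‘assessor’ model is then trained to predict whether the base model gets each individual instance right. The dataset, models and split are arbitrary placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data and models, purely for illustration
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_base, X_assess, y_base, y_assess = train_test_split(X, y, test_size=0.5, random_state=0)

# Base classifier trained as usual
base = RandomForestClassifier(random_state=0).fit(X_base, y_base)

# Instance-level success: did the base model get each held-out instance right?
correct = (base.predict(X_assess) == y_assess).astype(int)

# "Assessor" predicts the probability that the base model succeeds on an instance
# (in practice the assessor would be evaluated on a further held-out split)
assessor = LogisticRegression(max_iter=1000).fit(X_assess, correct)
success_prob = assessor.predict_proba(X_assess)[:, 1]
```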
Evaluation of classifier calibration
Prof. Meelis Kull gave a pragmatic demonstration of how calibration error can be estimated. After giving us an overview of the possible biases that arise when estimating the calibration error from a given test set, he explained a new ‘fit-on-the-test’ paradigm. This approach reduces some of these biases, such as those due to arbitrary binning choices over the probability space.
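For reference, the sketch below shows the standard binned estimate of expected calibration error (ECE); its value depends on an arbitrary choice of bins over [0, 1], which is one of the biases that fit-on-the-test style approaches aim to reduce. The synthetic predictions are placeholders.

```python
import numpy as np

def binned_ece(confidences, correct, n_bins=10):
    """Standard binned expected calibration error estimate.
    The result depends on the (arbitrary) binning of [0, 1]."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap  # weight each bin by its share of the data
    return ece

# Toy usage with a synthetic, slightly overconfident model
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 1000)
correct = (rng.uniform(size=1000) < conf * 0.9).astype(float)
print(binned_ece(conf, correct, n_bins=10))
```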
A Turing Test for Artificial Nets devoted to vision
The work Jesús presented focused on understanding the visual system. Deep neural networks are the current state of the art in machine vision tasks, taking some degree of inspiration from the human visual system. However, using deep neural networks to understand the visual system is not easy.
Their work proposes an easy-to-use test to determine whether human-like behaviour emerges in a network. This, in principle, is a desirable property of a network designed to perform tasks similar to those the human brain carries out, such as image segmentation and object classification.
The experiments are a Turing-style set of tests that the human visual system is known to pass, and a notebook-style test bed is provided on GitHub. In theory, if a network operating on the visual domain passes the tests, it is regarded as having a competent understanding of the natural visual world.
The evaluation procedures were later explained by Jesús’ PhD students Jorge and Pablo. They take two networks, PerceptNet and a UNet variant, and measure how similar their behaviour is to the human visual system. They test for known features of human visual processing of natural images, such as Gabor-filter edge responses and luminosity-sensitive scaling. The encoded representations of the images from PerceptNet and the UNet are then compared with what is found in the human visual system to highlight any discrepancies.
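As a loose illustration of the kind of visual features involved (our own sketch, not code from the test bed), the snippet below builds a simple Gabor kernel and filters an image with it; the tests check whether a network’s responses to natural images reflect this sort of oriented-edge selectivity. The kernel parameters and the random image are placeholders.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    """Odd-symmetric Gabor kernel: a Gaussian envelope times a sinusoidal
    carrier, selective for edges at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_r = x * np.cos(theta) + y * np.sin(theta)
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + y_r**2) / (2 * sigma**2))
    return envelope * np.sin(2 * np.pi * x_r / wavelength)

# Placeholder image; a real test would use natural images
image = np.random.default_rng(0).normal(size=(128, 128))
edge_response = convolve2d(image, gabor_kernel(theta=np.pi / 4), mode="same")
```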
This work on evaluating networks’ understanding of natural images is useful for justifying decisions such as architecture design, and for probing what knowledge a network has learnt.
Uncertainty awareness in machine learning models
Prof. Eyke Hüllermeier’s talk expanded on the concept of uncertainty awareness in ML models. ML classifiers tend to be overconfident in their predictions, which can lead to harmful behaviour, especially in safety-critical contexts. Ideally, we want an ML system to give us an unbiased and statistically reliable estimate of its uncertainty. In simpler words, we want our model to tell us “I am not sure about this”.
When dealing with uncertainty, it is important to distinguish aleatoric uncertainty, which stems from stochasticity in the data, from epistemic uncertainty, which is caused by a lack of knowledge. However, Prof. Hüllermeier explained that it is often hard to discern the source of uncertainty in real-world scenarios. The talk moved from a frequentist to a Bayesian perspective on uncertainty, and dived into different levels of probability estimation.
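One common way to make this distinction operational (our illustration, not material from the talk) is the entropy-based decomposition used with ensembles or Bayesian model averaging: the entropy of the averaged prediction gives the total uncertainty, the average entropy of the individual predictions estimates the aleatoric part, and their difference, the mutual information, estimates the epistemic part.

```python
import numpy as np

def uncertainty_decomposition(member_probs):
    """Entropy-based decomposition of predictive uncertainty for an ensemble.
    member_probs: array of shape (n_members, n_classes) for one instance."""
    eps = 1e-12
    mean_probs = member_probs.mean(axis=0)
    total = -np.sum(mean_probs * np.log(mean_probs + eps))          # total uncertainty
    aleatoric = -np.mean(np.sum(member_probs * np.log(member_probs + eps), axis=1))
    epistemic = total - aleatoric                                   # mutual information
    return total, aleatoric, epistemic

# Example: three ensemble members disagreeing about a 3-class instance
probs = np.array([[0.9, 0.05, 0.05],
                  [0.2, 0.7, 0.1],
                  [0.3, 0.3, 0.4]])
print(uncertainty_decomposition(probs))
```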
ML for Explainable AI
Explainability is a hot topic in machine learning nowadays. Prof. Cèsar Ferri presented his recent work on creating accurate and interpretable explanations using machine teaching, an area that looks for the optimal examples a teacher should use to get a learner to grasp a concept.
This is an interesting setting for machine learning models, in which the usual teacher and student roles are flipped. Prof. Ferri showed how the concept was applied to make noisy curves representing battery health monitoring more interpretable. This involves selecting an explanation that balances simplicity against persuasiveness to the user, in order to convey the information effectively.
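As a toy illustration of that trade-off (not the method presented in the talk), the sketch below scores candidate polynomial explanations of a noisy curve by their fit plus a complexity penalty and picks the simplest acceptable one; the data and penalty weight are made up.

```python
import numpy as np

# Synthetic noisy "battery health" curve, purely for illustration
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 100)
y = 1.0 - 0.3 * x - 0.2 * x**2 + rng.normal(0, 0.02, x.size)

def score(degree, complexity_penalty=0.01):
    """Trade off fidelity (mean squared error) against simplicity (degree)."""
    coeffs = np.polyfit(x, y, degree)
    residual = np.mean((np.polyval(coeffs, x) - y) ** 2)
    return residual + complexity_penalty * degree

# Choose the explanation with the best simplicity/fidelity balance
best_degree = min(range(1, 8), key=score)
print("chosen explanation: polynomial of degree", best_degree)
```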
Explainability in meta learning and multilabel calibration
Dr. Telmo Silva Filho expanded the concept of explainability introduced by Prof. Ferri to the meta-learning setting. The first paper he described proposed a novel method, Local Performance Regions, which extracts rules from a predetermined region of the data space and links them to an expected performance.
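A rough sketch of the flavour of this idea (our illustration, not the paper’s algorithm): fit a shallow decision tree to a base model’s per-instance successes and read off the tree’s rules as human-readable regions of the data space, each linked to an expected performance. The dataset and models are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Placeholder data and base model
X, y = make_classification(n_samples=2000, n_features=5, random_state=1)
base = RandomForestClassifier(random_state=1).fit(X[:1000], y[:1000])

# Per-instance success of the base model on held-out data
success = (base.predict(X[1000:]) == y[1000:]).astype(int)

# Shallow tree over the input space -> interpretable regions with expected performance
regions = DecisionTreeClassifier(max_depth=2, random_state=1).fit(X[1000:], success)
print(export_text(regions, feature_names=[f"x{i}" for i in range(5)]))
```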
He then moved on, together with Dr. Hao Sang, to a discussion of multilabel classification and calibration, and of why multilabel calibration is often necessary given the limitations of label-wise classification. The novelty of their approach lies in calibrating a joint label probability with a consistent covariance structure.
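For contrast, the sketch below shows the label-wise baseline that such work moves beyond: each label is calibrated independently (Platt scaling per label), which ignores the covariance between labels that a joint approach models. The scores and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic multilabel scores and ground truth, purely for illustration
rng = np.random.default_rng(0)
raw_scores = rng.normal(size=(500, 3))                       # classifier scores, 3 labels
true_labels = (raw_scores + rng.normal(size=(500, 3)) > 0)   # binary truth per label

# Label-wise calibration: one independent calibrator per label
calibrators = []
for j in range(raw_scores.shape[1]):
    cal = LogisticRegression().fit(raw_scores[:, [j]], true_labels[:, j].astype(int))
    calibrators.append(cal)

calibrated = np.column_stack(
    [cal.predict_proba(raw_scores[:, [j]])[:, 1] for j, cal in enumerate(calibrators)]
)
```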
Final Words
Again, we would like to emphasise our gratitude to all of the speakers and organisers of this event, and we look forward to the next Interactive AI event!