ELISE Wrap up Event

This blog post is written by AI CDT student, Jonathan Erskine

I recently attended the ELISE Wrap up Event in Helsinki, marking the end of just one of many programs of research conducted under the ELLIS society, which “aims to strengthen Europe’s sovereignty in modern AI research by establishing a multi-centric AI research laboratory consisting of units and institutes distributed across Europe and Israel”.

This page does a good job of explaining ELISE and ELLIS if you want more information.

Here I summarise some of the talks from the two-day event (in varying detail). I also provide some useful contacts and potential sources of funding (you can skip to the bottom for these).

Robust ML Workshop

Peter Grünwald: ‘e’ is the new ‘p’

P-values are an important indicator of statistical significance when testing a hypothesis: the calculated p-value must be smaller than some predefined threshold, typically $\alpha = 0.05$. This guarantees that Type 1 errors (where the null hypothesis is falsely rejected) occur with probability less than 5%.

“p-hacking” is a questionable (and sometimes malicious) practice whereby statistical significance can be manufactured by, for example:

  • stopping the collection of data once you get a P<0.05
  • analyzing many outcomes, but only reporting those with P<0.05
  • using covariates
  • excluding participants
  • etc.

Sometimes this is morally ambiguous. For example, imagine a medical trial where a new drug shows promising but not statistically significant results. If the test fails to reach significance, you can simply repeat the trial, sweep the new data in with the old, and repeat until you achieve the desired p-value; but trials are prohibitively expensive, and it is hard to know whether you are p-hacking or simply haven’t tested enough people to prove your hypothesis. This approach, called “optional stopping”, can violate Type 1 error guarantees: it is hard to have faith in your threshold $\alpha$ because the cumulative probability that individual trials are false positives keeps increasing.

Peter described the theory of hypothesis testing based on the e-value, a notion of evidence that, unlike the p-value, allows for “effortlessly combining results from several studies in the common scenario where the decision to perform a new study may depend on previous outcomes”.

Unlike the p-value, this proposed method is “safe under optional continuation” with respect to Type 1 error: no matter when the data collection and combination process is stopped, the Type 1 error probability is preserved. For singleton nulls, e-values coincide with Bayes factors.

In any case, general e-values can be used to construct Anytime-Valid Confidence Intervals (AVCIs), which are useful for A/B testing as “with a bad prior, AVCIs become wide rather than wrong”.

Compared to classical approaches, you need more data to apply e-values and AVCIs, with the benefit of performing optional stopping without introducing Type 1 errors: in the worst case you need more data, but on average you can stop sooner.
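To make this concrete, here is a minimal sketch (my own toy example, not from the talk) of an e-process: each observation contributes a likelihood ratio, the running product is an e-value, and by Ville's inequality you may monitor the data and stop whenever the e-value exceeds $1/\alpha$ without inflating the Type 1 error.

```python
import numpy as np

# Toy e-process for testing the null "coin is fair" (p = 0.5) against a
# fixed alternative p = 0.7. The running product of likelihood ratios is an
# e-value; rejecting whenever it exceeds 1/alpha is safe under optional
# stopping (Ville's inequality), unlike repeatedly re-checking a p-value.
rng = np.random.default_rng(1)
alpha = 0.05
p_null, p_alt = 0.5, 0.7

e_value = 1.0
for n in range(1, 10_001):
    x = rng.binomial(1, 0.7)  # data truly drawn from the alternative
    lr = (p_alt**x * (1 - p_alt)**(1 - x)) / (p_null**x * (1 - p_null)**(1 - x))
    e_value *= lr
    if e_value >= 1 / alpha:  # we may peek and stop here at any time
        print(f"Reject the null after {n} observations (e-value = {e_value:.1f})")
        break
```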

This is being adopted for online A/B testing, but is more challenging in expensive domains such as medical trials: you need to reserve more patients for your trial, even though you won’t need them all. That is a challenging sell, but the probabilities indicate that you will save time and effort in the majority of cases.

Other relevant literature pioneering this approach to significance testing: Waudby-Smith and Ramdas, JRSS B, 2024.

There is an R package here for anyone who wants to play with Safe Anytime-Valid Inference.

Watch the full seminar here:

https://www.youtube.com/watch?v=PFLBWTeW0II

Tamara Broderick: Can dropping a little data change your conclusions – A robustness metric


Tamara advocated the value of economics datasets as rich test beds for machine learning, highlighting that one can examine the data produced by economic trials with respect to robustness metrics and come to vastly different conclusions than those published in the original papers.

Focusing in, she described a micro-credit experiment in which economists ran randomised controlled trials on small communities, collecting approximately 16,500 data points, with the assumption that their findings would generalise to larger communities. But is this true?

When can I trust decisions made from data?

In a typical setup, you (1) run an analysis on a dataset, (2) come to some conclusion from that data, and (3) ultimately apply those conclusions to downstream data, which you hope is not so far out-of-distribution that your conclusions no longer apply.

Why do we care about dropping data?

Useful data analysis must be sensitive to some change in data – but certain types of sensitivity are concerning to us, for example, if removing some small fraction of the data $\alpha$ were to:

  • Change the sign of an effect
  • Change the significance of an effect
  • Generate a significant result of the opposite sign

Robustness metrics aim to give us higher or lower confidence in our ability to generalise. In the case described, sensitivity to dropping a small fraction of the data implies a low signal-to-noise ratio, and Tamara’s novel metric, the Approximate Maximum Influence Perturbation, quantifies this vulnerability to noise.

Can we drop one data point to flip the sign of our answer?

In reality, this is very expensive to test for any dataset where the sample size N is large: even dropping a single point in turn means creating N new datasets and re-running your analysis, and dropping larger subsets grows combinatorially (see below). Instead, we need an approximation.

Let the Maximum Influence Perturbation be the largest possible change induced in the quantity of interest by dropping no more than 100α% of the data.

From the paper:

We will often be interested in the set that achieves the Maximum Influence Perturbation, so we call it the Most Influential Set.

And we will be interested in the minimum data proportion α ∈ [0,1] required to achieve a change of some size ∆ in the quantity of interest, so we call that α the Perturbation-Inducing Proportion. We report NA if no such α exists.

In general, to compute the Maximum Influence Perturbation for some α, we would need to enumerate every data subset that drops no more than 100α% of the original data. And, for each such subset, we would need to re-run our entire data analysis. If m = ⌊αN⌋ is the largest number of points we may drop, then the number of such subsets is larger than $\binom{N}{m}$. For N = 400 and m = 4, $\binom{N}{m} = 1.05\times10^9$. So computing the Maximum Influence Perturbation in even this simple case requires re-running our data analysis over 1 billion times. If each data analysis took 1 second, computing the Maximum Influence Perturbation would take over 33 years. Indeed, the Maximum Influence Perturbation, Most Influential Set, and Perturbation-Inducing Proportion may all be computationally prohibitive even for relatively small analyses.

Further definitions are described better in the paper, but suffice to say the approximation succeeds in identifying where analyses can be significantly affected by a minimal proportion of the data. For example, in the Oregon Medicaid study (Finkelstein et al., 2012), they identify a subset containing less than 1% of the original data that controls the sign of the effects of Medicaid on certain health outcomes; dropping just 10 data points takes a result from significant to non-significant.
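To make the approximation concrete, here is a minimal sketch of the first-order influence idea for an OLS slope (my own illustration, not the authors' code): the change from dropping point i is approximated without re-fitting the model, and the Most Influential Set is estimated from the largest approximate changes.

```python
import numpy as np

# AMIP-style sketch for OLS: a first-order approximation says dropping point i
# changes the coefficients by roughly -(X'X)^{-1} x_i * residual_i, so we can
# rank points by influence without re-fitting, then drop the worst 100*alpha%.
rng = np.random.default_rng(0)
N = 1000
x = rng.normal(size=N)
y = 0.1 * x + rng.normal(scale=2.0, size=N)  # weak effect, lots of noise

X = np.column_stack([np.ones(N), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

delta = -(XtX_inv @ X.T) * resid  # delta[:, i] ~ change in beta if i is dropped
delta_slope = delta[1]

alpha = 0.01
m = int(np.floor(alpha * N))
most_influential = np.argsort(delta_slope)[:m]  # points pulling the slope down hardest
approx_slope = beta[1] + delta_slope[most_influential].sum()
print(f"full-data slope: {beta[1]:.4f}; "
      f"approx slope after dropping {m} points: {approx_slope:.4f}")
```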

Code for the paper is available at:

https://github.com/rgiordan/AMIPPaper/blob/main/README.md

An R version of the AMIP metric is available:

https://github.com/maswiebe/metrics.git

Watch a version of this talk here:

https://www.youtube.com/watch?v=7eUrrQRpz2w

Cedric Archambeau | Beyond SHAP: Explaining probabilistic models with distributional values

Abstract from the paper:

A large branch of explainable machine learning is grounded in cooperative game theory. However, research indicates that game-theoretic explanations may mislead or be hard to interpret. We argue that often there is a critical mismatch between what one wishes to explain (e.g. the output of a classifier) and what current methods such as SHAP explain (e.g. the scalar probability of a class). This paper addresses such gap for probabilistic models by generalising cooperative games and value operators. We introduce the distributional values, random variables that track changes in the model output (e.g. flipping of the predicted class) and derive their analytic expressions for games with Gaussian, Bernoulli and categorical payoffs. We further establish several characterising properties, and show that our framework provides fine-grained and insightful explanations with case studies on vision and language models.

Cedric described how SHAP values can be reformulated as random variables on a simplex, shifting from the weight of individual players to a distribution over transition probabilities. Following this insight, they generate explanations of transition probabilities instead of individual classes, demonstrating the approach on several interesting case studies. This work is in its infancy and has plenty of opportunity for further investigation.
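As background for readers less familiar with the game-theoretic framing, the sketch below computes classical Shapley values exactly for a toy three-player game: each player's value is its average marginal contribution over all orderings. The paper's distributional values generalise the scalar payoff to a random variable, which this toy deliberately does not attempt.

```python
from itertools import permutations

# Exact Shapley values for a toy 3-player cooperative game. SHAP applies the
# same averaging-of-marginal-contributions idea to the features of a model.
players = ["a", "b", "c"]
v = {  # characteristic function: payoff of each coalition
    (): 0.0, ("a",): 1.0, ("b",): 2.0, ("c",): 0.0,
    ("a", "b"): 4.0, ("a", "c"): 1.5, ("b", "c"): 2.5,
    ("a", "b", "c"): 5.0,
}

def payoff(coalition):
    return v[tuple(sorted(coalition))]

orders = list(permutations(players))
shapley = {p: 0.0 for p in players}
for order in orders:
    seen = []
    for p in order:
        shapley[p] += payoff(seen + [p]) - payoff(seen)
        seen.append(p)
shapley = {p: s / len(orders) for p, s in shapley.items()}
print(shapley)  # contributions sum to v(grand coalition) = 5.0
```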

Semantic, Symbolic and Interpretable Machine Learning Workshop

Nada Lavrač: Learning representations for relational learning and literature-based discovery

This was a survey of types of representation learning, focusing on Nada’s area of expertise in propositionalisation and relational data, Bisociative Literature-Based Discovery, and interesting avenues of research in this direction.

Representation Learning

Deep learning, while powerful and accurate, raises concerns over interpretability. Nada takes a step back to survey different forms of representation learning.

Sparse, Symbolic, Propositionalisation:

  • These methods tend to be less accurate but are more interpretable.
  • Examples include propositionalization techniques that transform relational data into a propositional (flat) format.

Dense, Embeddings:

  • These methods involve creating dense vector representations, such as word embeddings, which are highly accurate but less interpretable.

Recent work focuses on unified methods which incorporate the strengths of both approaches.

Hybrid Methods:

  • Incorporate Sparse and Deep methods
  • DeepProp, PropDRM, PropStar – methods discussed in their paper.

Representation learning for relational data can be achieved by:

  • Propositionalisation – transforming a relational database into a single-table representation; example: Wordification (see the toy sketch after this list)
  • Inductive logic programming
  • Semantic relational learning
  • Relational sub-route discovery (written by Nada and our own P. Flach)
  • A semantic subgroup discovery system, “Hedwig”, which takes as input training examples encoded in RDF and constructs relational rules by an effective top-down search of ontologies, also encoded as RDF triples.
  • Graph-based machine learning
    • data and ontologies are mapped to nodes and edges
    • In this example, gene ontologies are used as background knowledge for improving quality assurance of literature-based Gene Ontology Annotation
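Here is a toy sketch of the Wordification idea (my own illustration, with invented table and attribute names): each row of a related table is flattened into "table__attribute__value" tokens, so the relational database collapses into one bag-of-words document per training example, ready for any propositional learner (e.g. via TF-IDF).

```python
# Toy Wordification: flatten a one-to-many relational schema into documents.
customers = {1: {"age_band": "young"}, 2: {"age_band": "senior"}}
orders = [  # related table, many rows per customer
    {"customer": 1, "item": "book"},
    {"customer": 1, "item": "laptop"},
    {"customer": 2, "item": "book"},
]

def wordify(cid):
    words = [f"customer__age_band__{customers[cid]['age_band']}"]
    words += [f"orders__item__{o['item']}" for o in orders if o["customer"] == cid]
    return " ".join(words)

for cid in customers:
    print(cid, "->", wordify(cid))
# 1 -> customer__age_band__young orders__item__book orders__item__laptop
# 2 -> customer__age_band__senior orders__item__book
```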

These slides, although a little out of date, talk about a lot of what I have noted here, plus a few other interesting methodologies.

The GitHub Repo for their book contains lots of jupyter notebook examples.

https://github.com/vpodpecan/representation_learning.git

Marco Gori: Unified approach to learning over time and logic reasoning

I unfortunately found this very difficult to follow, largely due to my lack of subject knowledge. What Marco proposes requires an open mind: he re-imagines learning systems which do not need to store data to learn, and presents time as an essential component of learning for truly intelligent “Collectionless AI”.

I won’t try to rewrite his talk here, but he has a full classroom series available on Google, which he might give you access to if you email him.

Conclusions:

  • Emphasising environmental interactions – collectionless AI which doesn’t record data
  • Time is the protagonist: a higher degree of autonomy, focus of attention and consciousness
  • Learning theory inspired by theoretical physics and optimal control: Hamiltonian learning
  • Neuro-symbolic learning and reasoning over time: semantic latent fields and explicit semantics
  • Developmental stages and gradual knowledge acquisition

Contacts & Funding Sources

For Robust ML:

e-values, AVCIs:

Aaditya Ramdas at CMU

Peter Grünwald Hiring

For anyone who wants to do a Robust ML PhD, apply to work with Ayush Bharti : https://aalto.wd3.myworkdayjobs.com/aalto/job/Otaniemi-Espoo-Finland/Doctoral-Researcher-in-Statistical-Machine-Learning_R40167

If you know anyone working in edge computing who would like 60K to develop an enterprise solution, here is a link to the funding call: https://daiedge-1oc.fundingbox.com/ The open call starts on 29 August 2024.

If you’d like to receive monthly updates with new funding opportunities from Fundingbox, you can subscribe to their newsletter: https://share-eu1.hsforms.com/1RXq3TNh2Qce_utwh0gnT0wfegdm

Yoshua Bengio said he had fellowship funding but didn’t give out specific details, or I forgot to write them down… perhaps you can send him an email.

Spring Research Conference Day 1 – Professor Seth Bullock “AI for Collective Intelligence (AI4CI) Research Hub”

This blog post is written by AI CDT student, Fahd Abdelazim

Recent Artificial Intelligence (AI) advances have shown that the applications of AI extend far beyond increasing efficiency or convenience. It is now possible to use AI to tackle some of humanity’s most pressing challenges, from minimizing pandemics to managing extreme weather events and guiding sustainable urban development. However, addressing these issues requires specialized systems and skilled researchers to lead these innovations.

Recognizing the importance of tackling these challenges, the University of Bristol established the AI for Collective Intelligence (AI4CI) research hub, which will serve as a cornerstone for interdisciplinary collaboration, bringing together expert partners from across academia, government, charities and industry to harness the power of AI to address the complex challenges which lie at the intersection of humans and AI.

Examples include the personalization of treatment for diabetes patients and the use of data to enhance NHS policies for patients. The hub will also work on enhancing pandemic prediction and response, analysing previous pandemics and exploring how AI can help policy makers and healthcare professionals make swift and informed decisions in the future.

Climate change is another pressing issue, increasing the frequency and intensity of extreme weather events. AI can play a pivotal role in disaster management and mitigation by analysing real-time meteorological data to predict extreme weather events and provide early warnings. In the area of urban development, AI can allow for the creation of smarter and more resilient cities: by analysing population density, transportation routes and energy consumption, infrastructure can be optimized and public services improved.

It is clear that AI will play a pivotal role in building a better future, and it is necessary to fully capitalize on the potential of this technology. Through initiatives like the AI4CI research hub, we can harness the power of AI to address the challenges that humanity will face and create a better, more sustainable world for future generations.

Spring Research Conference Day 1 – Isabel Potter “Artists are not Technologists – AI for Scenography “

This blog post is written by AI CDT student, Lucy Farnik

Isabel Potter gave a talk at this year’s Spring Research Conference about their work on applying AI to scenography, which they are exploring in their PhD. They chose this research area partly due to their extensive experience in the creative arts, having been involved with theatre since age 14. They have also founded their own company in this space and are taking on various freelance projects in theatre alongside their PhD.

Isabel’s talk was built on one central theme — artists are not technicians. At the moment, generative AI is getting closer to being able to automate parts of scenography, from creating background music to staging. However, many of these tools are made for people with a STEM background and use terminology to match. For example, tools which can be used for immersive technology in the arts include Unreal Engine, which uses many computer vision and mathematics terms. One may contrast this with tools like Adobe Photoshop, whose terms, such as “paintbrush tool”, come from the vocabulary artists use on a daily basis.

Isabel is trying to reduce the barrier to entry for artists. They are specifically focusing on lighting design, as this is the most under-explored area of immersive technology for scenography and is also the area that they have the most experience working in. At the moment, prompting large language models to create diagrams such as lighting plots leads to results which are not yet usable, but the step of translating lighting ideas into programs which can be loaded into a lighting desk is already somewhat doable by existing foundation models. They are currently exploring this as a starting point while optimizing for ease of use by a non-technical audience.

BMVA Symposium 2024

This blog post is written by AI CDT student, Phillip Sloan

I had the opportunity to go to the British Machine Vision Association 2024 Symposium, which took place at the British Computer Society in London on the 17th of January, 2024. The symposium was chaired by Dr. Michael Wray from the University of Bristol, Dr. Davide Moltisanti from the University of Bath, and Dr. Tengda Han from the University of Oxford.

The day kicked off with three invited speakers, the first being Professor Hilde Kuehne from the University of Bonn and the MIT-IBM Watson AI Lab. Her presentation was on vision-language understanding for video. She started with an introduction to the field, how it began and how it has adapted over time, before moving on to current work by her and her students, including the paper “MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge” by Wei Lin.

Her final remarks outlined potential issues for evaluation within the field. When the field was more focused on classification, simple labels could easily be judged right or wrong; now that it has moved to vision-language retrieval, the ground truth might not actually be the best or most relevant caption contained within the dataset, a hurdle that must be overcome.

The second invited speaker, Professor Frank Keller from the University of Edinburgh, had a very interesting talk on the topic of visual story generation, a domain where a coherent narrative is constructed to describe a sequence of images often related to the characters within the images. He broke his talk down into three sections, first introducing the field more concretely before going on to explain two different areas: Characters in visual stories and planning in visual stories.

He emphasised that the characters within a story are important, and so character detection and grounding are important in order to generate a fluent story. To help improve this aspect, Prof. Keller and his students introduced a dataset called VIST-Character that contains character groundings and visual and textual character co-reference chains. To help with planning the stories, Prof. Keller explained that their current methods utilise a blueprint, which focuses on localising characters in the text and images before relating them together. These blueprints are used as a guide to generate the story.

He explained that the domain is more difficult than image captioning: you have characters, and you are required to produce a fluent sequence of text. This renders current NLP evaluation metrics such as BLEU poor measures for this task, since the goal is to generate interesting, coherent and grounded stories rather than exact matches to the ground truth. His research employed human evaluators, which is an interesting way to add humans to the loop.

Following Prof. Keller’s talk we had a break for poster sessions, before coming back for talks from a select few people who brought posters to the symposium, including talks on the explainability of autonomous driving and on evaluating the reliability of LLMs in the face of adversarial attacks.

After lunch we had talks from the remaining two invited speakers. Professor Andrew Zisserman from the University of Oxford presented research on training visual language models to generate audio descriptions, helping people who are blind or partially sighted to enjoy movies.

The talk started with a brief introduction to the field, then outlined the currently available datasets, explaining that they were not sufficient; a new dataset was therefore created from AudioVault, by processing its audio into audio descriptions and subtitles.

The talk walked us through a basic model architecture and pointed out its limitations, including the fact that character names were often not used (often just “he” or “it”) and that descriptions were often incomplete. Prof. Zisserman explained that, to combat these limitations, they took two research directions: improving “the who”, by providing supplementary information about the characters within the film, and “the what”, by improving the model’s ability to provide better context using pre-trained video-language models.

Finally, he discussed how evaluation measures such as CIDEr are not fit for the purpose of audio description generation, explaining that large language models are starting to be used as an evaluation tool in this domain.

The second talk of the afternoon was on vision-language learning with limited and no supervision, presented by Dr. Yuki Asano from the University of Amsterdam, who asked the question: “Why care about Self-supervised Learning ideas in the age of CLIP et al?”

He presented three works undertaken by him and his team. The first concerned the similarities of unimodal representations across language and vision, demonstrating a model that uncoupled image-language pairs and trained on them in an unsupervised fashion to reach 75% of the performance of the CLIP model. “Localisation in visual language models” was the second topic reviewed, a task that vision-language models are not traditionally good at; his team’s solution was to unlock localisation abilities in frozen VLMs by adding a lightweight module called the positional insert module (PIN).

The final part of the talk was on image encoder pretraining from a single video. Their model, called DoRA (discover and track), has the high-level idea of tracking multiple objects across time and enforcing invariance of features over time. They evaluated the model against DINO, finding it to perform better on various datasets.

After a coffee break, we had some shorter talks from people presenting posters at the event, including a radiology report generation presentation which was particularly relevant to me. CXR-IRGen was proposed, a diffusion model used to generate extra image-report pairs, which could help alleviate the lack of data within the field. Kevin Flanagan, a fellow CDT member, also presented his research into learning temporal sentence grounding from narrated egovideos, showcasing his method, CliMer, which merges clips from rough narration timestamps and trains in a contrastive manner.

Throughout the day we were encouraged to use Padlet to put our thoughts and questions down. After the talks had concluded there was a final informal Q&A session into the future of the vision-language domain which used our Padlet responses as talking points. We discussed points including the need for better evaluation metrics (which was a big theme from a lot of talks), the role of academia in the age of large language models and utilising NLP to make vision models explainable.

A very interesting and thought provoking day! There were several people working within medical image analysis so it was great to network and discuss ideas. Thank you to the speakers and people who presented for their contributions and to the chairs and organisers of the event for making it possible!


Conference on Information and Knowledge Management (CIKM) – Matt Clifford

This blog post is written by AI CDT student, Matt Clifford

At the end of October ’23, I attended CIKM in Birmingham to present our conference paper. The conference was spread across three days, with multiple parallel tracks each day focusing on specific topic areas. CIKM is a medium-sized conference, which struck a good balance: large enough to meet lots of researchers, but not so overwhelmingly big that you feel disconnected from the conference. CIKM spans many topics surrounding data science/mining, AI, ML, graph learning, recommendation systems and ranking systems.

This was my first time visiting Birmingham, dubbed by some the “Venice of the North”. Despite definitely not being in the north and resembling very little of Venice (according to some Venetians at the conference), I was overall very impressed with Birmingham. It has a much friendlier hustle and bustle compared to bigger cities in the UK, and the mixture of grand Victorian buildings interspersed with contemporary and art-deco architecture makes for an interesting and welcoming cityscape.

Our Paper

Our work focuses on explainable AI, which helps people to get an idea of the inner workings of a highly complicated AI system. In our paper we investigate one of the most popular explainable AI methods, LIME. We discover situations where AI explanation systems like LIME become unfaithful, with the potential to misinform users. In addition, we illustrate a simple method to make an AI explanation system like LIME more faithful.

This is important because many users take the explanations provided by off-the-shelf methods, such as LIME, as being reliable. We discover that the faithfulness of AI explanation systems can vary drastically depending on where and what a user chooses to explain. From this, we urge users to understand whether an AI explanation system is likely to be faithful or not. We also empower users to construct more faithful AI explanation systems with our proposed change to the LIME algorithm.

You can read the details of our work in our paper: https://dl.acm.org/doi/10.1145/3583780.3615284
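For readers who haven’t used it, here is a minimal sketch of running off-the-shelf LIME on tabular data, i.e. the vanilla method whose faithfulness our paper analyses (our proposed modification is described in the paper itself):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Fit a black-box model, then ask LIME for a local explanation of one instance.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())  # top-5 local feature attributions for this instance
```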

Interesting Papers

At the conference there was a lot of interesting work being presented. Below I’ll point towards some of the papers which stood out most to me, from a variety of topic areas.

Fairness

  • “Fairness through Aleatoric Uncertainty” – focuses on improving model fairness in regions of aleatoric uncertainty, where it is not possible to increase model utility, so there is less of a fairness/utility tradeoff – https://dl.acm.org/doi/10.1145/3583780.3614875
  • “Predictive Uncertainty-based Bias Mitigation in Ranking” – reduces bias in ranking priority by reshuffling results based on the uncertainty of their rank position – https://dl.acm.org/doi/abs/10.1145/3583780.3615011

Explainability

Counterfactuals

Healthcare

Data Validity

Clustering package in Python

A group at the conference maintains a Python package which neatly contains many state-of-the-art clustering algorithms. Here is the link to the GitHub: https://github.com/collinleiber/ClustPy. Hopefully some people find it useful!


BIAS ’23 – Day 3: Prof. Kerstin Eder talk – (Trustworthy Systems Laboratory, University of Bristol) The AI Verification Challenge

This blog post is written by AI CDT student, Isabella Degen

A summary of Prof. Kerstin Eder’s talk on the well-established procedures and practices of verification and validation (V&V) and how they relate to AI algorithms. The objective is to inspire readers to apply better V&V processes to their AI research.

Verification is the process used to gain confidence in the correctness of a system compared to its requirements and specifications. Validation is the process used to assess if the system behaves as intended in its target environment. A system can verify well, meaning it does what it was specified to do, and not validate well, meaning it does not behave as intended.

V&V are challenging for systems that fully or partially involve AI algorithms despite V&V being a well-established and formalised practice. Many AI algorithms are black boxes that offer no transparency about how the algorithm operates. They respond with multiple correct answers to similar or even the same input. AI algorithms are not deterministic by design. Ideally, they can handle new situations well without needing to be trained for all situations. Therefore, accurately and exhaustively listing all the requirements against which these algorithms need to be verified is practically impossible.

V&V methods for complex robotic systems like automated vehicles are well established. Automated vehicles need to be capable of operating in an environment where unexpected situations occur. Various ISO standards (ISO 13485 – Medical Devices Quality Management, ISO 10218-1 – Robots and Robotic Devices, ISO 12207 – Systems and Software Engineering) describe different V&V practices required for software, systems and devices. These standards expect the use of multiple processes and practices to meet the required quality: no one practice covers the full extent of V&V, and each practice has shortcomings. The three techniques for V&V are formal verification, simulation-based verification and experiments [3]. The image below arranges these techniques by how realistic and coverable they are, where coverability refers to how much of the system a technique can analyse [1].

The image shows the framework for corroborative V&V [1].

An approach for simulation-based testing is coverage-driven verification (CDV): a two-tiered test generation approach, in which abstract test sequences are computed first and then concretised, has been shown to achieve a high level of automation [2]. It is important to note that coverage includes code coverage, structural coverage (e.g. employing finite state machines) and functional coverage (including requirements and situations).

The images show the CDV process (left) and its translation to an automated vehicle scenario (right) [2].
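As a flavour of CDV, here is a deliberately simplified sketch (my own, not the tooling from the cited papers): keep generating abstract test situations until every functional-coverage bin has been hit; a real flow would then concretise each abstract test into simulator parameters and run it.

```python
import random

# Coverage-driven test generation in miniature: functional coverage is a grid
# of (vehicle speed, pedestrian behaviour) bins that the tests must all hit.
random.seed(0)
speed_bins = ["slow", "medium", "fast"]
pedestrian_bins = ["none", "waiting", "crossing"]
coverage = {(s, p): 0 for s in speed_bins for p in pedestrian_bins}

tests = []
while any(count == 0 for count in coverage.values()):
    situation = (random.choice(speed_bins), random.choice(pedestrian_bins))
    coverage[situation] += 1
    tests.append(situation)  # a real flow would concretise and execute this

print(f"{len(tests)} abstract tests generated to cover all {len(coverage)} bins")
```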

Belief-desire-intention (BDI) agents used as models can further generate tests. These agents achieve coverage that is higher than or equivalent to model-checking automata, and they can emulate the agency present in human-robot interactions; however, the cost of learning a belief set has to be considered [3]. Similarly, software testing agents can be used to generate tests for simulation-based automated vehicle verification. Such an agency-directed approach is robust and efficient, generating twice as many effective tests as pseudo-random test generation. Moreover, these agents can be encoded to behave naturally without compromising the effectiveness of test generation [4].

The hope is that, inspired by these techniques for testing robotic systems, we will promote V&V to first-class citizen status when designing and implementing AI algorithms. V&V for AI algorithms requires innovation and a creative combination of existing techniques, like intelligent agency-based test generation. The reward will be increased trust in AI algorithms.

References:

[1] Webster, Matt, et al. “A corroborative approach to verification and validation of human–robot teams.” The International Journal of Robotics Research 39.1 (2020): 73-99. https://journals.sagepub.com/doi/full/10.1177/0278364919883338

[2] Araiza-Illan, Dejanira, et al. “Systematic and realistic testing in simulation of control code for robots in collaborative human-robot interactions.” Towards Autonomous Robotic Systems: 17th Annual Conference, TAROS 2016, Sheffield, UK, June 26–July 1, 2016, Proceedings 17. Springer International Publishing, 2016. https://link.springer.com/chapter/10.1007/978-3-319-40379-3_3 

[3] Araiza-Illan, Dejanira, Anthony G. Pipe, and Kerstin Eder. “Model-based test generation for robotic software: Automata versus belief-desire-intention agents.” arXiv preprint arXiv:1609.08439 (2016). https://arxiv.org/abs/1609.08439

[4] Chance, Greg, et al. “An agency-directed approach to test generation for simulation-based autonomous vehicle verification.” 2020 IEEE International Conference on Artificial Intelligence Testing (AITest). IEEE, 2020. https://arxiv.org/abs/1912.05434


Essai 2023 Summer School – Matt Clifford

This blog post is written by AI CDT student, Matt Clifford

ESSAI 2023 – https://essai.si/

A few of us from the CDT – me (Matt), Jonny and Rachael – attended the ESSAI summer school on the 24th–28th of July 2023. ESSAI is the first European summer school on Artificial Intelligence and was held in Ljubljana, Slovenia. There were a variety of interesting topics and classes on offer (https://essai.si/schedule/), but here I’ll share some of the classes that I attended. I’ll keep the information on each topic brief, but feel free to reach out to me if you would like to chat through any of the topics which might be useful to you or if you would like to know more!

AutoML – https://www.automl.org/

Optimise machine learning hyperparameters and neural architectures automatically using various techniques (Bayesian optimisation etc.). Python packages for sklearn and pytorch: https://pypi.org/project/smac/

https://github.com/automl/Auto-PyTorch

Very useful when you want a more objective training approach which will save you time, computation and more importantly frustration!
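To give a flavour of the idea with plain scikit-learn (SMAC and Auto-PyTorch perform smarter Bayesian search, but the spirit is the same): define a search space and let the tuner pick hyperparameters instead of hand-tweaking them.

```python
from scipy.stats import loguniform
from sklearn.datasets import load_digits
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Automated hyperparameter search: sample 20 configurations from a log-uniform
# space and keep the best one by 3-fold cross-validation.
X, y = load_digits(return_X_y=True)
search = RandomizedSearchCV(
    SVC(),
    param_distributions={"C": loguniform(1e-2, 1e3), "gamma": loguniform(1e-4, 1e-1)},
    n_iter=20,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```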

Learning Beyond Static Datasets – https://owll-lab.com/

Exploring mechanisms to mitigate catastrophic forgetting when learning a new task in ML.

Topics related to: transfer learning, active learning, continual learning, lifelong learning, curriculum learning, open world learning, knowledge distillation.

A nice survey paper to map out the whole landscape – https://www.sciencedirect.com/science/article/pii/S089360802300014X?via%3Dihub

Uncertainty Quantification

Adding uncertainty estimates to a model (important, with neural networks being so overly confident!). Methods can either be inherent (Bayesian NNs etc.) or post hoc (calibration, ensembling, Monte-Carlo dropout), and can disentangle aleatoric and epistemic uncertainty measures.
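A minimal sketch of one of the post-hoc methods mentioned, Monte-Carlo dropout: keep dropout active at test time and treat the spread of repeated stochastic forward passes as a rough predictive uncertainty estimate.

```python
import torch
import torch.nn as nn

# MC dropout: dropout stays on at "test" time, so each forward pass samples a
# slightly different sub-network; the spread of outputs signals uncertainty.
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 1)
)

x = torch.randn(1, 10)
model.train()  # keeps dropout stochastic during prediction
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

print(f"mean prediction: {samples.mean():.3f}, std (uncertainty): {samples.std():.3f}")
```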

Fairness & Privacy –

https://aif360.readthedocs.io/en/latest/

https://fairlearn.org/

The president of Slovenia (plus her not-so-inconspicuous bodyguards) attended these talks, which was a bit of a surprise!

Explored navigating the somewhat conflicting landscape of statistical fairness, which tries to ensure groups of people have the same model statistics. Picking which statistics, however, is not so easy, and it’s impossible to ensure all statistics match in real-life scenarios – https://arxiv.org/pdf/2304.06057.pdf
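As a toy illustration of one such group statistic, demographic parity (my own example, not from the lecture): compare positive-prediction rates across groups, bearing in mind that matching every fairness statistic simultaneously is generally impossible.

```python
import numpy as np

# Demographic parity check: do groups "a" and "b" receive positive
# predictions at the same rate?
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rates = {g: preds[group == g].mean() for g in np.unique(group)}
print(rates, "parity gap:", abs(rates["a"] - rates["b"]))
```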

Also looked at privacy through anonymity (k-anonymity, l-diversity, t-closeness) and differential privacy. I won’t go into details, but thought I’d mention some of the main techniques currently used in academia and industry.
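And a toy k-anonymity check (again my own illustration): a table is k-anonymous if every combination of quasi-identifier values is shared by at least k records.

```python
from collections import Counter

# Each record is reduced to its quasi-identifiers (birth decade, postcode
# prefix); k is the size of the smallest equivalence class.
records = [
    ("1980s", "BS1"), ("1980s", "BS1"), ("1990s", "BS2"),
    ("1990s", "BS2"), ("1990s", "BS2"),
]
k = min(Counter(records).values())
print(f"table is {k}-anonymous over (birth decade, postcode prefix)")
```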

Again, let me know if you want to go into the details of anything that is useful or interesting to you!

Also, a side note: Slovenia is an amazingly beautiful country, and I can very much recommend it to anyone thinking of going! Here are a few photos:


AI UK 2023 Conference – Rachael Laidlaw

This blog post is written by AI CDT student, Rachael Laidlaw

Last month, I took the exciting opportunity to attend AI UK 2023, a large-scale event organised by The Alan Turing Institute. It was my first conference outside of Bristol, held in the heart of London at the Queen Elizabeth II Centre – right by Westminster Abbey and Big Ben – and it promised to offer a diverse programme of activities with a broad range of interactive content. As such, the sessions were packed with novel material delivered by leading international thinkers across multiple disciplines, resulting in an in-depth exploration of how data science and AI can be used to solve real-world challenges.

On the day

After a short walk to the venue from my hotel in Piccadilly Circus, I signed in and collected my demonstrator lanyard before heading up to the third floor of the building to meet my colleagues from the Jean Golding Institute. We would be spending the day manning a stall for the Local Air initiative in the environmental section of the Fleming room, engaging with attendees from both academia and industry about a pollution monitoring system designed to be mounted on e-scooters.

Highlights included:

  • using ground coffee to simulate particulate matter in the air and generate a live response from the prototype which was shown on the screen behind us,
  • contemplating alternative applications for the noise-pollution sound sensors (i.e., for use in the study of bats) with representatives from the UK Centre for Ecology and Hydrology, and
  • considering media coverage possibilities for the project with a journalist from the Financial Times.

Into the afternoon

When lunchtime arrived, I began circling the floor to visit the other stalls. Whilst wandering, I encountered displays of lots of innovative concepts, some of my favourites being:

  • a family of domestic social robot pets developed by the company Konpanion to alleviate loneliness,
  • progress on the tool BoneFinder, created by academics at the University of Manchester for use in clinical practice to segment skeletal structures,
  • a cardiac digital twin produced at King’s College London,
  • SketchX’s headset that gives you the ability to build your own metaverse from rough virtual drawings, and
  • the Data Hazards project, complete with holographic stickers and hi-vis jackets worn by another University of Bristol team to really bring data-oriented risk assessments to life.

Of the above, BoneFinder stood out to me in particular, owing to the fact that my current specialist focus is ecological computer vision, and, thus, seeing the same sort of technique being used for a medical application piqued my interest.

The talks

During a quiet period at the stall, I jumped at the chance of sitting in on a very well-attended talk by Gary Marcus from NYU on the power of ChatGPT and the unknowns surrounding the future of such pieces of technology. This was especially thought-provoking and relevant to my ongoing work towards a potential CHI publication.

After re-energising with some delicious cookies in the break, I also made it to an insightful panel discussion on shaping public perceptions of artificial intelligence, featuring Tracey Brown (the director of Sense About Science), Tania Duarte (the co-founder and CEO of We and AI) and David Leslie (a specialist in ethics and responsible innovation). This reminded me of the importance of keeping stakeholders in mind during all stages of research.

Closing moments

To round off the day, everyone came together to mingle and expand their networks over canapés and a significant amount of complimentary wine. We then gathered our belongings and headed out for dinner and to be tourists in London for the evening.

All in all, it was an incredibly fun and informative experience alongside a great team, and I’m already looking forward to future conferences!

Highlights from NeurIPS 2022 and the 2nd Interactive Learning for NLP Workshop – Dr Edwin Simpson

This blog post is written by lecturer in Computer Science, Dr Edwin Simpson

In November I was lucky enough to attend NeurIPS 2022 in person in New Orleans, and to take part as a co-organiser of InterNLP, our second interactive learning for NLP workshop. I had many interesting discussions around posters, talks and coffee breaks, and took loads of photos of posters. It was hard to write up my highlights without the post becoming endlessly long, so here is my attempt to pick out a handful of papers that caught my eye and to tell you a little bit about how our workshop unfolded.

Main Conference

One topic generating a lot of buzz was in-context learning, where language models learn to perform new tasks without updating their weights, from examples given in the model’s input prompt. Models like GPT-3 can perform in-context learning from small numbers of examples. Garg et al. presented an interesting paper that tries to understand what classes of functions can be learned in this way [1]. They were able to train Transformers that learn function classes including linear functions and two-layer neural networks.


However, for few-shot learning, in-context learning may not be the best solution: Liu et al. [2] showed that fine-tuning a model by introducing a small number of additional weights can be cheaper and produce more accurate models.


Another interesting NLP paper, from Jian, Gao and Vosoughi [3], learns sentence embeddings using image and audio data alongside a text training set. The method works by creating pairs of images (or audio clips) using data augmentation, which are then embedded and fed through a BERT-like transformer to provide additional data for contrastive learning. This is especially useful for low-resource languages and domains, and it is really interesting that we can learn from different modalities without any parallel examples.

Many machine learning researchers are concerned with models that produce well-calibrated probabilities, but what difference does calibration make to end users? Vodrahalli, Gerstenberg and Zou [4] investigated a binary prediction task in which a classifier provides advice to a user, along with its confidence. They found that exaggerating the model’s confidence led the user to perform better: although the classifier was uncalibrated and had higher training loss, the complete human-AI system was more effective. This shows how important it is for ML researchers to consider real-world use cases for their models.

Sticking with the topic of uncertainty, Bayesian deep learning aims to quantify uncertainty in complex neural network models, but it is challenging to apply as it is difficult to specify a suitable prior distribution. Ideally, we’d specify a prior over the functions that the network encodes, rather than over individual network weights. Tran et al. introduced a method for setting functional priors in Bayesian neural networks by aligning them with Gaussian processes. It will be interesting to try out their approach in some deep learning applications where quantifying uncertainty is important.

At the poster sessions, I also enjoyed learning about the wide range of new benchmarks and datasets that will enable lots of exciting future work. For example, one that relates to my own work and that I’d like to make use of is BigBIO [5], which makes a number of biomedical NLP datasets more accessible and will hopefully lead to more reproducible results.

Juho Kim, an associate professor at the Korea Advanced Institute of Science and Technology (KAIST), gave a keynote on his vision of Interaction-Centric AI. He called on AI researchers to move beyond data-centric or model-centric research by rethinking the complete AI research process around the user experience of AI. Juho’s talk gave examples of how an interaction-centric approach may affect the way we evaluate models, which cases we focus on when trying to improve accuracy, how to incentivise users to engage with AI, and several other aspects of interaction-centric AI that his lab has been working on. He demonstrated Stylette, a tool that lets you use natural language to change the appearance of a website. The keynote ended with a call to action for AI researchers to rethink performance metrics, the design process and collaboration, particularly with HCI researchers.

Geoff Hinton appeared remotely from home to present the Forward-Forward algorithm, a method for training neural networks without backpropagation that could give insights into how learning in the cortex takes place. His experiments showed some promising early results, and in the Q&A Geoff talked about coding the experiments himself. A preliminary arXiv paper is now out [6].

1. Garg et al., What Can Transformers Learn In-Context? A Case Study of Simple Function Classes, https://arxiv.org/abs/2208.01066

2. Liu et al., Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning, https://arxiv.org/abs/2205.05638

3. Jian, Gao and Vosoughi, Non-Linguistic Supervision for Contrastive Learning of Sentence Embeddings, https://arxiv.org/pdf/2209.09433.pdf

4. Vodrahalli, Gerstenberg and Zou, Uncalibrated Models Can Improve Human-AI Collaboration, https://arxiv.org/abs/2202.05983

5. Fries et al., BigBIO: A Framework for Data-Centric Biomedical Natural Language Processing, https://arxiv.org/abs/2206.15076

6. Hinton, The Forward-Forward Algorithm: Some Preliminary Investigations, https://arxiv.org/abs/2212.13345

InterNLP Workshop

2022 was the second edition of the InterNLP workshop, and we were very happy that the community grew, this year with 20 accepted papers and a chance to meet in person! Some of the videos are on YouTube at https://www.youtube.com/@InterNLP; others will hopefully be available soon in the NeurIPS archives.

The programme was packed with impressive invited talks from Karthik Narasimhan (Princeton), John Langford (Microsoft), Dan Weld (UWashington), Anca Dragan (UC Berkeley) and Aida Nematzadeh (DeepMind). To pick out just a couple, Karthik presented recent work on semantic supervision [1] for few-shot generalization and personalization, which learns from semantic descriptions of classes, providing a way to instruct models through text. Anca Dragan talked about interactive agents that go beyond following instructions about how exactly to perform a task, to inferring the user’s goals, preferences, and constraints. She emphasized that the way people refer to desired actions provides important information about their preferences, and therefore we can infer, from a user’s language, reward functions that reflect their preferences. Aida Nematzadeh compared self-supervised pretraining to language learning in childhood, which involves interacting with other people. Her talk focused on the evaluation of neural representations, and she called for real-world evaluations, strong baselines and probing to provide a much more thorough way of uncovering the strengths and weaknesses of pretrained models.

The contributed talks and posters showcased a wide range of work from human-in-the-loop learning techniques to software libraries and benchmark datasets. For example, PyTAIL [2] is a Python library for active learning that collects new labelling rules and customizes lexicons as well as collecting labels. Mohanty et al. [3] developed the IGLU challenge, in which an agent has to perform tasks by following natural language instructions; their presentation at InterNLP explained how they collected the data. The RL4M library [4] provides a way to optimize language generation models using reinforcement learning, as a way to adapt to human preferences; the paper [4] also presents a benchmark, GRUE, for evaluating RL methods for language generation. Majumder and McAuley [5] investigate the use of explanations to debias NLP models while maintaining a good trade-off between predictive performance and bias mitigation.


At the end of the day, I got to ask a lot of questions to some very smart people during our panel discussion – thanks to John Langford, Karthik Narasimhan, Aida Nematzadeh, and Alane Suhr for taking part, and thanks to the audience for some great interactions too. The wide-ranging discussion touched on the evaluation of interactive systems (how to use static data for evaluation, evaluating how well models adapt to user input), working with researchers and users from other fields, different forms of interaction besides language, and challenges that are specific to interactive NLP.

We plan to be back at a future conference (not sure which one yet!) for the next iteration of InterNLP. Large language models and in-context learning are clearly revolutionizing this space in some ways, but I’m convinced we still have a lot of work to do to design interactive machine learning systems that are accountable, reliable, and require fewer resources.

Thank you to Nguyễn Xuân Khánh for letting us include his InterNLP workshop photos.

1. Aggarwal, Deshpande and Narasimhan, SemSup-XC: Semantic Supervision for Zero and Few-shot Extreme Classification, https://arxiv.org/pdf/2301.11309.pdf

2. Mishra and Diesner, PyTAIL: Interactive and Incremental Learning of NLP Models with Human in the Loop for Online Data, https://internlp.github.io/documents/2022/papers/24.pdf

3. Mohanty et al., Collecting Interactive Multi-modal Datasets for Grounded Language Understanding, https://internlp.github.io/documents/2022/papers/17.pdf

4. Ramamurthy et al., Is Reinforcement Learning (Not) for Natural Language Processing?: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization, https://arxiv.org/abs/2210.01241

5. Majumder and McAuley, InterFair: Debiasing with Natural Language Feedback for Fair Interpretable Predictions, https://arxiv.org/abs/2210.07440

2023 AAAI Conference Blog – Amarpal Sahota

This blog post is written by AI CDT Student Amarpal Sahota

I attended the 37th AAAI Conference on Artificial Intelligence from the 7th to the 14th of February 2023. This was my first in-person conference, and I was excited to travel to Washington D.C.

The conference schedule included labs and tutorials on February 7th–8th and the main conference on February 9th–12th, followed by the workshops on February 13th–14th.

Arriving and Labs / Tutorials

I arrived at the conference venue on 7th February to sign in and collect my name badge. The venue (Walter E. Washington Convention Center) was huge and had within it everything you could need, from areas to work or relax to restaurants, and of course many halls and lecture theatres to host talks.

I was attending the conference to present a paper at the Health Intelligence Workshop. Two of my colleagues from the University of Bristol (Jeff and Enrico) were also attending to present at this workshop (we are pictured together below!).

The tutorials were an opportunity to learn from experts on topics that you may not be familiar with yourself. I attended tutorials on Machine Learning for Causal Inference, Graph Neural Networks and AI for epidemiological forecasting.

The AI for epidemiological forecasting tutorial was particularly engaging. The speakers were very good at giving an overview of historical epidemiological forecasting methods and recent AI methods used for forecasting, before introducing state-of-the-art AI methods that combine machine learning with our knowledge of epidemiology. If you are interested, the materials for this tutorial can be accessed at https://github.com/AdityaLab/aaai-23-ai4epi-tutorial.

Main conference Feb 9th – Feb 12th

The main conference began with a welcome talk in the ‘ball room’, set up with a stage and enough chairs to seat thousands. The welcome talk included an overview of the different tracks within the conference (AAAI Conference on AI, Innovative Applications of AI, Educational Advances in AI), statistics around conference participation and acceptance, and an introduction to the conference chairs.

The schedule for the main conference each day included invited talks and technical talks running from 8:30 am to 6pm. Each day this would be followed by a poster session from 6pm – 8pm allowing us to talk and engage with researchers in more detail.

For the technical talks I attended a variety of sessions, from Brain Modelling to ML for Time-Series / Data Streams and Graph-based Machine Learning. Noticeably, not all of the sessions were in person: they were hybrid, with some speakers presenting online. This was disappointing but understandable given visa restrictions for travel to the U.S.

I found that many of the technical talks became difficult to follow very quickly, as they were largely aimed at experts in the respective fields. I particularly enjoyed some of the time-series talks, as these relate to my area of research. I also enjoyed the poster sessions, which allowed us to talk with fellow researchers in a more relaxed environment and ask questions directly to understand their work.

For example, I enjoyed the talk ‘SVP-T: A Shape-Level Variable-Position Transformer for Multivariate Time Series Classification’ by PhD researcher Rundong Zhuo. At the poster session I was able to follow up with Rundong to ask more questions and understand his research in detail. We are pictured together below!

Workshops Feb 13th – 14th

I attended the 7th International Workshop On Health Intelligence from 13th to 14th February. The workshop began with opening remarks from the co-chair Martin Michalowski, before a talk by our first keynote speaker, Professor Randi Foraker, who spoke about her research relating to building trust in AI for improving health outcomes.

This talk was followed by paper presentations, with papers on related topics grouped into sessions. My talk was in the second session of the day, titled ‘Classification’. My paper (pre-print here) is titled ‘A Time Series Approach to Parkinson’s Disease Classification from EEG’. The presentation went reasonably smoothly and I had a number of interesting questions from the audience about applications of my work and the methods I had used. I am pictured giving the talk below!

The second half of the day focused on the hackathon, whose theme was biological age prediction. Biological ageing is a latent concept with no agreed-upon method of estimation. Biological age tries to capture how much you have aged in the time you have been alive: certain factors such as stress and poor diet can be expected to age individuals faster, so two people of the same chronological age may have different biological ages.

The hackathon opened with a talk on biological age prediction by Morgan Levine (a founding Principal Investigator at Altos Labs). Our team for the hackathon included four people from the University of Bristol: myself, Jeff, Enrico and Maha. Jeff (pictured below) gave the presentation for our team. We would have to wait until the second day of the workshop to find out if we had won one of the three prizes.

The second day of the workshop consisted of further research talks, a poster session and an awards ceremony in the afternoon. We were happy to be awarded the 3rd place prize of $250 for the hackathon! The final day concluded at around 5pm, so I said my goodbyes and headed to the Washington D.C. airport for my flight back to the U.K.