Latest Posts

BIAS ’24 (Summer School) – Unraveling the Complexities of Action Recognition in Videos: Insights from Dr. Davide Moltisanti’s talk 

This blog post is written by AI CDT student, Isabella Degen

In the rapidly evolving field of computer vision and artificial intelligence, action recognition in videos remains a challenging yet crucial task. A recent talk at BIAS ’24 by Dr. Davide Moltisanti from the University of Bath shed light on some of the often-overlooked aspects of this problem, particularly focusing on the impact of semantic and temporal ambiguity in video labelling on classification. This blog post delves into the key insights from Dr. Moltisanti’s presentation, exploring the challenges faced by current models and the innovative solutions proposed to address them. 

The Challenge of Action Recognition 

At its core, action recognition aims to identify actions occurring in video sequences. While this may seem straightforward, the reality is far more complex. Traditional approaches rely heavily on supervised learning, where models are trained on labelled datasets. However, as Dr. Moltisanti pointed out, we often take these labels for granted without considering the inherent ambiguities in the labelling process itself. The research presented by Dr. Moltisanti explored the impact such ambiguity has on the training and testing of models and suggested solutions to improve the accuracy of action classification in videos. 

 Semantic Ambiguity: When Words Fail Us 

One of the primary issues highlighted in the talk is semantic ambiguity. This occurs when multiple verbs can describe similar motions or when the same verb can represent different actions. For example, “push drawer” and “close drawer” might refer to the same action, while “cut” could describe various activities depending on the context. When annotators label videos, there is a wide variability in the labels used. 

This ambiguity poses a significant challenge for classifiers, which struggle to handle class overlap effectively. Dr. Moltisanti proposed an innovative solution: the use of pseudo-labels, generated by identifying similar actions in the feature space of a given verb; for example, “cut” might be associated with “chop” or “slice”. 

 Two approaches were tested: 

  1. Masking pseudo-labels during training to weaken the loss function
  2. Using pseudo-labels as actual labels

Interestingly, the masking approach proved more effective, with both methods outperforming existing benchmarks on the EPIC Kitchens dataset. An ablation study further revealed that instance-level pseudo-labels were more beneficial than class-level ones, highlighting the importance of fine-grained action understanding. 
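As a rough illustration of the first approach, here is a minimal pure-Python sketch of a cross-entropy loss in which the pseudo-label classes of a target verb are masked out of the normalisation, weakening the penalty for confusing near-synonyms. This is not the paper’s implementation; the class indices and logits below are invented:

```python
import math

def softmax(logits):
    """Numerically stable softmax."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def masked_cross_entropy(logits, target, pseudo_labels):
    """Cross-entropy that drops the target's pseudo-label classes from
    the normalisation, so near-synonym verbs are not penalised."""
    probs = softmax(logits)
    keep = [i for i in range(len(logits)) if i == target or i not in pseudo_labels]
    z = sum(probs[i] for i in keep)
    return -math.log(probs[target] / z)

# Toy example: 4 verb classes, class 1 ("chop") is a pseudo-label of 0 ("cut").
logits = [2.0, 1.8, 0.1, -1.0]
plain = -math.log(softmax(logits)[0])
masked = masked_cross_entropy(logits, target=0, pseudo_labels={1})
# Masking the near-synonym weakens the loss: masked < plain.
```

Removing “chop” from the competition leaves more probability mass for “cut”, so the loss shrinks: the classifier is no longer punished for hesitating between near-synonyms.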

 Temporal Ambiguity: The Elusive Boundaries of Action 

The second major issue addressed was temporal ambiguity – the difficulty in precisely defining the start and end points of an action. This ambiguity leads to inconsistencies across datasets and can significantly impact model performance. Dr. Moltisanti’s research showed that even minor variations in temporal boundaries could result in accuracy fluctuations of up to 10%. 

To address this, the team introduced the concept of “Rubicon boundaries,” drawing from psychology to define clearer action phases. By providing annotators with more precise guidelines, they achieved more consistent labelling, resulting in improved model accuracy. 

Image 1 shows the original variability in labelling (left box plot for each label) compared to the variability achieved with Rubicon Boundaries (RB). For RB annotations the mean is closer to 1 and the variance in start and end times between different annotators is smaller. The change in labelling improved the accuracy of the model from 61.2 to 65.6. 

Efficient Video Labeling: A Novel Approach 

Recognizing the tedious and expensive nature of traditional video labeling, Dr. Moltisanti proposed an innovative method requiring only a single time point per action instead of start and end times. This approach uses a distribution around the chosen point and employs curriculum learning to gradually refine the model’s understanding of action boundaries. 

 The method also incorporates a ranking system based on confidence scores to determine the best representative frames for each action. This approach proved particularly effective for shorter actions and denser datasets, offering a promising direction for more efficient video annotation. 

Image 2 shows how, from a single timestamp for an action in a video, an initial frame (dotted lines) is found and automatically updated (solid lines) to best capture the action in both location and duration. 

 The Importance of Negative Cues 

An intriguing point raised towards the end of the talk was the significance of negative cues in action recognition. While most models focus on positive cues (what to look for), Dr. Moltisanti emphasized the importance of also considering what the model should not focus on, such as ethnicity or attire, to reduce bias in recognition systems. 

Conclusion: Rethinking Action Recognition 

Dr. Moltisanti’s talk serves as a reminder of the complexities involved in action recognition and the importance of considering the impact of data labelling. By addressing semantic and temporal ambiguities in labels, we can develop more robust and accurate models for understanding actions in videos. 

As the field continues to evolve, these insights pave the way for more nuanced approaches to video understanding, potentially leading to breakthroughs in applications ranging from surveillance and security to human-computer interaction and automated video analysis. 

The research presented not only offers practical solutions to current challenges but also encourages us to think more critically about the fundamental processes underlying our AI systems. 

 

*Written with the help of Anthropic’s Claude 3.5 Sonnet 

ELISE Wrap up Event

This blog post is written by AI CDT student, Jonathan Erskine

I recently attended the ELISE Wrap up Event in Helsinki, marking the end of just one of many programs of research conducted under the ELLIS society, which “aims to strengthen Europe’s sovereignty in modern AI research by establishing a multi-centric AI research laboratory consisting of units and institutes distributed across Europe and Israel”.

This page does a good job of explaining ELISE and ELLIS if you want more information.

Here I summarise some of the talks from the two-day event (in varying detail). I also provide some useful contacts and potential sources of funding (you can skip to the bottom for these).

Robust ML Workshop

Peter Grünwald: ‘e’ is the new ‘p’

P-values are an important indicator of statistical significance when testing a hypothesis: a calculated p-value must be smaller than some predefined threshold, typically $\alpha = 0.05$. This guarantees that Type 1 errors (falsely rejecting the null hypothesis) occur with probability less than 5%.

“p-hacking” is the questionable practice of manufacturing statistical significance by, for example:

  • stopping the collection of data once you get a P<0.05
  • analyzing many outcomes, but only reporting those with P<0.05
  • using covariates
  • excluding participants
  • etc.

Sometimes this is morally ambiguous. For example, imagine a medical trial where a new drug shows promising, but not statistically significant, results. Should a significance test fail, you can simply repeat the trial, sweep the new data in with the old, and repeat until you achieve the desired p-value. But this can be prohibitively expensive, and it is hard to know whether you are p-hacking or simply haven’t tested enough people to prove your hypothesis. This approach, called “optional stopping”, can violate Type 1 error guarantees: it becomes hard to have faith in your threshold $\alpha$, because the cumulative probability that some individual trial is a false positive keeps increasing.
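The danger of optional stopping is easy to demonstrate with a short simulation: we test a fair coin (so the null hypothesis is true), peek at an approximate z-test p-value after every batch, and stop the moment it dips below $\alpha$. The batch sizes and counts here are arbitrary choices for illustration:

```python
import math
import random

def z_test_p(successes, n, p0=0.5):
    """Two-sided normal-approximation p-value for a Bernoulli mean."""
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (successes / n - p0) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def peeking_trial(rng, batch=20, max_batches=25, alpha=0.05):
    """Collect data in batches under the null, stopping as soon as p < alpha."""
    successes = n = 0
    for _ in range(max_batches):
        successes += sum(rng.random() < 0.5 for _ in range(batch))
        n += batch
        if z_test_p(successes, n) < alpha:
            return True  # "significant" -- a false positive, since H0 is true
    return False

rng = random.Random(0)
rate = sum(peeking_trial(rng) for _ in range(2000)) / 2000
# The measured Type 1 error rate lands well above the nominal 5%.
```

Every extra peek is another chance for noise to cross the threshold, which is exactly the violation of the Type 1 error guarantee described above.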

Peter described the theory of hypothesis testing based on the e-value, a notion of evidence that, unlike the p-value, allows for “effortlessly combining results from several studies in the common scenario where the decision to perform a new study may depend on previous outcomes”.

Unlike with the p-value, this proposed method is “safe under optional continuation with respect to Type 1 error”: no matter when the data collecting and combination process is stopped, the Type 1 error probability is preserved. For singleton nulls, e-values coincide with Bayes factors.

In any case, general e-values can be used to construct Anytime-Valid Confidence Intervals (AVCIs), which are useful for A/B testing as “with a bad prior, AVCIs become wide rather than wrong”.

In comparison to classical approaches, you need more data to apply e-values and AVCIs, with the benefit of performing optional stopping without introducing Type 1 errors. In the worst case you need more data, but on average you can stop sooner.

This is being adopted for online A/B testing but is more challenging for expensive domains, such as medical trials; you need to reserve more patients for your trial, but you won’t need them all – a challenging sell, but probability indicates that you should save time and effort in the majority of cases.

Other relevant literature pioneering this approach to significance testing is Waudby-Smith and Ramdas, JRSS B, 2024.

There is an R package here for anyone who wants to play with Safe Anytime-Valid Inference.

Watch the full seminar here:

https://www.youtube.com/watch?v=PFLBWTeW0II

Tamara Broderick: Can dropping a little data change your conclusions – A robustness metric


Tamara advocated the value of economics datasets as rich test beds for machine learning, highlighting that one can examine the data produced from economic trials with respect to robustness metrics and can come to vastly different conclusions than those published in the original papers.

Focusing in, she described a micro-credit experiment where economists ran random controlled trials on small communities, taking approximately 16500 data points with the assumption that their findings would generalise to larger communities. But is this true?

When can I trust decisions made from data?

In a typical setup, you (1) run an analysis on a series of data, (2) come to some conclusion on that data, and (3) ultimately apply those decisions to downstream data which you hope is not so far out-of-distribution that your conclusions no longer apply.

Why do we care about dropping data?

Useful data analysis must be sensitive to some change in data – but certain types of sensitivity are concerning to us, for example, if removing some small fraction of the data $\alpha$ were to:

  • Change the sign of an effect
  • Change the significance of an effect
  • Generate a significant result of the opposite sign

Robustness metrics aim to give higher or lower confidence on our ability to generalise. In the case described, this implies a low signal-to-noise ratio, which is where Tamara introduces her novel metric (Approximate Maximum Influence Perturbation) which should help to quantify this vulnerability to noise.

Can we drop one data point to flip the sign of our answer?

In reality, this is very expensive to test for any dataset where the sample size N is large (dropping each point in turn means creating N datasets and re-running your analysis N times). Instead, we need an approximation.

Let the Maximum Influence Perturbation be the largest possible change induced in the quantity of interest by dropping no more than 100α% of the data.

From the paper:

We will often be interested in the set that achieves the Maximum Influence Perturbation, so we call it the Most Influential Set.

And we will be interested in the minimum data proportion α ∈ [0,1] required to achieve a change of some size ∆ in the quantity of interest, so we call that α the Perturbation-Inducing Proportion. We report NA if no such α exists.

In general, to compute the Maximum Influence Perturbation for some α, we would need to enumerate every data subset that drops no more than 100α% of the original data. And, for each such subset, we would need to re-run our entire data analysis. If m is the greatest integer smaller than 100α, then the number of such subsets is larger than $\binom{N}{m}$. For N = 400 and m = 4, $\binom{N}{m} = 1.05\times10^9$. So computing the Maximum Influence Perturbation in even this simple case requires re-running our data analysis over 1 billion times. If each data analysis took 1 second, computing the Maximum Influence Perturbation would take over 33 years to compute. Indeed, the Maximum Influence Perturbation, Most Influential Set, and Perturbation-Inducing Proportion may all be computationally prohibitive even for relatively small analyses.

Further definitions are described better in the paper, but suffice to say the approximation succeeds in identifying where analyses can be significantly affected by a minimal proportion of the data. For example, in the Oregon Medicaid study (Finkelstein et al., 2012), they identify a subset containing less than 1% of the original data that controls the sign of the effects of Medicaid on certain health outcomes. Dropping 10 data points takes the result from significant to non-significant.
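The flavour of the method can be shown with a toy example where the quantity of interest is just a sample mean: for the mean, the most influential points can be read off directly, whereas real AMIP uses an influence-function approximation to handle general estimators. The dataset below is invented:

```python
def most_influential_set(data, k):
    """Stand-in for the Most Influential Set: for a sample mean, dropping
    the k largest points moves the mean down fastest (exact in this case)."""
    return sorted(range(len(data)), key=lambda i: data[i], reverse=True)[:k]

def mean_without(data, drop):
    """Re-run the 'analysis' (a mean) with the chosen points removed."""
    dropped = set(drop)
    kept = [x for i, x in enumerate(data) if i not in dropped]
    return sum(kept) / len(kept)

# 98 mildly negative points and 2 large positive ones: the full-sample
# mean is positive, but its sign is controlled by 2% of the data.
data = [-0.05] * 98 + [5.0, 4.0]
full_mean = sum(data) / len(data)   # positive
drop = most_influential_set(data, k=2)
flipped = mean_without(data, drop)  # negative: the sign flipped
```

Instead of enumerating all $\binom{N}{m}$ subsets, the approximation ranks points by their estimated influence and checks only the top-ranked set, which is why it scales to real analyses.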

Code for the paper is available at:

https://github.com/rgiordan/AMIPPaper/blob/main/README.md

An R version of the AMIP metric is available:

https://github.com/maswiebe/metrics.git

Watch a version of this talk here:

https://www.youtube.com/watch?v=7eUrrQRpz2w

Cedric Archambeau | Beyond SHAP : Explaining probabilistic models with distributional values

Abstract from the paper:

A large branch of explainable machine learning is grounded in cooperative game theory. However, research indicates that game-theoretic explanations may mislead or be hard to interpret. We argue that often there is a critical mismatch between what one wishes to explain (e.g. the output of a classifier) and what current methods such as SHAP explain (e.g. the scalar probability of a class). This paper addresses such gap for probabilistic models by generalising cooperative games and value operators. We introduce the distributional values, random variables that track changes in the model output (e.g. flipping of the predicted class) and derive their analytic expressions for games with Gaussian, Bernoulli and categorical payoffs. We further establish several characterising properties, and show that our framework provides fine-grained and insightful explanations with case studies on vision and language models.

Cedric described how SHAP values can be reformulated as random variables on a simplex, shifting from the weight of individual players to a distribution over transition probabilities. Following this insight, they generate explanations on transition probabilities instead of individual classes, demonstrating the approach on several interesting case studies. This work is in its infancy and has plenty of opportunity for further investigation.
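For readers unfamiliar with the game-theoretic baseline this work generalises, here is a self-contained sketch of exact scalar Shapley values for a toy two-feature game; distributional values replace the scalar payoff with a random variable, which this sketch does not attempt. The game itself is invented:

```python
from itertools import permutations

def shapley_values(players, value):
    """Exact Shapley values by averaging each player's marginal
    contribution over all orderings (fine for small games)."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return {p: phi[p] / len(orders) for p in players}

# Toy game: features A and B together fully explain the prediction;
# either one alone explains half of it.
def v(coalition):
    if coalition == frozenset({"A", "B"}):
        return 1.0
    if coalition:
        return 0.5
    return 0.0

phi = shapley_values(["A", "B"], v)
# Symmetric game, so the payout splits evenly: phi["A"] == phi["B"] == 0.5
```

The efficiency property (attributions sum to the full payoff) is what makes the scalar formulation attractive, and it is exactly the kind of property the paper re-establishes for its distributional generalisation.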

Semantic, Symbolic and Interpretable Machine Learning Workshop

Nada Lavrač: Learning representations for relational learning and literature-based discovery

This was a survey of types of representation learning, focusing on Nada’s area of expertise in propositionalisation and relational data, Bisociative Literature-Based Discovery, and interesting avenues of research in this direction.

Representation Learning

Deep learning, while powerful (accurate), raises concerns over interpretability. Nada takes a step back to survey different forms of representation learning.

Sparse, Symbolic, Propositionalisation:

  • These methods tend to be less accurate but are more interpretable.
  • Examples include propositionalization techniques that transform relational data into a propositional (flat) format.

Dense, Embeddings:

  • These methods involve creating dense vector representations, such as word embeddings, which are highly accurate but less interpretable.

Recent work focuses on unifying approaches that combine the strengths of both.

Hybrid Methods:

  • Incorporate Sparse and Deep methods
  • DeepProp, PropDRM, propStar(?) – Methods discussed in their paper.

Representation learning for relational data can be achieved by:

  • Propositionalisation – transforming a relational database into a single-table representation. example: Wordification
  • Inductive logic programming
  • Semantic relational learning
  • Relational subgroup discovery (written by Nada and our own P. Flach)
  • Semantic subgroup discovery system, “Hedwig” that takes as input the training examples encoded in RDF, and constructs relational rules by effective top-down search of ontologies, also encoded as RDF triples.
  • Graph-based machine learning
    • data and ontologies are mapped to nodes and edges
    • In this example, gene ontologies are used as background knowledge for improving quality assurance of literature-based Gene Ontology Annotation
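A minimal sketch of the idea behind Wordification: each relational instance is flattened into a “document” of table_attribute_value words that any bag-of-words propositional learner can consume. The full method adds refinements such as feature weighting; the tables and values below are invented:

```python
def wordify(instance_rows):
    """Flatten one relational instance (rows from linked tables) into a
    single document of table_attribute_value words."""
    words = []
    for table, row in instance_rows:
        for attribute, value in row.items():
            words.append(f"{table}_{attribute}_{value}")
    return words

# A customer instance spread over two linked tables collapses to one flat doc.
doc = wordify([
    ("customer", {"age_band": "30s", "region": "west"}),
    ("purchase", {"category": "books"}),
    ("purchase", {"category": "games"}),
])
# -> ['customer_age_band_30s', 'customer_region_west',
#     'purchase_category_books', 'purchase_category_games']
```

The one-to-many join (one customer, many purchases) simply yields more words, which is how the relational structure survives the flattening.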

These slides, although a little out of date, talk about a lot of what I have noted here, plus a few other interesting methodologies.

The GitHub Repo for their book contains lots of jupyter notebook examples.

https://github.com/vpodpecan/representation_learning.git

Marco Gori: Unified approach to learning over time and logic reasoning

I unfortunately found this very difficult to follow, largely due to my lack of subject knowledge. I do think what Marco is proposing requires an open mind as he re-imagines learning systems which do not need to store data to learn, and presents time as an essential component of learning for truly intelligent “Collectionless AI”.

I won’t try to rewrite his talk here, but he has a full classroom series available on Google, which he might give you access to if you email him.

Conclusions:

  • Emphasising environmental interactions – collectionless AI which doesn’t record data
  • Time is the protagonist: higher degree of autonomy, focus of attention and consciousness
  • Learning theory inspired by theoretical physics & optimal control: Hamiltonian learning
  • Neuro-symbolic learning and reasoning over time: semantic latent fields and explicit semantics
  • Developmental stages and gradual knowledge acquisition

Contacts & Funding Sources

For Robust ML:

e-values, AVCIs:

Aaditya Ramdas at CMU

Peter Grünwald Hiring

For anyone who wants to do a Robust ML PhD, apply to work with Ayush Bharti : https://aalto.wd3.myworkdayjobs.com/aalto/job/Otaniemi-Espoo-Finland/Doctoral-Researcher-in-Statistical-Machine-Learning_R40167

If you know anyone working in edge computing who would like 60K to develop an enterprise solution, here is a link to the funding call: https://daiedge-1oc.fundingbox.com/ The open call starts on 29 August 2024.

If you’d like to receive monthly updates with new funding opportunities from Fundingbox, you can subscribe to their newsletter: https://share-eu1.hsforms.com/1RXq3TNh2Qce_utwh0gnT0wfegdm

Yoshua Bengio said he had fellowship funding but didn’t give out specific details, or I forgot to write them down… perhaps you can send him an email.

Spring Research Conference Day 1 – Professor Seth Bullock “AI for Collective Intelligence (AI4CI) Research Hub”

This blog post is written by AI CDT student, Fahd Abdelazim

Recent Artificial Intelligence (AI) advances have shown that the applications of AI extend far beyond increasing efficiency or convenience. It is now possible to use AI to tackle some of humanity’s most pressing challenges, from minimizing pandemics to managing extreme weather events and guiding sustainable urban development. However, addressing these issues requires specialized systems and skilled researchers to lead these innovations.

Recognizing the importance of tackling these challenges, the University of Bristol established the AI for Collective Intelligence (AI4CI) research hub, which will serve as a cornerstone for interdisciplinary collaboration, bringing together expert partners from across academia, government, charities and industry to harness the power of AI to address the complex challenges which lie at the intersection of humans and AI.

Examples include personalizing treatment for diabetes patients and using data to enhance NHS policies for patients. The hub will also work on enhancing pandemic prediction and response by analysing previous pandemics and exploring how AI can help policy makers and healthcare professionals make swift and informed decisions in the future.

Climate change is another pressing issue, increasing the frequency and intensity of extreme weather events. AI can play a pivotal role in disaster management and mitigation by analysing real-time meteorological data to predict extreme weather events and provide early warnings. In the area of urban development, AI can help create smarter and more resilient cities by analysing population density, transportation routes and energy consumption, allowing for optimized infrastructure and improved public services.

It is clear that AI will play a pivotal role in the process of building a better future and it is necessary to fully capitalize on the potential of this technology. Through initiatives like the AI4CI research hub we can harness the power of AI to address the future challenges that humanity will face and create a better and sustainable world for future generations.

Spring Research Conference Day 1 – Isabel Potter “Artists are not Technologists – AI for Scenography “

This blog post is written by AI CDT student, Lucy Farnik

Isabel Potter gave a talk at this year’s Spring Research Conference about their work on applying AI to scenography, which they are exploring in their PhD. They chose this research area partially due to their extensive experience in the creative arts, having been involved with theatre since age 14. They have also founded their own company in this space and are taking on various freelance projects in theatre alongside their PhD.

Isabel’s talk was built on one central theme — artists are not technicians. At the moment, generative AI is getting closer to being able to automate parts of scenography, from creating background music to staging. However, many of these tools are made for people with a STEM background and use a terminology that matches this. For example, tools which can be used for immersive technology in the arts include Unreal Engine which uses many computer vision and mathematics terms. One may contrast this with tools like Adobe Photoshop which uses terms such as “paintbrush tool”, which comes from the terminology artists use on a daily basis.

Isabel is trying to reduce the barrier to entry for artists. They are specifically focusing on lighting design, as this is the most under-explored area of immersive technology for scenography and is also the area that they have the most experience working in. At the moment, prompting large language models to create diagrams such as lighting plots leads to results which are not yet usable, but the step of translating lighting ideas into programs which can be loaded into a lighting desk is already somewhat doable by existing foundation models. They are currently exploring this as a starting point while optimizing for ease of use by a non-technical audience.

BMVA Symposium 2024

This blog post is written by AI CDT student, Phillip Sloan

I had the opportunity to go to the British Machine Vision Association 2024 Symposium, which took place at the British Computer Society in London on the 17th of January, 2024. The symposium was chaired by Dr. Michael Wray from the University of Bristol, Dr. Davide Moltisanti from the University of Bath, and Dr. Tengda Han from the University of Oxford.

The day kicked off with three invited speakers, the first being Professor Hilde Kuehne from the University of Bonn and the MIT-IBM Watson AI Lab. Her presentation concerned vision-language understanding for video. She started with an introduction to the field, covering how it began and how it has adapted over time, before moving on to the current work that she and her students have been doing, including the paper “MAtch, eXpand and Improve: Unsupervised Finetuning for Zero-Shot Action Recognition with Language Knowledge” by Wei Lin.

Her final remarks outlined potential issues with evaluation in the field. When the field was focused on classification, simple labels could easily be judged right or wrong; now that it has moved to vision-language retrieval, the ground-truth caption might not actually be the best or most relevant caption in the dataset, a hurdle that must be overcome.

The second invited speaker, Professor Frank Keller from the University of Edinburgh, had a very interesting talk on the topic of visual story generation, a domain where a coherent narrative is constructed to describe a sequence of images often related to the characters within the images. He broke his talk down into three sections, first introducing the field more concretely before going on to explain two different areas: Characters in visual stories and planning in visual stories.

He emphasised that the characters within a story are important, and so character detection and grounding are important in order to generate a fluent story. To help improve this aspect, Prof. Keller and his students introduced a dataset called VIST-Character that contains character groundings and visual and textual character co-reference chains. To help with planning the stories, Prof. Keller explained that their current methods utilise a blueprint, which focuses on localising characters in the text and images before relating them together. These blueprints are used as a guide to generate the story.

He explained that the domain is more difficult than image captioning: there are characters, and a fluent sequence of text is required, which renders current NLP metrics such as BLEU poor measures for this task, since the goal is to generate interesting, coherent and grounded stories rather than exact matches to the ground truth. His research used human evaluators, an interesting way to add humans to the loop.

Following Prof. Keller’s talk we had a break for a poster session, before coming back for talks from a select few people who had brought posters to the symposium, including talks on the explainability of autonomous driving and on evaluating the reliability of LLMs in the face of adversarial attacks.

After lunch we had talks from the remaining two invited speakers. Professor Andrew Zisserman from the University of Oxford presented research on training visual language models to generate audio descriptions, helping people who are blind or partially sighted to enjoy movies.

The talk began with a brief introduction to the field, then explained that the currently available datasets were not sufficient, so a new dataset was created from AudioVault by processing its audio to extract audio descriptions and subtitles.

The talk walked us through a basic model architecture, and its limitations were pointed out: character names were often not used (often just “he” or “it”) and descriptions were often incomplete. Prof. Zisserman explained that, to combat these limitations, they pursued two research directions: improving “the who”, by providing supplementary information about the characters within the film, and “the what”, by improving the model’s ability to provide better context using pre-trained video-language models.

Finally, he discussed how evaluation measures such as CIDEr are not fit for the purpose of audio description generation, explaining that large language models are starting to be used as evaluation tools in the domain.

The second talk of the afternoon covered vision-language learning with limited and no supervision, presented by Dr. Yuki Asano from the University of Amsterdam, who asked the question: “Why care about Self-supervised Learning ideas in the age of CLIP et al?”

He presented three works undertaken by him and his team. The first was “Similarities of Unimodal representations across language and vision”, demonstrating a model that uncoupled image-language pairs and trained on them in an unsupervised fashion to reach 75% of the performance of the CLIP model. “Localisation in visual language models” was the second topic reviewed, a task that vision-language models are not traditionally good at; his team’s solution was to unlock localisation abilities in frozen VLMs by adding a low-weight module called the positional insert module (PIN).

The final part of the talk covered image encoder pretraining using a single video with many details. Their model, called DoRA (discover and track), has the high-level idea of tracking multiple objects across time and enforcing invariance of features across time. Evaluated against DINO, it performed better on various datasets.

After a coffee break, we had some shorter talks from people presenting posters at the event, including a radiology report generation presentation which was particularly relevant to me. CXR-IRGen was proposed, a diffusion model used to generate extra image-report pairs, which could help alleviate the lack of data within the field. Kevin Flanagan, a fellow CDT member, also presented his research into learning temporal sentence grounding from narrated egovideos, showcasing his method, CliMer, which merges clips from rough narration timestamps and trains in a contrastive manner.

Throughout the day we were encouraged to use Padlet to put our thoughts and questions down. After the talks had concluded there was a final informal Q&A session into the future of the vision-language domain which used our Padlet responses as talking points. We discussed points including the need for better evaluation metrics (which was a big theme from a lot of talks), the role of academia in the age of large language models and utilising NLP to make vision models explainable.

A very interesting and thought provoking day! There were several people working within medical image analysis so it was great to network and discuss ideas. Thank you to the speakers and people who presented for their contributions and to the chairs and organisers of the event for making it possible!

 

Collaborating with a Designer to Craft Visual Resources for my PhD Project

This blog post is written by AI CDT student, Vanessa Hanschke

With regards to AI, this mythologizing and enchantment is apparent when we explore the disjoint between the reality of the technology and its representation.

…says Beth Singler in her analysis of what she calls the AI Creation Meme – the ubiquitous image of a human hand and a robot hand reaching out to each other with index fingers extended, as in Michelangelo’s famous painting. Several researchers have commented on the bulk of images used to depict artificial intelligence, ranging from inappropriate (e.g. an anthropomorphized robot for natural language processing) to harmful (e.g. the unnecessarily sexist addition of breasts to illustrations of AI in the service industry).

Visual representations of AI matter. In a world where a lot of hype is being generated around AI in industry and policy, I think it is especially important for AI researchers to lead the way in creating better images that are grounded in more accurate conceptualizations of AI. This was one of the many reasons I decided to work with a designer to make visual materials that supported my research in responsible AI.

The Project

A little sidenote description of my research project: the Data Ethics Emergency Drill (DEED) is a method that we created to help industry practitioners of AI, Machine Learning and Data Science reflect on the societal impact and values embedded in their systems. The idea is similar to ethical roleplay. We created fictional scenarios of a data ethics dilemma, which members of a data and AI team discussed in a fake meeting. This fictional scenario is crafted together with some members of the team to address their particular context and AI application. It is presented as an urgent problem that needs fixing within this fake meeting. After trialling this process with two separate industry data science teams, we made a toolbox for other industry teams to pick up and conduct their own drills. This toolbox consists of a PowerPoint slide deck and a Miro board template. We wanted to update these toolbox resources with a professional designer to make them visually engaging and accessible. We collaborated with Derek Edwards, a designer local to Bristol, to create the designs.

The Design Process

Designing is an iterative process, and it took some back and forth for the design to come together. Our initial ideas were quite vague: we wanted it to be playful, as the DEED is about stepping outside of the day-to-day mindset focused on technical delivery. We wanted it to be about developers’ responsibility for how we construct our technology today, as opposed to a long-term perspective of granting AI human rights. Although “Emergency Drill” is in the title of the toolbox, it is not about hyping AI, but about establishing a safe space to reflect on the values embedded in the application.

Emergency Exit Signs served as reference for our designs. Photo by Dids . from Pexels: https://www.pexels.com/photo/emergency-exit-signage-1871343/

The original metaphor that we built this method on was a fire drill. A fire drill goes beyond just looking at the fire exits on a map; it is about experiencing evacuating a building with many people at once, and about practising collaboration between fire wardens, other security staff and everyone else. Similarly, the DEED goes beyond looking at a list of AI ethics principles to the concrete experience of discussing ethics and values, and understanding how responsible AI practices are distributed within a team.

The general look we were going for was inspired by video game arcades. Photo by Stanislav Kondratiev from Pexels : https://www.pexels.com/photo/video-arcade-games-5883539/

Because the outputs were visual, I found it helpful to use images to communicate my personal vision. I set up a folder where I would share material with Derek. Seeing some of Derek’s initial design drafts helped me clarify some of these ideas I had.

The Result

This is the final design of the title slide from the research project *drumroll*:

The final design of the logo on the slide deck for crafting scenarios.

It is inspired by the assembly point sign, which is a great metaphor for what the DEED takes from emergency drills: creating an opportunity for an industry team to come together to understand better what is necessary for their responsible AI practice. The colours were inspired by nineties arcade video games to add a playful element of technology pop culture.

Working with a designer such as Derek was a very gratifying process, and I enjoyed reflecting on which concepts of the DEED toolbox I wanted to convey visually. The end result is a much more engaging workshop, a more user-friendly slide deck, and a more cohesive visual language for the project overall. I believe it will certainly help in getting more participant teams to engage with my research project.

Recommendations for PhD-Design Collaborations

 I would recommend design collaborations to any PhD student carrying out interactive research with visual artefacts. Here are some considerations that might guide your planning:

  • What outputs do you need, what formats and how many? Some formats may be more suitable than others, depending on whether you need parts of your design to be editable as your research evolves. An elaborate design may be more striking, but will not always be modifiable (e.g. a hand drawn script logo).
  • What is your timeline? The iterative process may take a few weeks, but having that back-and-forth is essential to creating a good design end product.
  • What do you want your design to look like? Collect inspiration from Pinterest and websites that you like. Often online magazines work with an array of interesting graphic designers; I found a lot of great AI-inspired art in articles in tech magazines.

Thanks

 I would like to thank Derek Edwards for the great collaboration and the Interactive AI CDT for funding this part of my research. You can find Derek’s portfolio here. If you are a data scientist, AI or ML engineer thinking about carrying out a Data Ethics Emergency Drill with your team, you can get in touch with me at vanessa.hanschke@bristol.ac.

Conference on Information and Knowledge Management (CIKM) – Matt Clifford

This blog post is written by AI CDT student, Matt Clifford

At the end of October ’23, I attended CIKM in Birmingham to present our conference paper. The conference was spread across 3 days, with multiple parallel tracks each day focusing on specific topic areas. CIKM is a medium-sized conference, which struck a good balance: you can meet lots of researchers, without the conference being so overwhelmingly big that you feel disconnected from it. CIKM spans many topics surrounding data science/mining, AI, ML, graph learning, recommendation systems and ranking systems.

This was my first time visiting Birmingham, dubbed by some the “Venice of the North”. Despite definitely not being in the north and resembling very little of Venice (according to some Venetians at the conference), I was overall very impressed with Birmingham. It has a much friendlier hustle and bustle compared to bigger cities in the UK, and the mixture of grand Victorian buildings interspersed with contemporary and art-deco architecture makes for an interesting and welcoming cityscape.

Our Paper

Our work focuses on explainable AI, which helps people get an idea of the inner workings of a highly complicated AI system. In our paper we investigate one of the most popular explainable AI methods, LIME. We identify situations where AI explanation systems like LIME become unfaithful, with the potential to misinform users. In addition, we illustrate a simple method to make an AI explanation system like LIME more faithful.

This is important because many users take the explanations provided from off-the-shelf methods, such as LIME, as being reliable. We discover that the faithfulness of AI explanation systems can vary drastically depending on where and what a user chooses to explain. From this, we urge users to understand whether an AI explanation system is likely to be faithful or not. We also empower users to construct more faithful AI explanation systems with our proposed change to the LIME algorithm.

 You can read the details of our work in our paper https://dl.acm.org/doi/10.1145/3583780.3615284
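To picture why faithfulness can vary, it helps to see the local-surrogate idea that LIME is built on. The sketch below is a hypothetical, simplified illustration of that idea (it is not the paper’s method, and the toy data, kernel width and sampling scale are all assumptions): perturb the input around one instance, weight the perturbations by proximity, and fit a simple linear model to mimic the black box locally.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# A black-box model trained on toy data (a stand-in for any classifier).
# The label depends mostly on feature 0, a little on feature 1.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def lime_style_explanation(instance, n_samples=1000, kernel_width=1.0):
    # 1. Sample perturbations around the instance of interest.
    perturbed = instance + rng.normal(scale=0.5, size=(n_samples, len(instance)))
    # 2. Query the black box for its predictions on the perturbations.
    preds = black_box.predict_proba(perturbed)[:, 1]
    # 3. Weight perturbations by their proximity to the instance.
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Fit a weighted linear surrogate; its coefficients are the "explanation".
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_

coefs = lime_style_explanation(np.zeros(4))
print(coefs)  # feature 0 should carry the largest weight
```

Because the surrogate only mimics the black box in the sampled neighbourhood, its faithfulness depends on where the instance sits and how the neighbourhood is drawn, which is the kind of variation the paper investigates.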

Interesting Papers

At the conference there was lots of interesting work being presented. Below I’ll point towards some of the papers which stood out most to me from a variety of topic areas.

Fairness

  • “Fairness through Aleatoric Uncertainty” – focuses on improving model fairness in areas of aleatoric uncertainty, where it is not possible to increase model utility, so there is less of a fairness/utility trade-off – https://dl.acm.org/doi/10.1145/3583780.3614875
  • “Predictive Uncertainty-based Bias Mitigation in Ranking” – improve bias in ranking priority by reshuffling results based on their uncertainty of rank position – https://dl.acm.org/doi/abs/10.1145/3583780.3615011

Explainability

Counterfactuals

Healthcare

Data Validity

Clustering package in Python

A group at the conference maintains a Python package which neatly collects many state-of-the-art clustering algorithms. Here is the link to the GitHub repository: https://github.com/collinleiber/ClustPy. Hopefully some people find it useful!

 

BIAS ’23 – Day 2: Huw Day Talk – Data Unethics Club

This blog post is written by CDT AI student Roussel Desmond Nzoyem

Let’s begin with a thought experiment. Imagine you are having a wonderful conversation with a long-time colleague. Towards the end of your conversation, they suggest an idea which you don’t have further time to explore. So you do what any of us would: you say, “email me the details”. When you get home, you receive an email from your colleague. But something is off. The writing in the email sounds different, far from how your friend normally expresses themselves. Who, or rather what, wrote the email?

When the limit between humans and artificial intelligence text generation becomes so blurred, don’t you wish you could tell whether a written text came from an artificial intelligence or from actual humans? What are the ethical concerns surrounding that?

Introduced by OpenAI in late 2022, ChatGPT continues its seemingly inevitable course in restructuring our societies. The second day of BIAS’23 was devoted to this impressive chatbot, from its fundamental principles to its applications and its implications. This was the platform for Mr Huw Day and his interactive talk titled Data Unethics Club.

Mr Day (soon to be a Dr, employed by the JGI institute) is a PhD candidate at the University of Bristol. Although Mr Day is a mathematics PhD student, that is not what comes across on first impression. The first thing one notices is his passion for ethics. He loves that stuff, as evidenced by the various blog posts he writes for the Data Ethics Club. By the end of this post, I hope you will want to join the Data Ethics Club as well.

Mr Day introduced his audience to many activities, beginning with a little guessing game for warmup. The goal was telling whether short lines were generated by ChatGPT or a human being. For instance:

How would you like a whirlwind of romance that will inevitably end in heartbreak?

If you guessed human, you were right! That archetypal cheesy line was in fact written by one of Mr Day’s friends. Perhaps surprisingly, it worked! You might be forgiven for guessing ChatGPT, especially since the other lines from the bot sounded incredibly human.

The first big game introduced by Mr Day required a bit more collaboration than the warmup. The goal was to jailbreak GPT into doing tasks that its maker, OpenAI, wouldn’t normally allow. The attendees in the audience had to trick ChatGPT into providing a detailed recipe for Molotov cocktails. As Mr Day ran around the room with a microphone to quiz his entertained audience, it became clear that the prevalent strategy was to disguise the shady query with a story. One audience member imagined a fantasy movie script in which a sorcerer (Glankor) taught his apprentice (Boggins) the recipe for the deadliest of weapons (see Figure 2).

Figure 1 – Mr Day introducing the jailbreaking challenge.

Figure 2 – ChatGPT giving away the recipe for a Molotov cocktail (courtesy of Mr Kipp McAdam Freud)

For the second activity, Mr Day presented the audience with the first part of a paper’s abstract. Like the warmup activity, the goal was to guess which of the two proposed texts for the second halves came from ChatGPT, and which one came from a human (presumably the same human that wrote the first half of the abstract). For instance, the first part of an abstract reads below (Shannon et al. 2023):

Reservoir computing (RC) promises to become as equally performing, more sample efficient, and easier to train than recurrent neural networks with tunable weights [1]. However, it is not understood what exactly makes a good reservoir. In March 2023, the largest connectome known to man has been studied and made openly available as an adjacency matrix [2].

Figure 3 – Identifying the second half of an abstract written by ChatGPT

As can be seen in Figure 3, Mr Day disclosed which proposal for the second part of the abstract ChatGPT was responsible for. For this particular example, Mr Day divulged something interesting he used to tell them apart: the acronym reservoir computing (RC) is redefined in the second half, despite already being defined in the first. No human researcher would normally do that!
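The redefined-acronym tell can even be checked mechanically. The toy heuristic below (an illustrative sketch, not a tool from the talk; the regex and sample text are my own) flags any acronym that is defined more than once in a passage:

```python
import re

def redefined_acronyms(text):
    """Return acronyms that are defined (phrase followed by (ABC)) more than once."""
    # Find "some phrase (ABC)" style definitions: a phrase followed by
    # an all-caps abbreviation of two or more letters in parentheses.
    definitions = re.findall(r"\b([A-Za-z][\w ]*?)\s*\(([A-Z]{2,})\)", text)
    seen, repeated = set(), set()
    for _, acronym in definitions:
        if acronym in seen:
            repeated.add(acronym)
        seen.add(acronym)
    return repeated

sample = ("Reservoir computing (RC) promises much. "
          "We study reservoir computing (RC) on a new connectome.")
print(redefined_acronyms(sample))  # {'RC'}
```

Of course, this only catches one tell-tale sign; the broader judgement about whether a text feels machine-written remains a human one.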

A few other examples of abstracts were looked at, including Mr Day’s own work in progress towards his thesis, and the Data Ethics Club’s whitepaper, each time quizzing the audience to understand how they were able to spot ChatGPT. The answers ranged from very subjective like “the writing not feeling like a human’s” to quite objective like “the writing being too high-level, not expert enough”.

This led into the final activity of the talk, based on the game Spot the Liar! Our very own Mr Riku Green volunteered to share with the audience how he used ChatGPT in his daily life. The audience had to guess, based on questions asked of Mr Green, whether the outlandish task he described actually took place. Now, if you’ve spent a day with Mr Green, you’d know how obsessed he is with ChatGPT. So when Mr Green recounted that he’d used ChatGPT to provide tech support to his father, the room correctly guessed that he was telling the truth. That said, nobody could have guessed that Mr Green had used ChatGPT to write a breakup text.

Besides the deeper understanding of ChatGPT that the audience gained from this talk, one of the major takeaways from the activities was a set of tips and tell-tale signs of a ChatGPT production, and of a “liar” who uses it: repeated acronyms, too many adjectives, combining concepts that normally aren’t compatible, over-flattering language, and claiming novelty that the author of the underlying work wouldn’t even think of. These are all flags that the text one is engaging with might have been generated by an AI.

All these activities, along with the moral implications involved in each, served as the stepping stone for Mr Day to present the Data Ethics Club. This is a welcoming community of academics, enthusiasts, industry experts and more, who voice their ethical concerns and question the moral implications of AI. They boast a comprehensive list of online resources, along with blog posts on their website to get people started. They are based at the University of Bristol, but open to all, as stated on their website: https://dataethicsclub.com/. Although the games outlined above are not part of the activities of their bi-weekly hour-long Zoom meetings, they keep each of their gatherings fresh and engaging. In fact, Mr Day’s organising team has been so successful that other companies (under confidential arrangements) are trying to replicate the model in-house. If you want to establish your own Data Ethics Club, look no further than the paper titled Data Ethics Club: Creating a collaborative space to discuss data ethics.

References:

Shannon, A., Green, R., Roberts, K. (2023). Insects In The Machine – Can tiny brains achieve big results in reservoir computing? Personal notes. Retrieved 8 September 2023.

BIAS ’23 – Day 1: Dr Kacper Sokol talk – The Difference Between Interactive AI and Interactive AI

This blog is written by CDT AI PhD student Beth Pearson

Day 1 of the Bristol Interactive AI Summer School (BIAS) ended with a thought-provoking talk by Dr. Kacper Sokol on The Difference Between Interactive AI and Interactive AI. Kacper began by explaining that the social sciences have decades’ worth of research on how humans reason and explain. Now, with an increasing demand for AI and ML systems to become more human-centred, with a focus on explainability, it makes sense to use insights from the social sciences to guide the development of these models.

Humans often explain things in a contrastive and social manner, which has led to counterfactual explanations being introduced by AI and ML researchers. Counterfactuals are statements relating to what has not happened or is not the case, for example, “If I hadn’t taken a sip of this hot coffee, I wouldn’t have burned my tongue.” Counterfactual explanations have the advantage of being suitable for both technical and lay audiences; however, they only provide information about one choice that the model makes, so they can bias the recipient.

Kacper then described his research focus on pediatric sepsis. Sepsis is a life-threatening condition that develops from an infection and is the third leading cause of death worldwide. Pediatric sepsis specifically refers to cases occurring in children. Sepsis is a particularly elusive disease because it can manifest differently in different people, and patients respond differently to treatments, making it challenging to identify the best treatment strategy for a specific patient. Kacper hopes that AI will be able to help solve this problem.

Importantly, the AI being applied to the pediatric sepsis problem is interactive and aims to support and work alongside humans rather than replace them. It is crucial that the AI aligns with the current clinical workflow so that it can be easily adopted into hospitals and GP practices. Kacper highlights that this is particularly important for pediatricians as they have been highly skeptical of AI in the past. However, now that AI has proven successful in adult branches of medicine, they are starting to warm to the idea.

Pediatric sepsis comes with many challenges. Pediatric sepsis has less data available than adult sepsis, and there is rapid deterioration, meaning that early diagnosis is vital. Unfortunately, there are many diseases in children that mimic the symptoms of sepsis, making it not always easy to diagnose. One of the main treatments for sepsis is antibiotics; however, since children are a vulnerable population, we don’t want to administer antibiotics unnecessarily. Currently, it is estimated that 98% of children receive antibiotics unnecessarily, which is contributing to antimicrobial resistance and can cause drug toxicity.

AI has the potential to help with these challenges; however, the goal is to augment, not disrupt, the current workflow. Humans can have great intuition and can observe cues that lead to excellent decision-making, which is particularly valuable in medicine. An experiment was carried out on nurses in neonatal care, which showed that nurses were able to correctly predict which infants were developing life-threatening infections without having any knowledge of the blood test results. Despite being able to identify the disease, the nurses were unable to explain their judgment process. The goal is to add automation from AI but still retain certain key aspects of human decision-making.

How much and where the automation should take place is not a simple question, however. You could replace biased humans with algorithms, but algorithms can also be biased, so this wouldn’t necessarily improve anything. Another option would be to have algorithms propose decisions and have humans check them; however, this still requires humans to carry out mundane tasks. Would it really be better than no automation at all? Kacper then asks: if you can prove an AI model is capable of predicting better than a human, and a human decides to use their own judgment to override the model, could it be considered malpractice?

Another proposed solution for implementing interactive AI is to have humans make the decision, with the AI model presenting arguments for and against that decision to help the human decide whether to change their mind or not.

The talk ends by discussing how interactive AI may be deployed in real-life scenarios. Since the perfect integration of AI and humans doesn’t quite exist yet, Kacper suggests that clinical trials might be a good idea, where suggestions made by AI models are marked as ‘for research only’ to keep them separated from other clinical workflows.

BIAS ’23 – Day 3: Dr Daniel Schien talk – Sustainability of AI within global carbon emissions

This blog post is written by AI CDT student Phillip Sloan

After a great presentation by Dr Dandan Zhang, Dr Daniel Schien presented a keynote on the carbon footprint of AI within the global carbon emissions of ICT. The presentation offered a reflection on AI’s role in climate change.

The keynote started by stating that the effects of climate change are becoming more noticeable. It’s understandable that we might become numb to the constant barrage of climate change reports in the news, but the threat is still present, and it is one of the biggest challenges we face today. As engineers, we have a duty to reduce our impact where possible. The Intergovernmental Panel on Climate Change (IPCC) models the effects of global climate change, demonstrating many potential futures depending on how well we limit our carbon emissions. It has been agreed that we can no longer stop climate change, and the focus has shifted to limiting its effects, with the aim of keeping the global temperature increase to 2 degrees. The IPCC has modelled the impact until 2100, across various regions and a range of impact areas.

Currently, global emissions are approximately 50 gigatonnes of carbon dioxide equivalent (GtCO2e), which needs to be reduced significantly. This is the total across all sectors, including energy production, agriculture and general industry. Many governments have legislated to curb carbon emissions, introducing CO2 emission standards for cars and vans, renewable energy directives, and land use and forestry regulations. The main goal is a 50% reduction in carbon emissions by 2030.

ICT’s share of global greenhouse gas (GHG) emissions is 2.3%, with data centres, where many AI workloads run, creating a large proportion of these emissions. Do we need to worry about AI’s contribution to climate change? The keynote highlighted that 20-30% of all data centre energy consumption is related to AI, and looking at just the ChatGPT model, its energy consumption is equivalent to that of 175,000 households! These figures are expected to get worse, as the success of AI drives up demand, further increasing AI’s energy consumption. The keynote also highlighted that the impact of AI comes not just from training and inference, but also from the construction of data centres and equipment, such as graphics cards.

A conceptual model was presented of the effects of ICT on carbon emissions. The model described three effects that ICT has on carbon consumption: direct effects, enabling (indirect) effects and systemic effects. Direct effects relate to the technology being developed, including its production, use and disposal. Enabling effects relate to its application, covering induction and obsolescence effects. Systemic effects relate to the behavioural and structural changes that come from utilising these applications.

So, what can be done to reduce the environmental impact of AI? In the development of AI systems, efficiency improvements, such as more energy-efficient models and hardware, reduce energy consumption and improve the carbon footprint. The energy source also matters: Dr Schien noted that the UK has acted on this, implementing regulation to promote wind and solar energy in the hope of decarbonising the electricity grid. The average grid intensity has fallen from around 250 gCO2e/kWh to 50 gCO2e/kWh, showing the impact of the UK government’s efforts.
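Those grid-intensity figures translate directly into a back-of-envelope emissions estimate for a training job: energy drawn (power × hours × data-centre overhead) multiplied by the grid’s carbon intensity. The numbers below are purely illustrative assumptions, not measurements from the talk:

```python
def training_emissions_kg(power_kw, hours, pue, grid_gco2e_per_kwh):
    """Rough CO2e estimate for a compute job.

    power_kw: average power draw of the hardware
    pue: power usage effectiveness (data-centre overhead factor)
    grid_gco2e_per_kwh: carbon intensity of the electricity grid
    """
    energy_kwh = power_kw * hours * pue              # electricity drawn from the grid
    return energy_kwh * grid_gco2e_per_kwh / 1000.0  # grams -> kilograms of CO2e

# The same hypothetical 48-hour job on a ~250 vs ~50 gCO2e/kWh grid
# (the before/after UK figures quoted in the talk):
print(training_emissions_kg(10, 48, 1.5, 250))  # 180.0 kg CO2e
print(training_emissions_kg(10, 48, 1.5, 50))   # 36.0 kg CO2e
```

Even this crude arithmetic shows why where and when you run a workload can matter as much as how efficient the model is: the identical job emits five times less on the decarbonised grid.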

Despite its significant energy consumption, AI can be used to make systems more efficient, reducing the energy consumption of other systems. For example, AI-powered applications can tell the power systems to switch to using the batteries during times when tariffs are higher (peak load shifting), or when the grid power usage reaches a certain power grid alternating current limit (AC limit).
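The peak-load-shifting logic described above boils down to a simple rule: draw from the battery when the tariff is high (and the battery has charge), otherwise draw from the grid. A toy sketch, with illustrative thresholds of my own choosing:

```python
def choose_source(tariff_p_per_kwh, battery_charge_pct,
                  high_tariff=30, min_charge=20):
    """Pick a power source: battery during expensive peak periods, grid otherwise.

    Thresholds are hypothetical: a 'high' tariff of 30p/kWh and a
    minimum battery reserve of 20% before discharging.
    """
    if tariff_p_per_kwh >= high_tariff and battery_charge_pct > min_charge:
        return "battery"
    return "grid"

print(choose_source(35, 80))  # battery: peak tariff, battery available
print(choose_source(12, 80))  # grid: off-peak, cheaper to buy
print(choose_source(35, 10))  # grid: battery too low to discharge
```

A real controller would also consider forecast demand, the AC limit mentioned above, and when to recharge the battery cheaply, but the decision structure is the same.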

During the Q&A, an interesting question was put forward asking at what point should sustainability be thought of? When developing a model, or further down the pipeline?

Dr Schien answered that you should always consider which model to use. Can you avoid a deep learning model and use something simpler, like linear regression or a random forest? You can also avoid waste in your models; reducing the number of layers or changing architectures can help. Generally, using only what you need is an important mindset for improving your AI carbon footprint. An important note was that many efficiencies are now coded into frequently used libraries, which is helpful for development as they come for free. Finally, seeking to work for companies that are mindful of energy consumption and emissions will put pressure on firms to consider these issues in order to attract talented staff.

Dr Daniel Schien is a senior lecturer at the University of Bristol. His research aims are focused on improving our understanding of the environmental impact from information and communication technologies (ICT), and the reduction of such impact. We would like to thank him for his thoughtful presentation into the effect of AI with regards to climate change, and the discussions it provoked.