BIAS 22 – Review Day 1 – Dr James Cussens: ‘Algorithms for learning Bayesian networks’

BIAS 22 DAY 1, TALK 1 

This blog post is written by CDT Students Roussel Desmond Nzoyem, Davide Turco and Mauro Comi

Tuesday 6th September 2022 marked the start of the second edition of the Bristol Interactive AI Summer School (BIAS): a unique blend of events (talks, workshops, etc.) focusing on machine learning and other forms of AI explored in the Interactive AI CDT.

Following the tradition, BIAS22 began with a few words of introduction from the CDT director, Professor Peter Flach. He welcomed and warmly thanked the roomful of attendees from academia and industry.

Prof Flach proceeded with a passionate presentation of the range of speakers while giving the audience a brief taste of what to expect during the three-day event: talks and workshops, along with a barbecue! He remarked on the variety of Interactive AI ingredients that would be touched on: data-driven AI, knowledge-driven AI, human-AI interaction, and responsible AI.

Prof Flach’s introduction ended with an acknowledgement of the organisers of the event.


Dr James Cussens: ‘Algorithms for learning Bayesian networks’

The first talk of the day was an introduction to Bayesian networks and methods to learn them, given by our very own James Cussens.

Bayesian networks (BNs) are directed acyclic graphs in which each node represents a random variable. The important aspect of these networks, as Dr Cussens highlighted, is that they define both probability distributions and causal relationships: this makes Bayesian networks a popular tool in complex fields such as epidemiology and medical science.
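Concretely, the DAG fixes how the joint distribution factorises: each variable depends only on its parents in the graph,

\[
P(X_1, \dots, X_n) = \prod_{i=1}^{n} P\big(X_i \mid \mathrm{Pa}(X_i)\big),
\]

so for the three-node DAG A → B → C with an extra edge A → C, the joint is P(A, B, C) = P(A) · P(B | A) · P(C | A, B).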

Learning BNs is a form of unsupervised learning, based on the assumption that the available data (real or simulated) were generated by an underlying BN. There are multiple reasons for learning a BN from data, such as learning a data-generating probability distribution or learning conditional independence relationships between variables; the talk, however, focused on learning a BN in order to estimate a causal model of the data, a task that is not easy to complete with the other machine learning approaches we study and use in the CDT.

A popular approach to learning the structure of a BN (its DAG) is constraint-based learning: the basic idea is to perform statistical tests on the data and find a DAG that is consistent with the outcomes of those tests. However, this approach has some issues: for example, different DAGs can encode the same set of conditional independence relationships.
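To make this concrete, here is a minimal sketch of the kind of (conditional) independence test such algorithms are built on, using a chi-squared test on synthetic discrete data; the variables and the crude stratified test are illustrative assumptions, and a full constraint-based algorithm such as PC would run many such tests and assemble a DAG consistent with their outcomes.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n = 5000

# Toy data from a hypothetical chain A -> B -> C:
# A and C are dependent, but independent given B.
a = rng.integers(0, 2, size=n)
b = a ^ (rng.random(n) < 0.2).astype(int)   # noisy copy of A
c = b ^ (rng.random(n) < 0.2).astype(int)   # noisy copy of B
df = pd.DataFrame({"A": a, "B": b, "C": c})

def independent(df, x, y, given=None, alpha=0.01):
    """Crude CI test: chi-squared within each stratum of the
    conditioning variable, with a Bonferroni correction."""
    strata = [df] if given is None else [g for _, g in df.groupby(given)]
    p_values = [chi2_contingency(pd.crosstab(s[x], s[y]))[1] for s in strata]
    return min(p_values) > alpha / len(p_values)

print(independent(df, "A", "C"))             # likely False: marginally dependent
print(independent(df, "A", "C", given="B"))  # likely True: A independent of C given B
```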

Dr Cussens then proceeded to introduce DAGitty, a widely used software for creating DAGs and analysing their causal structure. It is important to note that DAGitty does not learn DAGs from data; rather, it allows the researcher to perform interventions and graph surgery. For example, it could allow a clinician to infer treatment-response causal effects without carrying out the intervention in practice. The talk also included a small excursus on score-based learning of BNs, which is a Bayesian approach to learning these networks, i.e., one with a prior formulation.
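As a flavour of the score-based alternative, here is a minimal sketch of one widely used decomposable score, the BIC, for a single discrete node given a candidate parent set (reusing the toy DataFrame df from the sketch above). This is an illustrative simplification: BIC rather than a fully Bayesian marginal-likelihood score with an explicit prior, and real systems add many refinements.

```python
import numpy as np

def bic_node(df, node, parents):
    """BIC contribution of one discrete node given a candidate parent set.
    Scores like this decompose over nodes, which is what exact optimisers
    can exploit when searching over DAGs."""
    n = len(df)
    r = df[node].nunique()                     # number of states of the node
    if parents:
        groups = [g for _, g in df.groupby(list(parents))[node]]
    else:
        groups = [df[node]]
    loglik = 0.0
    for g in groups:                           # maximised log-likelihood
        counts = g.value_counts().to_numpy()
        loglik += (counts * np.log(counts / counts.sum())).sum()
    n_params = len(groups) * (r - 1)           # observed parent configurations
    return loglik - 0.5 * n_params * np.log(n)

# The score of a whole DAG is the sum over its nodes, e.g. for A -> B -> C:
# bic_node(df, "A", []) + bic_node(df, "B", ["A"]) + bic_node(df, "C", ["B"])
```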

There are many different methods for learning BNs, and evaluation is key to choosing the best one. Dr Cussens introduced benchpress, a framework for performing method evaluation and comparison, and showed some results from the benchpress paper, including the evaluation of his own method, GOBNILP (Globally Optimal Bayesian Network learning using Integer Linear Programming).

We are thankful to James Cussens for opening BIAS22 with his talk; it was great to get an introduction to methods that bring together many aspects of our CDT, such as causality and graphical models.


BIAS 22 – Review Day 1: Professor James Ladyman: “Attributing cognitive and affective states to AI systems”

This blog post is written by CDT Student Henry Addison

BIAS 22 – Human stupidity about artificial intelligence: My thoughts on Professor James Ladyman’s talk “Attributing cognitive and affective states to AI systems”, Tuesday 6th September 2022

The speaker arrives, sweaty, excited after his dash across town. The AI never stops working so perhaps he cannot either. He is a doom-monger. How are the lying liars in charge lying to you? When those people-in-charge run large tech companies, how do they take advantage of our failures to both miss and over-attribute cognitive and affective states and agency to AI in order to manipulate us?

AI is a field that seeks to decouple capabilities that in humans are integrated. This can surprise the humans who interact with it. In adult humans, intelligence is wrapped up with sentience, and autonomy is wrapped up with responsibility. Not so the chess robot – very good at chess (which for a human requires intelligence) but not sentient – nor the guided targeting system, for which the responsibility of target-picking is left to a human.

Humans are overkeen to believe an AI cares, understands, is autonomous. These words have many meanings to humans, allowing the decoupling of capabilities to confuse us. “Who cares for granny?” This may be a request for the nurse (human or robot) who assists your grandmother in and out of the bath. Or it may be a despairing plea by a parent trying to get their children to help prepare a birthday party. If an AI is autonomous, is it a moral agent that is responsible for what it does?

The flip side of the coin is the capabilities that we do not attribute to an AI, perhaps because they are capabilities humans do not have. We lose sight of important things. Like how the machines notice and store away far more than we expect and then use these data to serve us ads, recommend us films, deny us loans, guide us to romantic partners, get us hooked on the angry ramblings of narcissists, lock us up.

AI is shit. AI is ruining us. AI is enabling a descent into fascism. But presumably there is enough hope for him to bother building up such a sweat to come and talk to us. We must do more philosophizing so more people can understand AI and avoid unconscious manipulation by it. The data business models that take advantage of us are our own fault: they are human creations, not technical necessities, which means we can change them.

Then again how was the speaker lying to me? How is your lying liar of an author lying to you?

AI Worldbuilding Contest – Future of Life Institute

This blog post is written by CDT Students Tashi Namgyal and Vanessa Hanschke.

Two Interactive AI CDT students, along with their three non-CDT teammates, were part of a team that won third place in the AI Worldbuilding Contest run by the Future of Life Institute. In this blog post, we would like to tell you more about the competition, its goals and our team’s process of creating the submission.

The Future of Life Institute describe themselves as “an independent non-profit that works to reduce extreme risks from transformative technologies, as well as steer the development and use of these technologies to benefit life”. Besides running contests, their work includes grants programmes for research projects, educational outreach, and engagement in AI policymaking both internationally and nationally in the US.

The worldbuilding competition was aimed at creating a discussion around a desirable future, in which Artificial General Intelligence (AI that can complete a wide range of tasks roughly as well as humans) played a major role in shaping the world. The deliverables included a timeline of events until 2045, two “day in the life” short stories, 13 answers to short question prompts and a media piece.

While dystopian or utopian visions of our future are quite commonplace in science fiction, the particular challenge of the competition was to provide an account of the future that was both plausible and hopeful. This formulation raised a lot of questions such as: For whom will the future be hopeful in 2045? How do we resolve or make progress towards existing crises such as climate change that threaten our future? We discussed these questions at length in our meetings before we even got to imagining concrete future worlds.

Our team brought together different backgrounds and nationalities: we had two IAI CDT PhD students, one civil servant, one Human-Computer Interaction researcher and one researcher in Creative Informatics. We were brought together by our shared values, interests, friendship, and our common homes, Bristol and Edinburgh. We tried to use these different backgrounds to provide a diverse picture of what the future could look like, generating future visions for domains that could be influenced by Artificial General Intelligence (AGI) but that are often low-tech and a core part of human society, such as art and religion.

To fit the project into our full-time working week, we decided to meet weekly during the brainstorming phase to collect ideas and create drafts for stories, events and question prompts on a Miro board. Each week we would also set each other small tasks to build a foundation for our world in 2045; for example, everyone had to write a day-in-the-life story for their own life in 2045. Closer to the deadline, we then chose a weekend for an intense, hackathon-like two days of work on more polished versions of all the different parts of the submission. During this weekend we went through each other’s answers, gave each other feedback and made suggestions to make the submission more cohesive. Our team was selected as one of the 20 finalists out of 144 entries, and there was a month for the public to give feedback on whether people felt inspired by, or would like to live in, such worlds before the final positions were judged by FLI.

Thinking about how AI tools may be used or misused in the future is a core part of the Interactive AI CDT. The first-year taught module on Responsible AI introduces concepts such as fairness, accountability, transparency, privacy and trustworthiness in relation to AI systems. We go through case studies of where these systems have failed in each regard so we can see how ethics, law and regulation apply to our own PhD research, and in turn how our work might impact these things in the future. In the research phase of the programme, the CDT organises further workshops on topics such as Anticipation & Responsible Innovation and Social & Ethical Issues and there are international conferences in this area we can join with our research stipends, such as FAccT.

If you are curious, you can view our full submission here or listen to the podcast, which we submitted as our media piece, here. In our submission, we really tried to centre humanity’s place in this future. In summary, the world we created was meant to make you feel the future, to really imagine your place in 2045. Current big tech is not addressing the crises of our times, including inequality, climate change, war, and pestilence. Our world seeks to imagine a future where human values are still represented – our propensity for cooperation, creativity, and emotion. But we had to include a disclaimer for our world: our solutions are still open to the risk of human actors using them for ill purposes. Our solution for regulating AGI was built on it being an expensive technology in the hands of a few companies and regulated internationally, but we tried to think beyond the bounds of AGI. We imagine a positive future grounded in a balanced climate; proper political, social and economic solutions to real-world problems; and human dignity maintained and respected.


Bristol Summer AI day – 30 June 2022

This blog post is written by CDT Students Mauro Comi and Matt Clifford.

For the Bristol Summer AI day we were lucky enough to hear from an outstanding group of internationally renowned speakers. The talks centred on the evaluation of machine learning models, and during the day we touched on a variety of interesting concepts: multilabel calibration, visual perception, meta-learning, uncertainty awareness and the evaluation of calibration. It was an enjoyable and inspiring day, and we give huge thanks to all of the organisers and speakers.

Capability Oriented Evaluation of Models

The day’s talks opened with Prof. José Hernández-Orallo, who presented his work on evaluating the capabilities of models rather than their aggregated performance. Capability and performance are two words in machine learning evaluation that are often mistakenly used interchangeably.

Capabilities give a more fine-grained evaluation of a model, letting us predict the model’s success at the level of individual instances. This is crucial and reassuring for safety-critical applications, where knowing the limits of a model’s use is essential.

Evaluation of classifier calibration

Prof. Meelis Kull gave a pragmatic demonstration of how calibration error can be estimated. After giving us an overview of the possible biases that arise when estimating calibration error from a given test set, he explained a new paradigm, ‘fit-on-the-test’. This approach reduces some of these biases, such as those due to arbitrary binning choices over the probability space.
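To make the binning issue concrete, here is a minimal sketch (on synthetic data) of the classic binned estimate of calibration error for a binary classifier; notice how the estimate moves with the arbitrary bin count, which is the kind of bias that fitting a calibration map on the test set aims to avoid.

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Binned expected calibration error: weighted average gap between
    predicted probability and observed frequency within each bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    total = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            total += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return total

rng = np.random.default_rng(1)
p_true = rng.random(10_000)
labels = (rng.random(10_000) < p_true).astype(int)
distorted = np.clip(1.3 * p_true - 0.15, 0.0, 1.0)   # an overconfident classifier

print(ece(p_true, labels))                # near 0: perfectly calibrated scores
print(ece(distorted, labels))             # clearly larger
print(ece(distorted, labels, n_bins=50))  # the estimate shifts with the binning
```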

A Turing Test for Artificial Nets devoted to vision

The work Jesus presented focused on understanding the visual system. Deep neural networks are the current state of the art in machine vision tasks, and they take some degree of inspiration from the human visual system; however, using deep neural networks to understand the visual system is not easy.

Their work proposes an easy-to-use test to determine whether human-like behaviour emerges from a network. This, in principle, is a desirable property of a network designed to perform tasks similar to those the human brain carries out, such as image segmentation and object classification.

The experiments are a Turing-style set of tests that the human visual system is known to pass, and they provide a notebook-style test bed on GitHub. In theory, if your network operating on the visual domain passes the tests, then it can be regarded as having a competent understanding of the natural visual world.

The evaluation procedures were later explained by Jesus’ PhD students Jorge and Pablo. They take two networks, PerceptNet and a UNet variation, and determine their level of similarity to the human visual system. They test known features that the human visual system processes when shown natural images, such as Gabor-filter edge outputs and luminosity-sensitive scaling. The encoded representations of the images from PerceptNet and UNet are then compared with what is found in the human visual system to reveal any discrepancies.

This work on evaluating networks’ understanding of natural images is useful for justifying decisions such as architecture design, and for probing what knowledge a network has learnt.
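As an aside for readers curious what such comparisons can look like in code, here is a crude sketch in the spirit of representational similarity analysis (an illustrative stand-in, not the authors’ actual procedure): correlate the pairwise response-distance structures of two systems over the same stimuli.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def representational_similarity(resp_a, resp_b):
    """Spearman correlation between the pairwise distance structures of two
    response matrices (stimuli x features); 1.0 means the systems rank
    stimulus (dis)similarities identically."""
    return spearmanr(pdist(resp_a), pdist(resp_b)).correlation

# Hypothetical stand-ins for network embeddings and visual-system
# responses to the same 50 stimuli.
rng = np.random.default_rng(0)
reference = rng.normal(size=(50, 128))
related = reference @ rng.normal(size=(128, 64))   # roughly distance-preserving map
unrelated = rng.normal(size=(50, 64))

print(representational_similarity(reference, related))    # high
print(representational_similarity(reference, unrelated))  # near 0
```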

Uncertainty awareness in machine learning models

Prof. Eyke Hüllermeier’s talk expanded on the concept of uncertainty awareness in ML models. ML classifiers tend to be overconfident in their predictions, which can lead to harmful behaviour, especially in safety-critical contexts. Ideally, we want an ML system to give us an unbiased and statistically reliable estimate of its uncertainty. In simpler words, we want our model to tell us “I am not sure about this”.

When dealing with uncertainty, it is important to distinguish aleatoric uncertainty, due to stochasticity in the data, from epistemic uncertainty, which is caused by a lack of knowledge. However, Prof. Hüllermeier explained that it is often hard to discern the source of uncertainty in real-world scenarios. The conversation moved from a frequentist to a Bayesian perspective on uncertainty, and dived into different levels of probability estimation.
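One common way to make this distinction operational, in keeping with the Bayesian perspective, is an ensemble-based decomposition: total predictive uncertainty is the entropy of the averaged prediction, the aleatoric part is the average entropy of the individual predictions, and the epistemic part is the gap between the two (the members’ disagreement). A minimal sketch with synthetic ensemble outputs:

```python
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(np.clip(p, 1e-12, None))).sum(axis=axis)

def decompose(ensemble_probs):
    """ensemble_probs: (members, classes) predictions for a single input.
    Returns (total, aleatoric, epistemic) uncertainty in nats."""
    total = entropy(ensemble_probs.mean(axis=0))   # entropy of the mean
    aleatoric = entropy(ensemble_probs).mean()     # mean of the entropies
    return total, aleatoric, total - aleatoric

# Members agree on a 50/50 prediction: uncertainty is purely aleatoric.
agree = np.array([[0.5, 0.5]] * 4)
# Members confidently disagree: the uncertainty is mostly epistemic.
disagree = np.array([[0.95, 0.05], [0.05, 0.95]] * 2)

print(decompose(agree))     # total == aleatoric, epistemic ~ 0
print(decompose(disagree))  # large epistemic component
```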

ML for Explainable AI

Explainability is a hot topic in Machine Learning nowadays. Prof. Cèsar Ferri presented his recent work on creating accurate and interpretable explanations using Machine Teaching, an area that looks for the optimal examples that a teacher should use to make a learner capture a concept.

This is an interesting inversion of the usual machine learning setup: the roles of teacher and student are flipped, with the teacher optimising what to show the learner. Prof. Ferri showed how this concept was applied to make noisy curves representing battery health monitoring more interpretable. This involves selecting an explanation that balances simplicity against persuasiveness in order to convey the information to the user effectively.
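To give a flavour of the optimisation involved (a toy sketch, not Prof. Ferri’s method): greedily pick teaching examples for a simple learner, trading accuracy of the induced concept against the size of the example set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the concept to be taught

def teach(X, y, budget=8, penalty=0.02):
    """Greedy machine teaching: each step adds the example that most
    improves the learner's agreement with the concept, minus a cost
    per example (the simplicity vs. persuasiveness trade-off)."""
    chosen = [int(np.flatnonzero(y == 0)[0]),   # seed with one example
              int(np.flatnonzero(y == 1)[0])]   # per class
    def value(idx):
        learner = LogisticRegression().fit(X[idx], y[idx])
        return learner.score(X, y) - penalty * len(idx)
    best = value(chosen)
    while len(chosen) < budget:
        candidates = [i for i in range(len(X)) if i not in chosen]
        gains = [value(chosen + [i]) for i in candidates]
        j = int(np.argmax(gains))
        if gains[j] <= best:
            break
        chosen.append(candidates[j])
        best = gains[j]
    return chosen

print(teach(X, y))   # a handful of well-placed examples teaches a linear concept
```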

Explainability in meta learning and multilabel calibration

Dr. Telmo Silva Filho extended the concept of explainability introduced by Prof. Ferri to the meta-learning setting. The first paper he described proposed a novel method, Local Performance Regions, which extracts rules from a predetermined region of the data space and links them to an expected performance.

He then moved on, together with Dr. Hao Sang, to a discussion of multilabel classification and calibration, and of how multilabel calibration is often necessary due to the limitations of label-wise classification. The novelty of their approach lies in calibrating a joint label probability with a consistent covariance structure.

Final Words

Again, we would like to emphasise our gratitude to all of the speakers and organisers of this event and we look forward to the next interactive AI event!

CDT Research Showcase Day 2 – 31 March 2022

This blog post is written by CDT Student Matt Clifford

The second day of the research showcase focused on the future of interactive AI. The future is, of course, challenging to predict, so the day was spent highlighting three key areas: AI in green/sustainable technologies, AI in education and AI in creativity.

For each of the three areas, we were given an introductory talk from industry or academia.

AI in green/sustainable technologies, Dr. Henk Muller, XMOS

Henk is CTO of the Bristol-based microchip designer XMOS. XMOS’s vision is to provide low-power solutions that enable AI to be deployed on edge systems rather than being cloud-based.

Edge devices benefit from lower latency and cost, and they facilitate a more private system, since all computation is executed locally. However, edge devices have limited power and memory, which restricts the complexity of the models that can be used: models have to be reduced in size or precision to fit the compute budget. I see this as a positive for model design and implementation. Many machine learning engineers quote Occam’s razor as a philosophical pillar of design, but in practice it is far too tempting to throw power-hungry supercomputer resources at problems where perhaps they aren’t needed.
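As a small illustration of the “reduce precision” option, here is a generic post-training quantization sketch in PyTorch; this is a stand-in for the idea only, since XMOS’s actual toolchain targets its own hardware.

```python
import torch
import torch.nn as nn

# A small model standing in for something to be deployed at the edge.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# Post-training dynamic quantization: weights are stored as int8, so the
# parameter footprint shrinks roughly 4x and matmuls use integer kernels.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear},
                                                dtype=torch.qint8)

x = torch.randn(1, 128)
print(model(x).shape, quantized(x).shape)   # same interface, cheaper model
print(sum(p.numel() * p.element_size() for p in model.parameters()), "bytes fp32")
```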

It’s refreshing to see how the constraints that XMOS’s chips present open the door to green and sustainable AI research and innovation in a way that many other hardware manufacturers don’t encourage.

AI in Education, Dr. Niall Twomey, Kidsloop

Niall Twomey, KidsLoop, giving the AI in Education talk

AI for/in/with education helps teachers by providing the potential for personalised assistants in a classroom environment. These would give aid to students when the teacher’s focus and attention are elsewhere.

The most recent work from KidsLoop addresses the needs of neurodivergent students, concentrating on making learning more appropriate to innate ability rather than to neurotypical standards. AI in education has the potential to reduce biases towards neurotypical students in the education system, through a more dynamic method of teaching that scales well to larger classroom sizes. I think these prospects are crucial in the battle to reduce stigma and to overcome the challenges neurodivergent students face.

You can find the details of the methods used in their paper: Equitable Ability Estimation in Neurodivergent Student Populations with Zero-Inflated Learner Models, Niall Twomey et al., 2022. https://arxiv.org/abs/2203.10170

It’s worth mentioning that KidsLoop will be looking for a research intern soon. So, if you are interested in this exciting area of AI, keep your eyes peeled for the announcements.

AI in Creativity, Prof. Atau Tanaka, University of Bristol

Atau Tanaka giving the AI and Creativity talk, with Peter Flach leading the Q&A session

The third and final topic of the day was AI in a creative environment, specifically for music. Atau showcased an instrument he designed that uses the electrical signals produced by the body’s muscles to capture a person’s gesture as the input. He assigns each gesture input to a corresponding sound, and from here a regression model is fitted, enabling interpolation between the gestures. This allows novel sounds to be synthesised from new gestures. The sounds themselves are experimental, dissonant, and distant from the original input sounds, yet Atau seems to have control and intent over the whole process.
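A minimal sketch of the underlying idea (with hypothetical gesture features and synthesiser parameters, not Atau’s actual system): fit a regression model on a handful of gesture-sound pairs, then let in-between gestures produce interpolated parameters.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training pairs: muscle-signal feature vectors mapped to
# synthesiser parameters (say pitch in Hz, filter cutoff, grain density).
gestures = np.array([[0.9, 0.1, 0.0],    # e.g. clenched fist
                     [0.1, 0.8, 0.1],    # e.g. open hand
                     [0.0, 0.2, 0.9]])   # e.g. wrist twist
sounds = np.array([[220.0, 0.2, 0.9],
                   [440.0, 0.7, 0.3],
                   [330.0, 0.5, 0.6]])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                     random_state=0).fit(gestures, sounds)

# A new, in-between gesture yields interpolated synthesis parameters:
# novel movements produce novel sounds.
print(model.predict(np.array([[0.5, 0.45, 0.05]])))
```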

The interactive ML training process Atau uses offers a glimpse of a tangibility of ML that we rarely get to experiment with. I would love to see an active-learning-style component added to the learning algorithm, which would solidify the human-machine interaction further.

Creativity and technology are intertwined at their core and I am always excited to see how emerging technologies can influence creativity and how creatives find ways to redefine creativity with technology.

Breakout Groups and Plenary Discussion

Discussion groups during the Research Showcase

After lunch we split into three groups to share thoughts on our favourite topic area. It was great to share opinions and motivations with one another. The overall aim of the discussion was to flesh out a rough idea that could be taken forward as a research project, with motivations, goals, deliverables etc. A great exercise for us first years to undertake before we enter the research phase of the CDT!

Closing Thoughts

I look forward to having more of these workshop sessions in the future as the restrictions of the COVID pandemic ease. I personally find them highly inspirational, and I believe that the upcoming fourth IAI CDT cohort will benefit significantly from having more in-person events like these workshops. I think they will be especially beneficial for exploring, formulating and collaborating on summer project ideas, which is arguably one of the most pivotal aspects of the CDT.

CDT Research Showcase Day 1 – 30 March 2022

Blog post written by CDT Student Oli Deane.

This year’s IAI CDT Research Showcase represented the first real opportunity to bring the entire CDT together in the real world, permitting in-person talks and face-to-face meetings with industry partners.

Student Presentations

Grant Stevens giving his Pecha Kucha talk

The day began with a series of quickfire talks from current CDT students. Presentations had a different feel this year as they followed a Pecha Kucha style; speakers had ~6 minutes to present their research with individual slides automatically progressing after 20 seconds. As a result, listeners received a whistle-stop tour of each project without delving into the nitty gritty details of research methodologies.

Indeed, this quickfire approach highlighted the sheer diversity of projects carried out in the CDT. The presented projects had a bit of everything: from a data set for analyzing great ape behaviors to classification models that determine dementia progression from time-series data.

It was fascinating to see how students incorporated interactivity into project designs. Grant Stevens, for example, uses active learning and outlier detection methods to classify astronomical phenomena. Tashi Namgyal has developed MIDI-DRAW, an interactive musical platform that permits the curation of short musical samples with user-provided hand-drawn lines and pictures. Meanwhile, Vanessa Hanschke is collaborating with LV to explore how better ethical practices can be incorporated into the data science workflow; for example, her current work explores an ethical ‘Fire-drill’ – a framework of emergency responses to be deployed in response to the identification of problematic features in existing data-sets/procedures. This is, however, just the tip of the research iceberg and I encourage readers to check out all ongoing projects on the IAI CDT website.

Industry Partners

Gustavo Medina Vazquez’s EDF Energy presentation, with the Q&A session led by CDT Director Peter Flach

Next, representatives from three of our industry partners presented overviews of their work and their general involvement with the CDT.

First up was Dylan Rees, a Senior Data Engineer at LV. With a data science team stationed in MVB at the University of Bristol, LV are heavily involved with the university’s research. As well as working with Vanessa to develop ethical practices in data science, they run a cross-CDT datathon in which students battle to produce optimal models for predicting fair insurance quotes. Rees emphasized that LV want responsible AI to be at the core of what they do, highlighting how insurance is a key example of how developments in transparent, and interactive, AI are crucial for the successful deployment of AI technologies. Rees closed his talk with a call to action: the LV team are open to, and eager for, any collaboration with UoB students – whether it be to assist with data projects or act as “guinea pigs” for advancing research on responsible AI in industry.

Gustavo Medina Vazquez from EDF Energy then discussed their work in the field and outlined some examples of past collaborations with the CDT. They are exploring how interactive AI methods can assist in the development and maintenance of green practices – for example, one ongoing project uses computer vision to identify faults in wind turbines. EDF previously collaborated with members of the CDT 2019 cohort on an interactive search-based mini project.

Finally, Dr. Claire Taylor, a representative from QinetiQ, highlighted how interactive approaches are a major focus of much of their research. QinetiQ develops AI-driven technologies in a diverse range of sectors: from defense to law enforcement, aviation to financial services. Dr. Taylor discussed the changing trends in AI, outlining how previously fashionable methods that lost attention in recent years are making a comeback, courtesy of the AI world’s recognition that we need more interpretable, and less compute-intensive, solutions. QinetiQ also sponsors Kevin Flannagan’s (CDT 2020 cohort) PhD project, in which he explores the intersection between language and vision, creating models that ground words and sentences within corresponding videos.

Academic Partners and Poster Session

Research Showcase poster session
Research Showcase poster session

To close out the day’s presentations, our academic partners discussed their relevant research. Dr. Oliver Ray first spoke of his work in Inductive Logic Programming before Dr. Paul Marshall gave a perspective from the world of human computer interaction, outlining a collaborative cross-discipline project that developed user-focused technologies for the healthcare sector.

Finally, a poster session rounded off proceedings; a studious buzz filled the conference hall as partners, students and lecturers alike discussed ongoing projects, questioning existing methods and brainstorming potential future directions.

In all, this was a fantastic day of talks, demonstrations, and general AI chat. It was an exciting opportunity to discuss real research with industry partners and I’m sure it has produced fruitful collaborations.

I would like to end this post with a special thank you to Peter Relph and Nikki Horrobin who will be leaving the CDT for bigger and better things. We thank them for their relentless and frankly spectacular efforts in organizing CDT events and responding to students’ concerns and questions. You will both be sorely missed, and we all wish you the very best of luck with your future endeavors!

January Research Skills Event Review: Day 2

This review is written by CDT Student Oliver Deane.

Day 2 of the IAI CDT’s January Research Skills event included a diverse set of talks that introduced valuable strategies for conducting original and impactful research.

Unifiers and Diversifiers

Professor Gavin Brown of the University of Manchester kicked things off with a captivating talk on a dichotomy of scientific styles: Unifying and Diversifying.

Calling upon a plethora of quotations and concepts from a range of philosophical figures, Prof. Brown contends that most sciences, and indeed scientists, are dominated by one style or the other. He described how a unifying researcher focuses on general principles, seeking out commonalities between concepts to construct all-encompassing explanations for phenomena, while a ‘Diversifier’ ventures into the nitty-gritty, exploring the details of a task in search of novel solutions to specific problems. Indeed, as Prof. Brown explained, this fascinating dichotomy keeps science in a “dynamic equilibrium”: unifiers construct rounded explanations that are subsequently explored and challenged by diversifying thinkers, and the resulting outcome in turn fuels unifiers’ instinct to adapt the initial explanations to account for the new evidence – and round and round we go.

Examples from the field

Prof. Brown proceeded to illustrate these styles with examples from the field. He identifies DeepMind founder Demis Hassabis as a textbook ‘Unifier’, utilizing a substantial knowledge of the broad research landscape to connect and combine ideas from different disciplines. Contrarily, Yann LeCun, master of the convolutional neural network, falls comfortably into the ‘Diversifier’ category; he has a focused view of the landscape, specializing in a single concept to identify practical, previously unexplored solutions.

Relevant Research Strategies

We were then encouraged to reflect upon our own research instincts and understand the degree to which we adopt each style. With this in mind, Prof. Brown introduced valuable strategies for identifying novel and worthwhile research avenues. Unifiers can look under the hood of existing solutions, building bridges across disciplines to identify alternative concepts that can be reconstructed and reapplied in the given problem domain. Diversifiers, on the other hand, should adopt a data-centric point of view, challenging existing assumptions and, in doing so, altering their mindset to approach tasks from unconventional angles.

This fascinating exploration into the world of Unifiers and Diversifiers offered much food for thought, providing students with practical insights that can be applied to our broad research methodologies, as well as our day-to-day studies.

Research Skills in Interactive AI

After a short break, a few familiar faces delved deeper into specific research skills relevant to the three core components of the IAI CDT: Data-driven AI, Knowledge-Driven AI, and Interactive AI.

Data-Driven AI

Professor Peter Flach began his talk by reframing data-driven research as a “design science”: one must analyze a problem, design a solution and build an artefact accordingly. As a result, the emphasis of the research process falls on creativity; researchers should approach problems by identifying novel perspectives and cultivating original solutions – perhaps by challenging some underlying assumptions made by existing methods. Peter proceeded to highlight the importance of the evaluation process in Machine Learning (ML) research, introducing some key dos and don’ts to guide scientific practice: DO formulate a hypothesis, DO expect an onerous debugging process, and DO prepare for mixed initial results. DON’T use too many evaluation metrics – select an appropriate metric given a hypothesis and stick with it. And DON’T design the evaluation to favor one method over another; bias has no place in evaluation, as “it is not the Olympic Games of ML”.

Knowledge-Based AI

Next, Dr. Oliver Ray covered knowledge-based AI, describing it as the bridge between weak and strong AI. He emphasized that knowledge-based AI is the backbone for building ethical models, permitting interpretability, explainability and, perhaps most pertinently, interactivity. Oliver framed the talk in the context of the hypothetico-deductive model, a description of scientific method in which we formulate a falsifiable hypothesis and then use it to explore why some outcome is not as expected.

Interactive AI

Finally, Dr. Paul Marshall took listeners on a whistle-stop tour of research methods in Interactive AI, focusing on scientific methods adopted by the field of Human-Computer Interaction (HCI). He pointed students towards formal research processes that have had success in HCI. Verplank’s Spiral, for example, takes researchers from ‘Hunch’ to ‘Hack’, guiding a path from idea, through design and prototype, all the way to a well-researched solution or artefact. Such practices are covered in more detail during a core module of the IAI training year: ‘Interactive Design’.

In all, this was a useful and engaging workshop that introduced a diverse set of research practices and perspectives that will prove invaluable tools during the PhD process.

January Research Skills Event Review: Day 1


This review is written by CDT Student Isabella Degen, @isabelladegen

The first day of the January Research Skills event was about academic web presence. On the agenda were:

  • a presentation on academic blogging and social media by Gavin
  • a group discussion about our experiences
  • a hackathon, organised by Benjamin and Tashi, to extend an authoring platform that makes it easy to publish academic content on the web

Academic web presence

In his talk, Gavin shared his experience of blogging and tweeting about his research. His web presence is driven by his passion for writing.

I particularly like Gavin’s practice of writing a Twitter thread for each of his academic papers. I think summarising a complex paper into a few approachable tweets helps to focus on the most important points of the work and provides clarity.

Hackathon

For the hackathon we looked at an authoring platform that can be used to easily publish our work on the Centre for Doctoral Training’s website. The aim of the website is to be a place where people internal and external to the CDT can explore what we are all working on.

The homepage-dev codebase served as a starting point. It uses Jekyll as a static site generator. A blog post is written as a markdown file and can include other online content, such as PDFs, videos, Jupyter notebooks and Reveal.js presentations, through a front matter template. Uploading the markdown file to an online code repository triggers the publishing workflow.
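A minimal sketch of what creating such a post might look like, written as a small Python script (the front matter field names here are hypothetical; the real template is defined by the homepage-dev repository):

```python
from pathlib import Path
from textwrap import dedent

# The YAML between the '---' markers is the front matter Jekyll reads
# before rendering the markdown body. Field names are illustrative.
post = dedent("""\
    ---
    title: "You Only Write Thrice"
    author: Isabella Degen
    date: 2022-01-14
    tags: [research-skills, blogging]
    ---

    Post body in markdown, which may embed PDFs, videos and notebooks.
    """)

# Committing this file to the repository triggers the publishing workflow.
Path("_posts").mkdir(exist_ok=True)
Path("_posts/2022-01-14-you-only-write-thrice.md").write_text(post)
```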

It only took us a few minutes to get a github.io page started using this setup. We didn’t extend the workflow beyond being able to write our own blogs using what had already been set up.

At the end we discussed using such a workflow to avoid repeating the same content for different purposes over and over again. The idea is to apply the software development principle of DRY (“don’t repeat yourself”) to written content, graphs and presentations, creating a workflow that keeps all communications about the same research up to date. You can read more about it in: You Only Write Thrice.

Takeaway

The event got me thinking about having a web presence dedicated to my research. I’m inspired by the idea of sharing clear and concise pieces of my research, and by how in return this could bring a lot of clarity to my work.

If you are somebody who reads or writes about research on platforms like Twitter, LinkedIn or in your own blog I’d love to hear about your experiences.

BIAS Day 1 Review: ‘Interactive AI’

This review of the 1st day of the BIAS event, ‘Interactive AI’, is written by CDT Student Vanessa Hanschke

The Bristol Interactive AI Summer School (BIAS) was opened with the topic of ‘Interactive AI’, congruent with the name of the hosting CDT. Three speakers presented three different perspectives on human-AI interactions.

Dr. Martin Porcheron from Swansea University started with his talk “Studying Voice Interfaces in the Home”, looking at AI in one of the most private of all everyday contexts: smart speakers placed in family homes. Using an ethnomethodological approach, Dr. Porcheron and his collaborators recorded and analysed snippets of family interactions with an Amazon Echo. They used a purpose-built device to record conversations before and after activating Alexa. While revealing how the interactions with the conversational AI were embedded in the life of the home, this talk was a great reminder of how messy real life may be compared to the clean input-output expectations AI research can sometimes set. The study was also a good example of the challenge of designing research in a personal space, while respecting the privacy of the research subjects.

Taking a more industrial view of human-AI interactions, Dr Alison Smith-Renner from Dataminr followed with her talk “Designing for the Human-in-the-Loop: Transparency and Control in Interactive ML”. How can people collaborate with an ML (Machine Learning) model to achieve the best outcome? Dr. Smith-Renner used topic modelling to understand the human-in-the-loop problem with respect to these two aspects: (1) Transparency: methods for explaining ML models and their results to humans. And (2) Control: how users can provide feedback to systems. In her work, she looks at how users are affected by the different ways ML can apply their feedback and if model updates are consistent with the behaviour the users anticipate. I found particularly interesting the different expectations the participants of her study had of ML and how the users’ topic expertise influenced how much control they wanted over a model.

Finally, Prof. Ben Shneiderman from the University of Maryland concluded with his session titled “Human-Centered AI: A New Synthesis”, giving a broader view of where AI should be heading by building a bridge between HCI (Human-Computer Interaction) and AI. To the question of how AI can be built in a way that enhances people, Prof. Shneiderman presented three answers: the HCAI framework, design metaphors and governance structures, which are featured in his recently published book. Hinting towards day 4’s topic of responsible AI, Prof. Shneiderman drew a compelling comparison between safety in the automobile industry and responsible AI. While unlimited innovation is often used as an excuse for a deregulated industry, regulation demanding higher safety in cars led to an explosion of innovation in seat belts and airbags that the automobile industry is now proud of. The same can be observed for the right to explanation in GDPR and the ensuing innovation in explainable AI. At the end of the talk, Prof. Shneiderman called on AI researchers to create a future that serves humans, is sustainable, and “makes the world warmer and kinder”.

It was an inspiring afternoon for anyone interested in the intersection of humans and AI, especially for researchers like me who are trying to understand how we should design interfaces and interactions so that we, as humans, can gain the most benefit from powerful AI systems.

BIAS Day 4 Review: ‘Data-Driven AI’

This review of the 4th day of the BIAS event, ‘Data-Driven AI’, is written by CDT Student Stoil Ganev.

The main focus of the final day of BIAS was Data-Driven AI. Of the four pillars of the Interactive AI CDT, the data-driven aspect tends to have a more “applied” flavour than the rest. This is for a variety of reasons, but most of them can be summed up in the statement that Data-Driven AI is the AI of the present. Most deployed AI algorithms and systems are structured around the idea of data X going in and prediction Y coming out. This paradigm is popular because it easily fits into modern computer system architectures. For all of their complexity, modern at-scale computer systems generally function like data pipelines: one part takes in a portion of data, transforms it, and passes it on to another part of the system to perform its own transformation. In this kind of architecture, a simple “X goes in, Y comes out” AI is easy to integrate, since it is no different from any other component.

Additionally, data is a resource that most organisations have in abundance. Every sensor reading, user interaction or system-to-system communication can be easily tracked, recorded and compiled into usable chunks of data. In fact, for accountability and transparency reasons, organisations are often required to record and track much of this data. As a result, most organisations are left with massive repositories of data which they are not able to fully utilise, and Data-Driven AI is often relied on as a straightforward, low-cost way of capitalising on these stores. This “applied” aspect of Data-Driven AI was very much present in the talks given on the last day of BIAS: compared to the other days, they reflected practical considerations around deploying AI.

The first talk was given by Professor Robert Jenssen from The Arctic University of Norway. It focused on work he had done with his students on automated monitoring of electrical power lines: more specifically, how to use unmanned aerial vehicles (UAVs) to automatically discover anomalies in the power grid. A point he made in the talk was that the amount of time they spent on engineering efforts was several times larger than the amount spent on novel research. There was no off-the-shelf product they could use or adapt, so their system had to be written mostly from scratch. In general, this seems to be a pattern with AI systems: even if the same model is utilised, the resulting system ends up extremely tailored to its own problem and cannot easily be reused for a different one.

They ran into a similar problem with the data set. Given that the problem of monitoring power lines is rather niche, there was no directly applicable data set they could rely on. I found their solution quite clever in its simplicity: since gathering real-world data is rather difficult, they opted to simulate their data set, using 3D modelling software to replicate the environment of the power lines. Given that most power masts sit in the middle of fields, that environment is easy to simulate. For more complicated problems such as autonomous driving, this simulation approach is not feasible: it is impossible to properly simulate human behaviour, which the AI would need to model, and there is a large variety in urban settings as well. However, for a mast sitting in a field, you can capture most of the variety by changing the texture of the ground. This approach also has advantages over real-world data. Some types of anomalies are so rare that they might simply not be captured by the data-gathering process, or be too rare for the model to notice; in simulation, however, it is easy to introduce any type of anomaly and ensure it has proper representation in the data set.

In terms of architecture, they opted to structure the system as a pipeline of sub-tasks, with separate models for component detection, anomaly detection, and so on. This piecewise approach is very sensible, given that most anomalies are likely independent of each other; additionally, the more specific a problem is, the easier and faster it is to train a model for it. However, this approach tends to carry larger engineering overheads: with more components, proper communication and synchronisation between them needs to be ensured, and depending on the length of the pipeline it might become difficult to make it perform fast enough. In general, I think the work Professor Jenssen and his students did in this project is very much representative of what deploying AI systems in the real world is like. Often your problem is so niche that there are no readily available solutions or data sets, so a majority of the work has to be done from scratch. And even if there is limited or no need for novel AI research, a problem might still require a large engineering effort to solve.
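A schematic of that piecewise architecture (purely illustrative; the stage names and outputs are invented stand-ins for the separately trained models):

```python
from dataclasses import dataclass, field

@dataclass
class Inspection:
    image_id: str
    components: list = field(default_factory=list)
    anomalies: list = field(default_factory=list)

def detect_components(ins):
    ins.components = ["insulator", "conductor"]   # stand-in for a detector model
    return ins

def detect_anomalies(ins):
    # Stand-in for the per-component anomaly models further down the pipeline.
    ins.anomalies = [c for c in ins.components if c == "insulator"]
    return ins

PIPELINE = [detect_components, detect_anomalies]

def run(image_id):
    ins = Inspection(image_id)
    for stage in PIPELINE:        # keeping these stages in sync is the
        ins = stage(ins)          # engineering overhead noted in the talk
    return ins

print(run("uav_frame_0042"))
```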

The second talk of the day was given by Jonas Pfeiffer, a PhD student from the Technical University of Darmstadt, who introduced us to his research on Adapters for Transformer models. Adapters are a lightweight and faster approach to fine-tuning Transformer models for different tasks. The idea is rather simple: Adapters are small layers added between the Transformer layers; during fine-tuning only the Adapters are trained, while the Transformer layers are kept fixed. While simple and straightforward, this approach appears to be rather effective.

Beyond his research on Adapters, Jonas is also one of the main contributors to AdapterHub.ml, a framework for training and sharing Adapters. This brings our focus to an important part of getting AI research out of the papers and into the real world: creating accessible and easy-to-use programming libraries. We as researchers often neglect this step or consider it to be beyond our responsibilities, and not without sensible reasons. A programming library is not just the code it contains: it requires training materials for new users, tracking of bugs and feature requests, maintaining and following a development road map, managing integrations with other libraries that are dependencies or dependants, and so on. All of these aspects require significant effort from the maintainers, effort that does not contribute to research output and consequently does not count towards the criteria by which we are judged as successful scientists. As such, it is always a delight to see a researcher willing to go this extra mile to make his or her research more accessible.

Jonas’s talk also had a tutorial section in which he led us through the process of fine-tuning an off-the-shelf pre-trained Transformer. The tutorial was delivered through Jupyter notebooks, easily accessible from the project’s website; within minutes we had our own working examples to dissect and experiment with. Given that Adapters and the AdapterHub.ml framework are very recent innovations, the amount and quality of documentation and training resources within this project is highly impressive. Adapters and the AdapterHub.ml framework are excellent tools that, I believe, will be useful to me in the future, and I am very pleased to have attended this talk and discovered them through it.
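The core idea is compact enough to sketch from scratch; here is a schematic PyTorch version of a bottleneck adapter (not the AdapterHub implementation, whose details differ):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, plus a
    residual connection. Inserted between frozen Transformer layers."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)   # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

# During fine-tuning, only the adapters (and a task head) are trained:
# for p in transformer.parameters(): p.requires_grad = False
# for p in adapter.parameters(): p.requires_grad = True
print(Adapter()(torch.randn(2, 10, 768)).shape)   # shape-preserving
```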

The final day of BIAS was an excellent wrap-up to the summer school. With its more applied focus, it showed us how the research we are conducting can be translated to the real world and how it can have an impact. We got a flavour both of what it is like to develop and deploy an AI system, and of what it is like to provide a programming library for our developed methods. These are aspects of our research that we often neglect or overlook, so this day served as a great reminder that our research is not something confined within a lab, but work that lives and breathes within the context of the world that surrounds us.