January Research Skills Event Review: Day 2

This review is written by CDT Student Oliver Deane.

Day 2 of the IAI CDT’s January Research Skills event included a diverse set of talks that introduced valuable strategies for conducting original and impactful research.

Unifiers and Diversifiers

Professor Gavin Brown of the University of Manchester kicked things off with a captivating talk on a dichotomy of scientific styles: Unifying and Diversifying.

Calling upon a plethora of quotations and concepts from a range of philosophical figures, Prof. Brown contended that most sciences, and indeed scientists, are dominated by one of these two styles. He described how a ‘Unifier’ focuses on general principles, seeking out commonalities between concepts to construct all-encompassing explanations for phenomena, while a ‘Diversifier’ ventures into the nitty-gritty, exploring the details of a task in search of novel solutions to specific problems. Indeed, as Prof. Brown explained, this fascinating dichotomy keeps science in a “dynamic equilibrium”: unifiers construct rounded explanations that are subsequently explored and challenged by diversifying thinkers; the resulting evidence in turn fuels unifiers’ instinct to adapt their initial explanations – and round and round we go.

Examples from the field

Prof. Brown proceeded to illustrate these styles with examples from the field. He identified DeepMind founder Demis Hassabis as a textbook ‘Unifier’, utilizing a substantial knowledge of the broad research landscape to connect and combine ideas from different disciplines. Conversely, Yann LeCun, master of the Convolutional Neural Network, falls comfortably into the ‘Diversifier’ category; he has a focused view of the landscape, specializing in a single concept to identify practical, previously unexplored solutions.

Relevant Research Strategies

We were then encouraged to reflect upon our own research instincts and understand the degree to which we adopt each style. With this in mind, Prof. Brown introduced valuable strategies for identifying novel and worthwhile research avenues. Unifiers can look under the hood of existing solutions, then build bridges across disciplines to identify alternative concepts that can be reconstructed and reapplied in the given problem domain. Diversifiers, on the other hand, should adopt a data-centric point of view, challenging existing assumptions and, in doing so, altering their mindset to approach tasks from unconventional angles.

This fascinating exploration into the world of Unifiers and Diversifiers offered much food for thought, providing students with practical insights that can be applied to our broad research methodologies, as well as our day-to-day studies.

Research Skills in Interactive AI

After a short break, a few familiar faces delved deeper into specific research skills relevant to the three core components of the IAI CDT: Data-Driven AI, Knowledge-Driven AI, and Interactive AI.

Data-Driven AI

Professor Peter Flach began his talk by reframing data-driven research as a “design science”: one must analyse a problem, design a solution, and build an artefact accordingly. The emphasis of the research process thus becomes creativity; researchers should approach problems by identifying novel perspectives and cultivating original solutions – perhaps by challenging some of the underlying assumptions made by existing methods. Peter proceeded to highlight the importance of the evaluation process in Machine Learning (ML) research, introducing some key dos and don’ts to guide scientific practice: DO formulate a hypothesis, DO expect an onerous debugging process, and DO prepare for mixed initial results. DON’T use too many evaluation metrics – select an appropriate metric for your hypothesis and stick with it. And DON’T design the evaluation to favour one method over another; bias has no place in the evaluation process – “it is not the Olympic Games of ML”.

Knowledge-Based AI

Next, Dr. Oliver Ray covered Knowledge-Based AI, describing it as the bridge between weak and strong AI. He emphasized that knowledge-based AI is the backbone for building ethical models, permitting interpretability, explainability, and, perhaps most pertinently, interactivity. Oliver framed the talk in the context of the hypothetico-deductive model, a description of scientific method in which we formulate a falsifiable hypothesis and then use it to explore why some outcome is not as expected.

Interactive AI

Finally, Dr. Paul Marshall took listeners on a whistle-stop tour of research methods in Interactive AI, focusing on scientific methods adopted by the field of Human-Computer Interaction (HCI). He pointed students towards formal research processes that have had success in HCI. Verplank’s Spiral, for example, takes researchers from ‘Hunch’ to ‘Hack’, guiding a path from idea, through design and prototype, all the way to a well-researched solution or artefact. Such practices are covered in more detail during a core module of the IAI training year: ‘Interactive Design’.

In all, this was a useful and engaging workshop that introduced a diverse set of research practices and perspectives that will prove invaluable during the PhD process.

January Research Skills Event Review: Day 1

This review is written by CDT Student Isabella Degen, @isabelladegen.

The first day of the January Research Skills event was about academic web presence. On the agenda were:

  • a presentation on academic blogging and social media by Gavin
  • a group discussion about our experiences
  • a hackathon, organised by Benjamin and Tashi, to extend an authoring platform that makes it easy to publish academic content on the web

Academic web presence

In his talk, Gavin shared his experience of blogging and tweeting about his research. His web presence is driven by his passion for writing.

I particularly like Gavin’s practice of writing a thread on Twitter for each of his academic papers. I think summarising a complex paper into a few approachable tweets helps to focus on the most important points of the work and provides clarity.

Hackathon

For the hackathon we looked at an authoring platform that can be used to easily publish our work on the Centre for Doctoral Training’s website. The aim of the website is to be a place where people internal and external to the CDT can explore what we are all working on.

The homepage-dev codebase served as a starting point. It uses Jekyll as a static site generator. A blog post is written as a markdown file, and can include other online content like PDFs, videos, Jupyter notebooks, Reveal.js presentations, etc. through a Front Matter template. Uploading the markdown file to an online code repository triggers the publishing workflow.
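
As an illustration, the top of such a markdown file might look like this (the field names here are my assumption of a typical Jekyll front matter, not necessarily the exact template homepage-dev uses):

```markdown
---
layout: post
title: "My first CDT blog post"
author: isabelladegen
tags: [research-skills, blogging]
---

The body of the post is ordinary markdown. Embedded content such as PDFs,
videos or Jupyter notebooks is pulled in through the template's include
mechanism rather than copied into the file.
```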

It only took us a few minutes to get a github.io page started using this setup. We didn’t extend the workflow beyond being able to write our own blogs using what’s already been set up.

At the end we discussed using such a workflow to avoid repeating the same content for different purposes over and over again. The idea is to apply the software development principle of “DRY” (“don’t repeat yourself”) to written content, graphs, and presentations: a workflow that keeps all communications about the same research up to date. You can read more about it in: You only Write Thrice.

Takeaway

The event got me thinking about having a web presence dedicated to my research. I’m inspired by the prospect of sharing clear and concise pieces of my research, and by how doing so could in return bring a lot of clarity to my work.

If you are somebody who reads or writes about research on platforms like Twitter, LinkedIn or in your own blog I’d love to hear about your experiences.

BIAS Day 1 Review: ‘Interactive AI’

This review of the 1st day of the BIAS event, ‘Interactive AI’, is written by CDT Student Vanessa Hanschke.

The Bristol Interactive AI Summer School (BIAS) was opened with the topic of ‘Interactive AI’, congruent with the name of the hosting CDT. Three speakers presented three different perspectives on human-AI interactions.

Dr. Martin Porcheron from Swansea University started with his talk “Studying Voice Interfaces in the Home”, looking at AI in one of the most private of all everyday contexts: smart speakers placed in family homes. Using an ethnomethodological approach, Dr. Porcheron and his collaborators recorded and analysed snippets of family interactions with an Amazon Echo. They used a purpose-built device to record conversations before and after activating Alexa. While revealing how the interactions with the conversational AI were embedded in the life of the home, this talk was a great reminder of how messy real life may be compared to the clean input-output expectations AI research can sometimes set. The study was also a good example of the challenge of designing research in a personal space, while respecting the privacy of the research subjects.

Taking a more industrial view of human-AI interactions, Dr. Alison Smith-Renner from Dataminr followed with her talk “Designing for the Human-in-the-Loop: Transparency and Control in Interactive ML”. How can people collaborate with an ML (Machine Learning) model to achieve the best outcome? Dr. Smith-Renner used topic modelling to understand the human-in-the-loop problem with respect to two aspects: (1) transparency, i.e. methods for explaining ML models and their results to humans, and (2) control, i.e. how users can provide feedback to systems. In her work, she looks at how users are affected by the different ways ML can apply their feedback, and whether model updates are consistent with the behaviour the users anticipate. I found particularly interesting the different expectations the participants of her study had of ML, and how the users’ topic expertise influenced how much control they wanted over a model.

Finally, Prof. Ben Shneiderman from the University of Maryland concluded with his session titled “Human-Centered AI: A New Synthesis”, giving a broader view of where AI should be heading by building a bridge between HCI (Human-Computer Interaction) and AI. For the question of how AI can be built in a way that enhances people, Prof. Shneiderman presented three answers: the HCAI framework, design metaphors, and governance structures, which are featured in his recently published book. Hinting towards day 3’s topic of responsible AI, Prof. Shneiderman drew a compelling comparison between safety in the automobile industry and responsible AI. While unlimited innovation is often used as an excuse for a deregulated industry, regulation demanding higher safety in cars led to an explosion of innovation in safety belts and air bags that the automobile industry is now proud of. The same can be observed with the right to explanation in GDPR and the ensuing innovation in explainable AI. At the end of the talk, Prof. Shneiderman called on AI researchers to create a future that serves humans, is sustainable, and “makes the world warmer and kinder”.

It was an inspiring afternoon for anyone interested in the intersection of humans and AI, especially for researchers like me who are trying to understand how we should design interfaces and interactions so that we, as humans, can gain the most benefit from powerful AI systems.

My Experience of Being a Student Ambassador

This week’s blog post is written by CDT Student Grant Stevens

Widening participation involves supporting prospective students from underrepresented backgrounds in accessing university. These students include, but are not limited to, those:

  • from low-income backgrounds and low socioeconomic groups
  • who are the first in their generation to consider higher education
  • who attend schools and colleges where performance is below the national average
  • who are care experienced
  • who have a disability
  • from underrepresented ethnic backgrounds.

Due to my background and school, I matched the widening participation criteria for universities and was eligible for some fantastic opportunities, without which I would not be in the position I am today.

I was able to attend a residential summer school in 2012 hosted by the University of Bristol. We were provided with many taster sessions for a wide variety of courses on offer from the university. I had such a good time that I applied the year after to attend the Sutton Trust Summer School, which was also held at Bristol uni.

A bonus of these opportunities was that those who took part were provided with a contextual offer (up to two grades below the standard entry requirement for their course), as well as a guaranteed offer (or guaranteed interview if the course required it). These types of provisions are essential to ensure fair access for those who may be disadvantaged by factors outside of their control. In my case, the reduced offer was vital as my final A-Level grades would have led to me missing out on the standard offer.

Although I enjoyed the taster sessions and loved the city, the conversations I had with the student ambassadors had the most impact on me and my aspirations for university. Hearing about their experiences and what they were working on at university was incredibly inspiring.

Due to how impactful it had been hearing from current students, I signed up to be a student ambassador myself when I arrived at Bristol. It’s a job that I have found very enjoyable and extremely rewarding. I have been very fortunate in having the opportunity to interact with so many people from many different backgrounds.

I am now entering my 7th year as a student ambassador for the widening participation and outreach team. I still really enjoy my job, and throughout this time, I have found that working on summer schools always turns out to be the highlight of my year. I’m not sure whether this is because I know first-hand the impact being a participant on these programmes can have or because over the week, you can really see the students coming out of their shells and realising that they’re more than capable of coming to a university like Bristol.

Being in this role for so long has also made me realise how much of an impact the pandemic can have on these types of programmes. I was relieved that at Bristol, many of these schemes have been able to continue in an online form. The option of 100% online programmes has its benefits but also its limitations. It allows us to expand our audience massively as we no longer have space constraints. However, Zoom calls cannot replace physically visiting the university or exploring the city when choosing where to study. That’s why hearing from current students about their course and what to expect at university is more important than ever. For this reason, I have started to branch out to help with schemes outside of the university. I have presented talks for schools through the STEM Ambassador programme; I recently gave a talk to over 100 sixth formers during the Engineering Development Trust’s Insight into University course, and I also look forward to working with the Sutton Trust to engage with students outside of just Bristol.

In the last few years since starting my PhD, my message to students has changed a little. It has often been about “what computer science isn’t”: clearing up misconceptions about what the subject at uni entails. That is still part of my talks; however, now I make sure to put in a section on how I got to where I am today. It wasn’t plain sailing, and definitely not how I would have imagined a “PhD student’s” journey would go: from retaking a whole year in sixth form to having a very poor track record with exams at undergrad. I think it’s really important to let students know that regardless of their background, and even when things really don’t go to plan, it’s still possible to go on to do something big like a PhD. That is something I would’ve loved to hear back in school, so hopefully, it’s useful for those thinking (and potentially worrying) about uni now.

Some of the best people I’ve met and the best memories I’ve had at university have come from my student ambassador events. It’s something I feel very passionate about and find very enjoyable and rewarding. I have been very fortunate with the opportunities that have been available to me and am incredibly grateful as, without them, I wouldn’t be studying at Bristol, let alone be doing a PhD. By being a part of these projects, I hope I can inspire the next set of applicants in the same way my ambassadors inspired me.

Neglected Aspects of the COVID-19 Pandemic

This week’s post is written by IAI CDT student Gavin Leech.

I recently worked on two papers looking at neglected aspects of the COVID-19 pandemic. I learned more than I wanted to know about epidemiology.

The first: how much do masks do?

There were a lot of confusing results about masks last year.
We know that proper masks worn properly protect people in hospitals, but zooming out and looking at the population-level effect led to very different results, from basically no effect to a huge halving of cases.
Two problems: these were, of course, observational studies, since we don’t run experiments on the scale of millions. (Or not intentionally anyway.) So there’s always a risk of missing some key factor and inferring the completely wrong thing.
And there wasn’t much data on the number of people actually wearing masks, so we tended to use the timing of governments making it mandatory to wear masks, assuming that this caused the transition to wearing behaviour.
It turns out that the last assumption is mostly false: across the world, people started to wear masks before governments told them to. (There are exceptions, like Germany.) The correlation between mandates and wearing was about 0.32. So mask mandate data provide weak evidence about the effects of mass mask-wearing, and past results are in question.
We use self-reported mask-wearing instead, drawing on the largest survey of mask wearing (n = 20 million, stratified random sampling), and obtain our effect estimates from 92 regions across 6 continents. We use the same model to infer the effect of government mandates to wear masks and the effect of self-reported wearing. We do this by linking confirmed case numbers to the level of wearing or the presence of a government mandate. The model is Bayesian (using past estimates as a starting point) and hierarchical (composed of per-region submodels).
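
Schematically, the link between wearing and transmission in this kind of model looks something like the following (my notation and a simplification; the paper’s exact parameterisation may differ):

$$ R_{t,r} = R_{0,r} \, (1 - e_w)^{w_{t,r}} \prod_i (1 - e_i)^{x_{i,t,r}} $$

where R_{t,r} is the reproduction number in region r at time t, w_{t,r} is the fraction of people wearing masks, e_w is the inferred mask effect (so full wearing, w = 1, yields the headline reduction below), and the product covers other interventions i with on/off indicators x_{i,t,r}.
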
For an entire population wearing masks, we infer a 25% [6%, 43%] reduction in R, the “reproduction number” or number of new cases per case.
In summer last year, given self-reported wearing levels of around 83% of the population, this cashed out into a 21% [3%, 23%] reduction in transmission due to masks.
One thing which marks us out is being obsessive about checking that this is robust – that different plausible model assumptions don’t change the result. We test 123 different assumptions about the nature of the virus, of the epidemic monitoring, and about the way that masks work. It’s heartening to see that our results don’t change much.
It was an honour to work on this with amazing epidemiologists and computer scientists. But I’m looking forward to thinking about AI again, just as we look forward to hearing the word “COVID” for the last time.

The second: how much does winter do?

We also look at seasonality: the annual cycle in virus transmission. One bitter argument you heard a lot in 2020 was about whether we’d need lockdown in the summer, since you expect respiratory infections to fall a lot in the middle months.

We note that the important models of what works against COVID fail to account for this, so we map out the dense causal web involved.

This is a nasty inference task, and data is lacking for most links. So instead, we try to directly infer a single seasonality variable.
It looks like COVID spreads 42% less [25% – 53%, 95% CI] from the peak of winter to the peak of summer.
Adding this variable improves two of the cutting-edge models of policy effects (as judged by correcting bias in their noise terms).
One interesting side-result: we infer the peak of winter rather than hard-coding it (we set it to the day with the most inferred spread), and it turns out to be the 1st of January! This is probably a coincidence, but the Gregorian calendar we use was also learned from data (astronomical data)…
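
To make the single seasonality variable concrete, a common way to write such a multiplier is a sinusoid with an inferred amplitude and peak (my sketch; the paper’s exact form may differ):

$$ \tilde{R}_t = R_t \left( 1 + \gamma \cos \frac{2\pi \,(t - t_{\mathrm{peak}})}{365.25} \right) $$

where gamma sets how strongly transmission swings over the year and t_peak, the day of maximal spread, is inferred from the data rather than hard-coded.
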
See also

  • Gavin Leech, Charlie Rogers-Smith, Jonas B. Sandbrink, Benedict Snodin, Robert Zinkov, Benjamin Rader, John S. Brownstein, Yarin Gal, Samir Bhatt, Mrinank Sharma, Sören Mindermann, Jan M. Brauner, Laurence Aitchison: Mask wearing in community settings reduces SARS-CoV-2 transmission
  • Tomas Gavenciak, Joshua Teperowski Monrad, Gavin Leech, Mrinank Sharma, Sören Mindermann, Jan Markus Brauner, Samir Bhatt, Jan Kulveit: Seasonal variation in SARS-CoV-2 transmission in temperate climates
  • Mrinank Sharma, Sören Mindermann, Charlie Rogers-Smith, Gavin Leech, Benedict Snodin, Janvi Ahuja, Jonas B. Sandbrink, Joshua Teperowski Monrad, George Altman, Gurpreet Dhaliwal, Lukas Finnveden, Alexander John Norman, Sebastian B. Oehm, Julia Fabienne Sandkühler, Laurence Aitchison, Tomáš Gavenčiak, Thomas Mellan, Jan Kulveit, Leonid Chindelevitch, Seth Flaxman, Yarin Gal, Swapnil Mishra, Samir Bhatt, Jan Markus Brauner: Understanding the effectiveness of government interventions against the resurgence of COVID-19 in Europe

BIAS Day 4 Review: ‘Data-Driven AI’

This review of the 4th day of the BIAS event, ‘Data-Driven AI’, is written by CDT Student Stoil Ganev.

The main focus of the final day of BIAS was Data-Driven AI. Of the four pillars of the Interactive AI CDT, the data-driven aspect tends to have a more “applied” flavour than the rest. This is for a variety of reasons, but most of them can be summed up in the statement that Data-Driven AI is the AI of the present. Most deployed AI algorithms and systems are structured around the idea of data X going in and prediction Y coming out. This paradigm is popular because it easily fits into modern computer system architectures. For all of their complexity, modern at-scale computer systems generally function like data pipelines: one part takes in a portion of data, transforms it, and passes it on to another part of the system to perform its own type of transformation. In this kind of architecture, a simple “X goes in, Y comes out” AI is easy to integrate, since it is no different from any other component.

Additionally, data is a resource that most organisations have in abundance. Every sensor reading, user interaction, or system-to-system communication can be easily tracked, recorded, and compiled into usable chunks of data. In fact, for accountability and transparency reasons, organisations are often required to record and track much of this data. As a result, most organisations are left with massive repositories of data which they are not able to fully utilise. This is why Data-Driven AI is often relied on as a straightforward, low-cost way of capitalising on these massive stores of data. This “applied” aspect of Data-Driven AI was very much present in the talks given on the last day of BIAS. Compared to the other days, the talks of the final day reflected some practical considerations with regard to AI.

The first talk was given by Professor Robert Jenssen from The Arctic University of Norway. It focused on work he had done with his students on automated monitoring of electrical power lines, more specifically how to utilise unmanned aerial vehicles (UAVs) to automatically discover anomalies in the power grid. A point he made in the talk was that the amount of time they spent on engineering efforts was several times larger than the amount spent on novel research. There was no off-the-shelf product they could use or adapt, so their system had to be written mostly from scratch. In general, this seems to be a pattern with AI systems: even if the same model is utilised, the resulting system ends up extremely tailored to its own problem and cannot be easily reused for a different one.

They ran into a similar problem with the data set. Given that the problem of monitoring power lines is rather niche, there was no directly applicable data set they could rely on. I found their solution to this problem quite clever in its simplicity. Since gathering real-world data is rather difficult, they opted to simulate their data set, using 3D modelling software to replicate the environment of the power lines. Given that most power masts sit in the middle of fields, that environment is easy to simulate. For more complicated problems such as autonomous driving, this simulation approach is not feasible: it is impossible to properly simulate human behaviour, which the AI would need to model, and there is a large variety of urban settings as well. However, for a mast sitting in a field, you can capture most of the variety by changing the texture of the ground. Additionally, this approach has advantages over real-world data. There are types of anomalies that are so rare that they might simply not be captured by the data-gathering process, or be too rare for the model to notice them. In simulation, however, it is easy to introduce any type of anomaly and ensure it has proper representation in the data set.

In terms of the architecture of the system, they opted to structure it as a pipeline of sub-tasks, with separate models for component detection, anomaly detection, etc. This piecewise approach is very sensible given that most anomalies are most likely independent of each other; see the sketch below for a rough picture. Additionally, the more specific a problem is, the easier and faster it is to train a model for it. However, this approach tends to have larger engineering overheads: due to the larger number of components, proper communication and synchronisation between them needs to be ensured and is not a given, and depending on the length of the pipeline, it might become difficult to ensure that it performs fast enough. In general, I think that the work Professor Jenssen and his students did in this project is very much representative of what deploying AI systems in the real world is like. Often your problem is so niche that there are no readily available solutions or data sets, so a majority of the work has to be done from scratch. And even if there is limited or even no need for novel AI research, a problem might still require a large amount of engineering effort to solve.
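
As a rough sketch of what such a piecewise architecture can look like in code (hypothetical names and dummy stages of my own invention, not the actual system from the talk):

```python
# A minimal sketch of a piecewise inspection pipeline (hypothetical,
# not the system presented in the talk). Each stage is its own model,
# trained on a narrow sub-task, and the stages are chained together.
from dataclasses import dataclass

@dataclass
class Detection:
    component: str   # e.g. "insulator", "conductor"
    bbox: tuple      # (x, y, w, h) in image coordinates

def detect_components(frame):
    """Stage 1: locate power-line components in a UAV frame.
    A real system would run an object detector trained on
    (possibly simulated) imagery; here we return a dummy result."""
    return [Detection("insulator", (10, 20, 32, 32))]

def detect_anomalies(frame, detection):
    """Stage 2: flag anomalies on one detected component.
    A real system might use a separate classifier per anomaly type."""
    return ["cracked_cap"] if detection.component == "insulator" else []

def inspect_frame(frame):
    """Chain the stages and collect anomaly reports for a frame."""
    report = {}
    for det in detect_components(frame):
        anomalies = detect_anomalies(frame, det)
        if anomalies:
            report[det.component] = anomalies
    return report

print(inspect_frame(frame=None))  # {'insulator': ['cracked_cap']}
```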

The second talk of the day was given by Jonas Pfeiffer, a PhD student from the Technical University of Darmstadt, who introduced us to his research on Adapters for Transformer models. Adapters are a lightweight and faster approach to fine-tuning Transformer models for different tasks. The idea is rather simple: Adapters are small layers slotted in between the Transformer layers, which are trained during fine-tuning while the Transformer layers themselves are kept fixed. While simple and straightforward, this approach appears to be rather effective.

Beyond his research on Adapters, Jonas is also one of the main contributors to AdapterHub.ml, a framework for training and sharing Adapters. This brings our focus to an important part of getting AI research out of the papers and into the real world: creating accessible and easy-to-use programming libraries. We as researchers often neglect this step or consider it to be beyond our responsibilities, and not without sensible reasons. A programming library is not just the code it contains. It requires training materials for new users, tracking of bugs and feature requests, maintaining and following a development road map, managing integrations with other libraries that are dependencies or dependers, etc. All of these aspects require significant effort from the maintainers of the library – effort that does not contribute to research output and consequently does not contribute to the criteria by which we are judged as successful scientists. As such, it is always a delight to see a researcher willing to go this extra mile to make his or her research more accessible.

The talk by Jonas also had a tutorial section where he led us through the process of fine-tuning an off-the-shelf pre-trained Transformer. The tutorial was delivered through Jupyter notebooks easily accessible from the project’s website, and within minutes we had our own working examples to dissect and experiment with. Given that Adapters and the AdapterHub.ml framework are very recent innovations, the amount and quality of documentation and training resources within this project is highly impressive. Adapters and the AdapterHub.ml framework are excellent tools that, I believe, will be useful to me in the future. As such, I am very pleased to have attended this talk and to have discovered these tools through it.
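
For a flavour of the workflow, here is a minimal sketch using AdapterHub’s adapter-transformers library (API names are as I recall them from around this time and may since have changed; the model and task names are placeholders, not what the tutorial used):

```python
# A minimal sketch of adapter-based fine-tuning with AdapterHub's
# adapter-transformers library (pip install adapter-transformers).
# Model/task names are placeholders; API details may have changed.
from transformers import AutoTokenizer, AutoModelWithHeads

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithHeads.from_pretrained(model_name)

# Insert small adapter layers between the Transformer layers and
# add a task-specific classification head.
model.add_adapter("sentiment")
model.add_classification_head("sentiment", num_labels=2)

# Freeze all pre-trained Transformer weights; only the adapter
# (and the head) receive gradient updates during fine-tuning.
model.train_adapter("sentiment")

# ...train as usual (e.g. with the Trainer API), then save just the
# adapter weights -- a few megabytes instead of a full model copy.
model.save_adapter("./sentiment-adapter", "sentiment")
```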

The final day of BIAS was an excellent wrap-up to the summer school. With its more applied focus, it showed us how the research we are conducting can be translated to the real world and how it can have an impact. We got a flavour both of what it is like to develop and deploy an AI system, and of what it is like to provide a programming library for our developed methods. These are all aspects of our research that we often neglect or overlook. Thus, this day served as a great reminder that our research is not something confined within a lab, but work that lives and breathes within the context of the world that surrounds us.

BIAS Day 3 Review: ‘Responsible AI’

This review of the 3rd day of the BIAS event, ‘Responsible AI’, is written by CDT Student Emily Vosper. 

Monday was met with a swift 9:30am start, made easier to digest with a talk by Toby Walsh titled “AI and Ethics: Why All the Fuss?”. This talk, and the subsequent discussion, covered the thought-provoking topic of fairness within AI. The main lesson considered whether we actually need new ethical principles to govern AI, or whether we can take inspiration from well-established areas such as medicine. Medicine works by four key principles – beneficence, non-maleficence, autonomy, and justice – and AI brings some new challenges to this framework, including autonomy, decision making, and culpability. Some interesting discussions were had around reproducing historical biases when using autonomous systems, for example within the justice system in predictive policing or parole decision making (COMPAS).

The second talk of the day was given by Nirav Ajmeri and Pradeep Murukannaiah on ethics in sociotechnical systems. They broke down the definition of ethics as distinguishing between right and wrong, a complex problem full of ethical dilemmas. One example is Les Misérables, where the protagonist steals a loaf of bread: stealing is obviously bad, but the bread is being stolen to feed a child, and therefore the notion of right and wrong becomes non-trivial. Nirav and Pradeep treated ethics as a multiagent concern, with values brought in as the building blocks of ethics. Using this values-based approach, the notion of right and wrong can be more easily broken down in a domain context; i.e., by discovering the main values and social norms of a certain domain, rules can be drawn up to better understand how to reach a goal within that domain. After the talk there were some thought-provoking discussions surrounding how to facilitate reasoning at both the individual and the societal level, and how to satisfy values such as privacy.

In the afternoon session, Kacper Sokol ran a practical machine learning explainability session where he introduced the concept of surrogate explainers – explainers that are not model-specific and can therefore be used in many applications. The key takeaways were that such diagnostic tools only become explainers when their properties and outputs are well understood, and that explainers are not monolithic entities – they are complex, with many parameters, and need to be tailor-made or configured for the application at hand.

The practical involved trying to break the explainer. The idea was to move the meaningful splits of the explainer so that they were impure, i.e. they contained many different classes from the black-box model’s predictions. Moving the splits means the explainer doesn’t capture the black-box model as well, as a mixture of points from several class predictions has been introduced to the explainer. Based on these insights it would be possible to manipulate the explainer with very impure hyper-rectangles. We found this was even more likely with the logistic regression model, as it has diagonal decision boundaries while the explainer has horizontal and vertical meaningful splits.
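
As a toy illustration of the general idea behind surrogate explainers (my own minimal sketch, not the code from the session): fit an interpretable model to the black box’s predictions, then check how faithfully it mimics them.

```python
# A minimal surrogate-explainer sketch (assumed setup, not the session's
# actual code): train an interpretable decision tree to mimic the
# predictions of a "black box" model, then measure its fidelity.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# The "black box" we want to explain (logistic regression has diagonal
# decision boundaries, which axis-aligned tree splits approximate poorly).
black_box = LogisticRegression(max_iter=1000).fit(X, y)

# The surrogate is trained on the black box's *predictions*, not the
# true labels: it explains the model, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box. Impure
# splits (mixing several predicted classes) lower this agreement.
print("fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate))
```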

Welcome to the first Interactive AI Blog Post

Welcome to the Interactive AI CDT Blog. This Centre for Doctoral Training (CDT) has been funded by UKRI to train the next generation of innovators in human-in-the-loop AI systems, enabling them to responsibly solve societally important problems.

The overarching aim of this CDT in Interactive Artificial Intelligence is to establish an internationally leading Centre that will train the next generations of innovators in responsible, data and knowledge-driven human-in-the-loop AI systems. The CDT offers an innovative cohort-based training experience that will equip students with the skills to design and implement complex AI pipelines solving societally important problems in responsible ways.
The specific objectives are:
  • to recruit highly motivated postgraduate students from computer science, mathematics, and cognate disciplines, and create an intensive and complementary training experience which comprises a bespoke taught programme in the first year, delivered within a distinctive cohort training environment;
  • to set the highest standards of responsible innovation with particular emphasis on AI-specific issues such as transparency, explainability, accountability, fairness, trustworthiness and privacy;
  • to promote a human-in-the-loop (HITL) ethos that emphasises technology design for and with different groups of end users;
  • to be an exemplar for technology-supported cohort-based training, using the latest developments and insights in flipped classroom teaching and teamwork, open software development tools and platforms, and cross-cohort mentoring;
  • to train “AI ambassadors” who, through their deep understanding of the strengths and limitations of artificial intelligence, can contribute to the public debate on AI and its relationship to society;
  • to build long-term relationships between the CDT and its industrial partners, enhancing the cohort provision through industry placements, co-supervised projects and research collaboration;
  • to support the development of entrepreneurial skills and intellectual property;
  • to contribute to the “democratisation of AI” and reduce inequality within the sector by promoting take-up of relevant AI techniques in SMEs;
  • to consolidate and expand existing expertise at Bristol and become a leading international research training centre with research and training relationships with global partners.