BIAS Day 1 Review: ‘Interactive AI’

This review of the first day of the BIAS event, ‘Interactive AI’, was written by CDT student Vanessa Hanschke.

The Bristol Interactive AI Summer School (BIAS) opened with the topic of ‘Interactive AI’, in keeping with the name of the hosting CDT. Three speakers presented three different perspectives on human-AI interaction.

Dr. Martin Porcheron from Swansea University started with his talk “Studying Voice Interfaces in the Home”, looking at AI in one of the most private of all everyday contexts: smart speakers placed in family homes. Using an ethnomethodological approach, Dr. Porcheron and his collaborators recorded and analysed snippets of family interactions with an Amazon Echo, using a purpose-built device to capture conversations before and after Alexa was activated. Beyond revealing how interactions with the conversational AI were embedded in the life of the home, the talk was a great reminder of how messy real life can be compared with the clean input-output expectations AI research sometimes sets. The study was also a good example of the challenge of designing research in a personal space while respecting the privacy of the research subjects.

Taking a more industrial view of human-AI interaction, Dr. Alison Smith-Renner from Dataminr followed with her talk “Designing for the Human-in-the-Loop: Transparency and Control in Interactive ML”. How can people collaborate with a machine learning (ML) model to achieve the best outcome? Dr. Smith-Renner used topic modelling to examine the human-in-the-loop problem with respect to two aspects: (1) transparency, i.e. methods for explaining ML models and their results to humans, and (2) control, i.e. how users can provide feedback to systems. Her work looks at how users are affected by the different ways ML can apply their feedback, and whether model updates are consistent with the behaviour users anticipate. I found the study participants’ varied expectations of ML particularly interesting, as well as how the users’ topic expertise influenced how much control they wanted over a model.

Finally, Prof. Ben Shneiderman from the University of Maryland concluded with his session “Human-Centered AI: A New Synthesis”, giving a broader view of where AI should be heading by building a bridge between HCI (Human-Computer Interaction) and AI. To the question of how AI can be built in a way that enhances people, Prof. Shneiderman presented three answers: the HCAI framework, design metaphors and governance structures, all featured in his recently published book. Hinting at day 4’s topic of responsible AI, Prof. Shneiderman drew a compelling comparison between safety in the automobile industry and responsible AI. While unlimited innovation is often used as an argument for a deregulated industry, regulation demanding safer cars led to an explosion of innovation in seat belts and airbags that the automobile industry is now proud of. The same can be observed with the GDPR’s right to explanation and the ensuing innovation in explainable AI. At the end of the talk, Prof. Shneiderman called on AI researchers to create a future that serves humans, is sustainable, and “makes the world warmer and kinder”.

It was an inspiring afternoon for anyone interested in the intersection of humans and AI, especially for researchers like me who are trying to understand how we should design interfaces and interactions so that we, as humans, gain the most benefit from powerful AI systems.
