BIAS 22 – Review of day 2 keynote – Prof. Liz Sonenberg: “Imperfectly rational, rationally imperfect, or perfectly irrational…”

Imperfectly rational, rationally imperfect, or perfectly irrational: challenges for human-centered AI, a keynote by Prof. Liz Sonenberg

This blog post was written and edited by CDT students Isabella Degen and Oliver Deane.

Liz opened the second day of BIAS 22 with a thought-provoking and entertaining keynote about automated decision-making aids. She demonstrated how we humans make perfectly irrational decisions and spoke about the implications of using Explainable Artificial Intelligence (XAI) for better decision-making. Her talk drew on a large body of research spanning psychology, mathematics, and computer science, for which she kindly provides the references here: https://tinyurl.com/4njp563e.

Starting off, Liz presented research demonstrating how subtle influences in our lives can change the decisions we make, despite our belief that we are making them completely rationally. What we take to be rational human decision-making is in fact littered with cognitive biases. A cognitive bias is when we construct a subjective reality from a pattern we perceive, regardless of how representative that pattern is of all the available information. Anchoring is a type of cognitive bias in which a person’s decision is influenced by an anchor, such as a random number they are shown, even when they know the anchor has nothing to do with their decision. An example Liz shared is an experiment by Englich et al., who used irrelevant anchors to change experts’ decision-making. In the experiment, young judges were asked to determine the length of the sentence for a theft crime after rolling dice. Unknown to the judges, the dice were rigged: for one group they always landed on high numbers, for the other on low numbers. The judges knew that rolling dice should not influence their decision. Nevertheless, the group whose dice showed low numbers gave a sentence of 5 months on average, while the group whose dice showed high numbers gave 8 months.

Anchoring is not the only kind of cognitive bias. Human decision-making also suffers from framing bias, where the way in which data is presented affects the decision we make, and from confirmation bias, where we tend to interpret new information as confirmation of our existing beliefs without considering that we only ever observe a limited slice of the information. With these examples, Liz made us doubt how clearly and rationally we humans can make decisions.

The irrationality of humans is an interesting challenge for researchers attempting to create intelligent systems that help us make better decisions. Should we copy imperfect human rationality in intelligent agents, or should we make them more rational than humans? And what does that mean for interactions between humans and intelligent systems? Research shows that human operators need a sense of what the machine is doing in order to interact with it. From accidents such as the partial meltdown of a nuclear reactor at Three Mile Island, we can learn how important it is to design systems that do not overwhelm the human operator with information. The information presented should be just enough to enable an operator to make a high-quality decision: it should help the operator know when they can trust the decision the machine made and when to interrupt. When designing these systems, we also need to keep in mind that people suffer from biases such as automation bias. Automation bias happens when a human operator, unable to reach a decision from the information the machine provides, simply goes along with the machine’s decision, knowing that the machine is right more often than they are. Sadly, this means that a human interacting with a machine might not interrupt it at the right moment. We know that human decision-making is imperfectly rational, and while automation bias looks like an error, it is in fact a rational response to the limited information and time available to the human operator.
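As a minimal sketch of this last point (our own illustration, not from the talk; all accuracy figures below are invented assumptions), a few lines of Python show why deferring to a more reliable machine can be the rational strategy when time pressure degrades the operator’s own judgement:

```python
import random

# Toy illustration of automation bias as a rational strategy.
# Assumption: a machine with fixed accuracy, and a human whose accuracy
# drops under time pressure. The numbers are made up for this sketch.

random.seed(42)

N_TRIALS = 100_000
P_MACHINE = 0.90        # assumed machine accuracy
P_HUMAN_FULL = 0.95     # assumed human accuracy with unlimited time
P_HUMAN_RUSHED = 0.70   # assumed human accuracy under time pressure

def accuracy(p: float) -> float:
    """Fraction of trials decided correctly by an agent that is right
    with probability p on each independent trial."""
    return sum(random.random() < p for _ in range(N_TRIALS)) / N_TRIALS

print(f"machine alone:         {accuracy(P_MACHINE):.3f}")
print(f"human, unlimited time: {accuracy(P_HUMAN_FULL):.3f}")
print(f"human, time-pressured: {accuracy(P_HUMAN_RUSHED):.3f}")

# Under these assumptions, the time-pressured human makes more correct
# decisions by deferring to the machine (~0.90 vs ~0.70), even though a
# human with unlimited time would outperform it. What looks like lazy
# "automation bias" is a sensible policy given limited time and information.
```

The catch, as the paragraph above notes, is that this policy also means the operator may fail to interrupt the machine on exactly the trials where the machine is wrong.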

One promise of XAI is to use explanations to counteract various cognitive biases and thereby help a human operator make better decisions together with an intelligent system. Liz drew a thought-provoking analogy to the science of magic. Magicians exploit our limited memory and powers of observation to manipulate our attention and feelings, deceive us, and make the impossible appear possible. The magician knows that the audience is trying to spot how the trick works; the audience, in turn, knows that the magician is trying to deceive them. Magicians understand their audience well: they know how people actually perceive and reason, and they exploit the limited cognitive resources we have. Like magicians, human-centered AI systems ought to anticipate how perfectly irrationally we humans make decisions, so that they can counteract our biases and enable us to make better decisions.
