This blog post is written/edited by CDT Students Jonathan Erkine and Jack Hanslope
Following from a great keynote by Liz Sonenberg, Dr Nirav Ajmeri presented a discussion on Ethics in Socio-Technical Systems (STS).
As is common practice in discussions on AI, we began by looking inwards to what kind of human behaviour we are trying to replicate – what aspect of intelligence have we defined as our objective? In this case it was the ability of machines to make ethical decisions. Dr. Ajmeri referred to Kantian and Aristotelian ethical frameworks which describe moral duty and virtuous behaviour to establish an ethical baseline, which led to the first main takeaway of the discussion:
We must be capable of expressing how humanity defines, quantifies, and measures ethics before discussing how we might synthesise ethical behaviour.
Dr. Ajmeri clarified that ethical systems must be robust to situations where there are “no good choices” – that is, when even a human might struggle to see the most ethical path forwards. Keen to move away from the trolley problem, Nirav described a group of friends who can’t agree on a restaurant for their evening meal, drawing on the concepts of individual utility, rationality, and fairness to explain why a purely mathematical treatment might fail to resolve the problem.
The mathematical solution might be a restaurant that none of them enjoy, and it could remain the same restaurant for every future meal they attend. From this example, the motivation behind well-defined ethics in socio-technical systems becomes clear: computers lack the ability to apply emotion when reasoning about the impact of their decisions, leading to the second lesson we took from this talk:
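The restaurant example can be made concrete with a small sketch. The friends’ names, the restaurants, and the utility scores below are all invented for illustration; the point is only that standard aggregation rules can converge on a compromise that is nobody’s favourite.

```python
# Hypothetical utilities: three friends each score four restaurants.
# All names and numbers are invented for this illustration.
utilities = {
    "Alice": {"pizza": 9, "sushi": 1, "curry": 0, "tapas": 4},
    "Bob":   {"pizza": 0, "sushi": 9, "curry": 1, "tapas": 4},
    "Cara":  {"pizza": 1, "sushi": 0, "curry": 9, "tapas": 4},
}
restaurants = ["pizza", "sushi", "curry", "tapas"]

def total_utility(r):
    """Sum of everyone's utility for restaurant r (utilitarian welfare)."""
    return sum(person[r] for person in utilities.values())

def worst_utility(r):
    """The worst-off friend's utility for restaurant r (egalitarian welfare)."""
    return min(person[r] for person in utilities.values())

# Utilitarian choice: maximise total utility.
utilitarian = max(restaurants, key=total_utility)

# Egalitarian (maximin) choice: maximise the minimum utility.
egalitarian = max(restaurants, key=worst_utility)

print(utilitarian, egalitarian)  # both pick "tapas"
```

Both rules select tapas, which no friend ranks first – a “rational” and even “fair” outcome that leaves everyone lukewarm, and one that a single-shot optimiser would repeat at every future meal unless fairness across repeated decisions is modelled explicitly.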
Ethical integration of AI into society necessitates the design of socio-technical systems which can artificially navigate “ethical gridlock”.
Dr. Ajmeri then described the potential of multiagent systems research for designing ethical systems by incorporating agents’ value preferences (ethical requirements) and associated negotiation techniques. This led to a good debate on the merits and flaws of attempting to incorporate emotion into socio-technical systems, with questions such as:
Can the concept of emotion be heuristically defined to enable pseudo-emotional decision making in circumstances when there is no clear virtuous outcome?
Is any attempt to incorporate synthetic emotion inherently deceitful?
These questions were interesting precisely because they couldn’t be answered, but the methods described by Nirav did, in the authors’ opinion, describe a system which could achieve what was required of it – to handle ethically challenging situations in a fair manner.
What must come next is the validation of these systems; Nirav suggested that the automated handling of information with respect to the (now not-so-recent) GDPR regulations would provide a good test bed, and prompted the audience to consider what such an implementation might involve.
The end of this talk marked the halfway point of the BIAS summer school, with plenty of great talks and discussions still to come. We would like to thank Dr. Nirav Ajmeri for this discussion, which sits comfortably in the wheelhouse of problems which the Interactive AI CDT has set out to solve.