This blog post is written by CDT Student Henry Addison
BIAS 22 – Human stupidity about artificial intelligence: My thoughts on Professor James Ladyman’s talk “Attributing cognitive and affective states to AI systems”, Tuesday 6th September 2022
The speaker arrives, sweaty, excited after his dash across town. The AI never stops working, so perhaps he cannot either. He is a doom-monger. How are the lying liars in charge lying to you? When those people-in-charge run large tech companies, how do they exploit our tendency both to miss and to over-attribute cognitive and affective states and agency in AI in order to manipulate us?
AI is a field that seeks to decouple capabilities that in humans are integrated. This can surprise the humans interacting with such systems. In adult humans, intelligence is wrapped up with sentience, and autonomy is wrapped up with responsibility. Not so the chess robot – very good at chess (which for a human requires intelligence) but not sentient – nor the guided targeting system, for which the responsibility of target-picking is left to a human.
Humans are overkeen to believe an AI cares, understands, is autonomous. These words have many meanings to humans, allowing the decoupling of capabilities to confuse us. "Who cares for granny?" This may be a request for the nurse (human or robot) who assists your grandmother in and out of the bath. Or it may be a despairing plea by a parent trying to get their children to help prepare a birthday party. If an AI is autonomous, is it a moral agent responsible for what it does?
The flip side of the coin is the capabilities that we do not attribute to an AI, perhaps because they are capabilities humans do not have. We lose sight of important things. Like how the machines notice and store away far more than we expect, and then use these data to serve us ads, recommend us films, deny us loans, guide us to romantic partners, get us hooked on the angry ramblings of narcissists, lock us up.
AI is shit. AI is ruining us. AI is enabling a descent into fascism. But presumably there is enough hope for him to bother building up such a sweat to come and talk to us. We must do more philosophizing so more people can understand AI and avoid unconscious manipulation by it. The data business models that take advantage of us are our fault: they are human creations, not technical necessities, but that means we can change them.
Then again how was the speaker lying to me? How is your lying liar of an author lying to you?