AI trained to give ethical advice mirrors racist biases in society

‘Ask Delphi’ project developers say AI merely guesses what average Americans will do in moral situations


In a troubling technological development, an artificial intelligence (AI) bot created to give ethical advice has started reproducing the racial and gender biases prevalent among humans.

The software, called Ask Delphi, was developed by the Allen Institute for AI in the US. It allows users to input a question based on their socio-cultural dilemmas, and the software returns an ethical judgement, which has often perpetuated racist stereotypes. For instance, if a user asked whether it was safe if a white man followed them at night, Ask Delphi would say there was no threat. On the flipside, if asked the same about a black man, the software would say it was ‘concerning’.

Media outlet Futurism noted that rejigging questions can also produce desired outcomes. For instance, a user could ask whether it was okay to play music at 3:00am while their roommate was asleep, to which Ask Delphi would say it was unacceptable. But if the same question was posed with the addition that the music made the user happy, the software would say it was okay to play music while the roommate slept.

University of Pittsburgh postdoctoral fellow Dr Brett Karlan, who researches cognitive science and AI, told Futurism that, to the credit of Ask Delphi’s developers, they did list in their paper the possible biases that could creep into the software. Nevertheless, he said, the AI was not just being deployed to understand words; the project had a moral dimension, which rendered it vulnerable to potential controversy.

The problem, Karlan said, was that the average user, who was not versed in AI, could easily mistake Ask Delphi’s platform for a moral authority in critical social situations.

Liwei Jiang, a PhD student and co-author of the paper on Ask Delphi, told Futurism that the platform was meant to demonstrate the differences between human and machine abilities to process ethical conundrums. Jiang added that it was a ‘research prototype’ meant to explore the possible use of AI in ethical situations, and that it was discomfiting at present because it reflected many of the real biases in society.

Ask Delphi was developed using social media threads with ‘unfiltered internet data’ where actual humans furnish advice to inquisitive internet users. These included subreddit threads like Am I the A**hole and Confessions, and an advice column titled Dear Abby. It must be added that only the scenarios were taken from these threads, not the answers themselves. Ask Delphi was trained to curate responses using Amazon’s crowdsourcing service Mechanical Turk.

The project website has also stated that the endeavour is just an experiment and should not be taken at face value in actual moral situations. Ask Delphi, said Jiang, simply takes a stab at what the average American thinks in any given situation.
