Discriminatory Algorithms
AI chatbots that produce discriminatory, hateful, and racist output, and are trained on data collected without user consent.
The South Korean AI chatbot service ‘Lee Luda’ used hate speech towards sexual minorities and disabled people and made racist remarks in conversations with its users. The chatbot is unethical because, even though the harmful output was unintentional, it caused harm to individuals and specific communities by failing to be inclusive. This outcome is unsurprising given the opaque design process behind the service, which was not ethically framed from the start. First, the training data was obtained inappropriately: the company built the chatbot on data collected from its users without their consent. Moreover, after the service was released, the company did not properly monitor or control the chatbot. Luda did not hesitate to express hatred towards certain groups, and the service provider remained unaware of the hateful content its users were encountering. This indicates a lack of accountability and responsibility.
Seen in:
South Korea
Keywords:
AI
Submitted by:
Dr Boyeun Lee
Post-doctoral Research Fellow, The University of Exeter Business School
United Kingdom
Submitted on:
April 15, 2023