AI Biases
IBM's facial recognition technology was found to rely on algorithms biased against darker skin tones. This example highlights the implications of such biases and the need to address them to ensure fair and equitable outcomes in facial recognition systems.
One notable example of unethical design is IBM's facial recognition technology, which was shown to rely on algorithms biased against darker skin tones. A 2018 study conducted by the MIT Media Lab (the Gender Shades audit) found that commercial gender classification systems, including IBM's, had higher error rates when identifying the gender of darker-skinned females than of lighter-skinned males. Error rates were highest for darker-skinned females, lower for lighter-skinned females and darker-skinned males, and lowest for lighter-skinned males, indicating a clear bias in the algorithm. Following these results, IBM's facial recognition software came under scrutiny and criticism. The findings highlighted the potential dangers and implications of using such technology in real-world applications, particularly in law enforcement and surveillance, where misidentifications based on race can lead to severe consequences. In 2020, IBM announced that it would no longer offer, develop, or research general-purpose facial recognition or analysis software. The decision was made to prevent potential misuse of the technology and to address concerns about bias and privacy.
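The kind of evaluation that exposed this bias is, at its core, a disaggregated error-rate audit: instead of reporting one overall accuracy figure, the classifier's mistakes are counted separately for each intersection of skin tone and gender. The short Python sketch below illustrates that idea only; the records, labels, and resulting numbers in it are hypothetical placeholders and are not figures from the study or from IBM's system.

    # Illustrative sketch of a disaggregated error-rate audit.
    # All records below are hypothetical placeholders, not real study data.
    from collections import defaultdict

    # Each record: (skin_tone, actual_gender, predicted_gender)
    predictions = [
        ("darker", "female", "male"),
        ("darker", "female", "female"),
        ("darker", "male", "male"),
        ("lighter", "female", "female"),
        ("lighter", "male", "male"),
        ("lighter", "male", "male"),
    ]

    totals = defaultdict(int)
    errors = defaultdict(int)
    for skin_tone, actual, predicted in predictions:
        group = (skin_tone, actual)
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1

    # Report an error rate per intersectional subgroup
    # rather than a single aggregate figure.
    for group in sorted(totals):
        rate = errors[group] / totals[group]
        print(f"{group[0]} {group[1]}: error rate {rate:.1%}")

Reporting per-subgroup rates in this way makes disparities visible that a single aggregate accuracy number would hide.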
Author's reflection:
This example underscores the importance of thoroughly evaluating and addressing biases in facial recognition technology to ensure fair and equitable outcomes for all individuals, regardless of their skin tone.
Keywords:
AI, Bias, Cultural Appropriation, Skin
Submitted by
Samantha Osys
Submitted on
May 21, 2023