As discussed in my chapter ‘The Face in Digital Space’ in the book The Culture of Photography in Public Space, the human face first entered abstract matrices of comparison in the late eighteenth century with the pioneering physiognomist Johann Kaspar Lavater, who placed the face in a psychological hierarchy using either zoological analogies or biometric algorithms. A coda to my analysis is provided by the recent news reports concerning Jacky Alciné, an early adopter of Google Photos, which automatically placed photos of him and his African-American friend in a folder called ‘Gorillas’. It is not possible for an algorithm to be in and of itself racist, but Google nonetheless scrambled to roll out a fix within two hours. However, Google’s first fix led to yet more human faces being categorised as gorillas, so the company had to remove the word ‘gorilla’ as a category temporarily while it worked on more nuanced face recognition algorithms. These accidents point to how ‘live’ and ‘hot’ pseudo-Darwinian narratives still are in popular race discourse, such that Google quickly confessed to being ‘appalled’ by the unintended result of its algorithmic facial analysis. They also point to how easily automatic tagging and profiling systems can overreach themselves in the newly fluid context of face recognition. The face is never neutral; mathematical error therefore quickly transcodes and multiplies itself into linguistic disaster.