For better or worse, facial recognition technology is everywhere. It’s used to clear or convict suspected criminals, board passengers onto planes, and even hire new employees. Friendly robots are being built to recognise our faces, while quick face scans can now unlock our smartphones.
There’s one big problem: the algorithms behind these technologies are inherently discriminatory.
That’s why Joy Buolamwini, a computer scientist at the Massachusetts Institute of Technology, founded the 'Algorithmic Justice League' – a movement that aims to fight this kind of bias by advocating for more coding diversity. She ran into the bias herself when she sat in front of a computer that could recognise her colleagues’ faces as faces – but for Buolamwini, a Ghanaian-American, it didn’t work at all. That’s because the sets of human faces used to train these kinds of programmes are largely homogeneous, so the resulting systems reliably recognise only certain races, hairstyles or features.
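The mechanism can be illustrated with a deliberately toy sketch. Everything below is a synthetic assumption for illustration – the made-up 2-D "face" features, the centroid-distance detector and its threshold are not how any real recognition system works – but it shows how a detector tuned to an imbalanced training set performs well on the over-represented group and poorly on the under-represented one:

```python
# Toy illustration (entirely synthetic; not any real system's method):
# "faces" are 2-D feature vectors, and the "detector" accepts a face if
# it lies near the centroid of its training data.
import math
import random

random.seed(42)

def sample_group(centre, n):
    """Draw n synthetic feature vectors scattered around a group's centre."""
    return [(random.gauss(centre[0], 1.0), random.gauss(centre[1], 1.0))
            for _ in range(n)]

# Imbalanced training data: 200 examples from group A, only 5 from group B.
train = sample_group((0.0, 0.0), 200) + sample_group((5.0, 5.0), 5)
cx = sum(x for x, _ in train) / len(train)
cy = sum(y for _, y in train) / len(train)

def is_recognised(point, radius=3.0):
    """'Recognise' a face if it falls within radius of the training centroid."""
    return math.hypot(point[0] - cx, point[1] - cy) <= radius

# Evaluate on fresh samples from each group.
rate_a = sum(is_recognised(p) for p in sample_group((0.0, 0.0), 100)) / 100
rate_b = sum(is_recognised(p) for p in sample_group((5.0, 5.0), 100)) / 100

print(f"group A recognised: {rate_a:.0%}, group B recognised: {rate_b:.0%}")
```

Because group B barely shifts the training centroid, its members land far outside the detector's acceptance region: the recognition rate for group A is near 100 per cent while group B's is near zero – an exaggerated version of the skew Buolamwini encountered.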
Similar problems have appeared before: Siri and Alexa once struggled to recognise voice commands spoken in certain accents, and an Amazon machine-learning tool for screening job candidates – skewed by the same kind of unrepresentative training data Buolamwini encountered – ended up favouring men. Lawmakers are starting to take notice: in April, US legislators began proposing bills to fight algorithmic bias. For advocates like Buolamwini, the race is on to root out these biases before the technology becomes even more pervasive.