Director Shalini Kantayya and UI Professor Deborah Whaley in Conversation
FilmScene — Facebook Live; Friday, Nov. 20 at 8 p.m.
“The more that humans share with me, the more I learn.”
This is the somewhat ominous voiceover, sourced from the Microsoft AI chatbot Tay, that opens Shalini Kantayya’s equally ominous documentary, Coded Bias. The line is ostensibly meant to reassure us that robots rely on human inputs in order to function, and that only through increased interaction with humans can they “learn” new things. Kantayya’s excellent film should convince us that this relationship is not necessarily good.
I will confess to being one of what I suspect are a lot of Americans who don’t really understand what artificial intelligence is or how it works, despite the fact that it seems to intrude on our lives on an almost incessant basis. To my limited and tech-averse sensibilities, AI seems a lot like HAL in 2001: A Space Odyssey or like the numbered social overlords in The Prisoner — disembodied, all-knowing and all-seeing sources of information that order their social spaces and impose, at least indirectly, their own sense of correctness by controlling the actions of other players (i.e. most humans) toward a desired outcome.
Before watching Coded Bias, I could pretty easily have been convinced that this was the perspective of someone uninformed, unscientific and maybe a little paranoid. After watching this bracing film, I now feel that my ill-informed, paranoid view is actually a pretty accurate understanding of what’s going on.
For me, this is bad. For non-white and non-male people who have ever actually used these algorithms on the internet, this can be *very* bad. Lose-your-job bad. Lose-your-housing bad. Kill-your-credit-score bad. Even reincarcerate-you bad. This is because the data and information that inform and direct the algorithms which make crucial determinations on important issues for everyday life are mostly based on inputs and data generated by a very select number of white people.
The core ideas of this film come, ironically enough, out of a computer science project at MIT in which programmer Joy Buolamwini, a dark-skinned Ghanaian-American, is trying to develop software for an “inspiration mirror.” In this mirror, the user looks into a camera and uses a computer-generated archive of heroic figures whose faces are superimposed on his or her own in order to generate the self-assurance to start the day with confidence.
Leaving aside the problem of why you would ever want Gandhi or Malcolm X looking at your sheet-creased, unkempt face every morning, challenging you to be your better self, this project leads Buolamwini to a remarkable discovery: The facial recognition software that she is using actually does not recognize her face — that is, until she puts on a white mask.
This is because the inputs into the program and most of the data used to “perfect” it were all developed by white people, specifically nerdy white men.
We learn how this process is especially dangerous today because most of the programs and algorithms used by big tech corporations are programmed not by being told a specific task to accomplish, but by being given a general task and access to lots of data to “learn” how to accomplish that task most efficiently. This is the type of programming used in everything from predicting what you might like to buy from Amazon to determining if you are a good candidate for a particular job, or school, or how much banks should trust that you will pay back a loan.
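For readers who, like me, come to this tech-averse, the distinction between being told a rule and learning one from data can be made concrete. Here is a minimal, purely illustrative Python sketch — every name, score and number in it is invented, not drawn from the film — of a toy “model” that infers a pass/fail cutoff from past decisions rather than being handed one. If the historical examples it learns from are skewed, the rule it learns is skewed too:

```python
# Toy illustration: the model is never told a rule; it infers one
# from training examples. All data here is hypothetical.

def train_threshold(examples):
    """'Learn' a pass/fail cutoff as the midpoint between the average
    score of approved and rejected examples in the training history."""
    approved = [score for score, ok in examples if ok]
    rejected = [score for score, ok in examples if not ok]
    avg_approved = sum(approved) / len(approved)
    avg_rejected = sum(rejected) / len(rejected)
    return (avg_approved + avg_rejected) / 2

# A history drawn almost entirely from one narrow group of applicants:
skewed_history = [(85, True), (80, True), (75, True), (55, False), (60, False)]
cutoff = train_threshold(skewed_history)

def predict(score):
    """Apply the learned rule to a new applicant."""
    return score > cutoff

print(cutoff)       # 68.75 -- a rule no human ever wrote down
print(predict(65))  # False -- rejects anyone unlike the training group
```

The point of the sketch is the film’s point: nobody typed the number 68.75 into the program, so nobody is obviously accountable for it — it simply fell out of whatever history the system was fed.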
In some ways, we knew all this already. Tools developed by humans will of course contain the stains of human prejudice, no matter how independently they may seem to operate. After all, as Cathy O’Neil, one of the authors quoted extensively in this film, puts it, “Before there were algorithms, there were humans.”
The distressing aspects, though, are the sheer scale of the data being gathered and the lack of recourse or accountability when the technology gets it wrong. We learn, for instance, that at the time of the film’s making, there were about 6 million CCTV cameras spread throughout the UK and that it is actually a crime to cover your face when passing them on the street. We also learn of New Yorkers who were threatened with eviction if they failed to submit to facial recognition technology to obtain entrance to their buildings.
We learn — unsurprisingly — that about nine massive tech companies control almost all of the algorithms for generating and manipulating this data, that this information is a closely guarded corporate secret and that these companies exert a massive amount of political influence that affects public policy and undermines regulation and accountability. (The fact that, until 2018, Google’s official code of conduct manual included the earnest directive ‘don’t be evil’, today seems both absurdly naïve and tragically hilarious.)
There is some inevitable redundancy to Kantayya’s movie. Most viewers will likely already be aware that white people tend to win at stuff when they write the rules of the contest. American corporate capitalism controls most of your life already and shows no signs of loosening its grip, at least on the world of technology. There is also a little disjointedness — Kantayya is sometimes guilty of conflating the problems of the data-based methodology of AI with the much less subtle political motives of the surveillance state. “Coded bias” is not the same as just straight spying on you. The importance of the film’s argument, though, along with its efficient run time, makes these problems trivial.
It has become a cliché of modern revisionist history that “only the winners get to write the story of the past.” It may now be the case that the “winners” are also writing the story of the future. I am not a big believer in trigger warnings, and so will put this one at the end where it will not do you much good, but if your day involves watching Coded Bias, you should probably stay off all your social media for the duration of the film. But by all means, do pay attention, because the computers definitely are.
Coded Bias streams for free at FilmScene starting Wednesday, Nov. 18, as part of the reimagined Science on Screen series. Kantayya will be in conversation with University of Iowa professor Deborah Whaley live on Facebook on Friday, Nov. 20 at 8 p.m. FilmScene encourages interested viewers to see the film before the conversation and arrive with questions. The conversation will be added to the end of the film for viewers watching it afterward.