Not too long ago, a Google employee, James Damore, was fired for writing an unsavory memo reinforcing gender stereotypes in tech. Coincidentally, four months prior to the event, Princeton University published research claiming that, just like humans, Artificial Intelligence (AI) is prone to many of the same biases, including the incorporation of racist and sexist stereotypes into its intelligence. The research was featured in many popular media publications, including Wired and The Guardian.
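The Princeton study worked with word embeddings: models that place words with similar contexts close together in vector space, so measuring the similarity between vectors exposes the associations a model has absorbed from human text. The sketch below illustrates the idea with made-up three-dimensional vectors (real embeddings such as GloVe use hundreds of dimensions, and the words chosen here are purely hypothetical):

```python
# Toy illustration of how learned word associations can be measured.
# The vectors are fabricated for this example, not taken from any real model.
import math

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-d "embeddings" standing in for vectors a model would learn.
vectors = {
    "engineer": [0.9, 0.1, 0.3],
    "nurse":    [0.2, 0.8, 0.4],
    "he":       [0.8, 0.2, 0.3],
    "she":      [0.3, 0.9, 0.4],
}

def association(word, attr_a, attr_b):
    """Positive result: `word` sits closer to attr_a; negative: closer to attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("engineer", "he", "she"))  # positive in this toy data
print(association("nurse", "he", "she"))     # negative in this toy data
```

If the text a model trains on associates occupations with genders, the geometry of the learned vectors will reflect that association, which is essentially what the researchers quantified.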
As a technologist and security researcher, I like questioning a lot of things and often find myself at the intersection of 'socio-technological' issues.
Let's set the morals and ethics of stereotyping aside for a second. I speak as a human who is against stereotyping in general and who holds an immigrant card, a gay card, and a color card, if that adds any credibility. But if we were to think critically: are stereotypes in an AI the result of flawed algorithms picking up on human bias, or of refined intelligence that improves efficiency even though it can't guarantee 100% accuracy?
In 2012, Bruce Schneier, one of the most renowned security figures in the industry, posted a memo about airport security, also featured in one of his books, questioning airport security screening tactics. He ultimately concluded that 'profiling' at airport security checkpoints "is a bad idea," for obvious reasons. Yet in a separate post, he commented on "how we should adopt more of the Israeli security model here in the U.S." The same article mentions that Rafi Sela, the president of AR Challenges, a global transportation security consultancy, doesn't believe profiling is wrong. Whether profiling is right or not, I would certainly argue that TSA's security theater, as Schneier puts it, through which every. single. person and bag has to pass, is no more efficient if it still misses critical threats; in fact, the process lacks any and all intelligence.
As an Arab-looking man who shares some Middle Eastern ancestry, I was, on occasion, checked more than once when transiting through London's majestic Heathrow Airport, and one time at a Reform Jewish synagogue known for its belief in being open to all, welcoming, and promoting diversity. I was caught off guard at times and sometimes a little hurt, let's just say, but not as offended: the security personnel are just doing their jobs, albeit bending the rules. After all, no one would want it on their conscience that they could have screened that exotic-looking guy when they had the chance, had I ended up being the bad guy.
Let us apply the concepts of profiling, stereotyping, and constant relearning to inanimate entities, and we have essentially created an entire field of study we call machine learning, which evolved from parent fields like artificial intelligence, pattern recognition, and statistics. Amazon is able to suggest items you "may be interested in" after 'profiling' you based on your previous purchases, however accurate or inaccurate the suggestions may be. Netflix and Hulu are likely doing similar things: learning your interests over time, programming themselves, relearning and reprogramming themselves based on new data, and presenting what's relevant to you based on a calculated analysis, even if it lacks 100% accuracy.
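The mechanics behind this kind of 'profiling' can be surprisingly simple. Here is a minimal sketch, assuming a toy purchase history rather than anything resembling Amazon's or Netflix's real systems: items are scored by how often they co-occur with items the user already has, and re-running it on new data is the "relearning" step.

```python
# Minimal co-occurrence recommender sketch (illustrative data, not a real system).
from collections import Counter
from itertools import combinations

def build_cooccurrence(histories):
    """Count how often each ordered pair of items appears in the same history."""
    co = Counter()
    for items in histories:
        for a, b in combinations(sorted(set(items)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def recommend(user_items, co, top_n=2):
    """Score items the user doesn't have by co-occurrence with items they do."""
    scores = Counter()
    for owned in user_items:
        for (a, b), count in co.items():
            if a == owned and b not in user_items:
                scores[b] += count
    return [item for item, _ in scores.most_common(top_n)]

histories = [
    ["camera", "tripod", "sd-card"],
    ["camera", "sd-card"],
    ["camera", "tripod"],
    ["laptop", "mouse"],
]
co = build_cooccurrence(histories)
print(recommend(["camera"], co))  # suggests items bought alongside cameras
```

Rebuilding the co-occurrence table as new purchase histories arrive is what makes the profile adapt over time; and, as with the stereotyping question above, the output is only as fair as the data it was built from.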
I am aware that science and math do not always translate "as is" into our society. For example, any economist knows that imposing a price ceiling causes deadweight loss, a negative welfare effect and an economic inefficiency from a financial standpoint, yet governments do it out of 'fairness' and empathy. My point in writing this piece is not to say whether stereotypes or profiling in real life are okay, but rather to make you think critically and question: be it machines or humans, if observing behaviors and patterns, applying them across a wider data set, and learning, relearning, and improving on that data isn't intelligence, then what is?
Usually it's humans who teach machines everything, but sometimes we may be able to learn more from machines about our own behavior. In this case, the findings in Princeton's research raise the question, the morals of it aside: is AI indeed intelligent? And if so, should it be adapted to be politically correct?
© Ax Sharma. All Rights Reserved.