Not too long ago, a Google employee, James Damore, was fired for writing an unsavory memo reinforcing gender stereotypes in tech. Coincidentally, four months prior to the event, Princeton University published research claiming that Artificial Intelligence (AI) is prone to many of the same biases as humans, including the incorporation of racist and sexist stereotypes into its intelligence. The research was featured in many popular media publications, including Wired and The Guardian.
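That research measured bias directly in word embeddings, the numeric vectors language models learn from large text corpora: a profession whose vector sits closer to one gender's words than to the other's has absorbed that stereotype from its training data. Below is a minimal sketch of the idea in Python; the three-dimensional vectors are invented purely for illustration, not real embeddings (studies like this one use vectors trained on billions of words).

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-dimensional "embeddings", invented for illustration only.
vectors = {
    "he":        np.array([ 0.9, 0.1, 0.0]),
    "she":       np.array([-0.9, 0.1, 0.0]),
    "engineer":  np.array([ 0.7, 0.5, 0.1]),
    "homemaker": np.array([-0.7, 0.5, 0.1]),
}

# A word carries a gender stereotype if it sits closer to one
# gendered vector than the other; the difference in similarity
# serves as a simple bias score.
for word in ("engineer", "homemaker"):
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: bias toward 'he' = {bias:+.2f}")
```

Run on real embeddings trained from web text, the same kind of measurement surfaces the gender and race associations the researchers reported.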
As a technologist and security researcher, I like to question a lot of things, and I often find myself at the intersection of ‘socio-technological’ issues.
Let’s set aside the morals and ethics of stereotyping for a second. I speak as a human who is against stereotyping in general, and who holds an immigrant card, a gay card, and a color card, if that adds any credibility. Thinking critically, though: are stereotypes in an AI the result of flawed algorithms picking up on human bias, or of a refined intelligence that delivers better efficiency, though not 100% accuracy?
In 2012, Bruce Schneier, one of the most renowned security figures in the industry, posted a memo questioning airport security screening tactics, which was also featured in one of his books. He ultimately concluded…