Facial Recognition Can Now Tell Whether You’re Gay Or Straight

Artificial intelligence has always had the power to amaze and terrify in equal measure – and one of its most recent developments does just that. Newly developed facial recognition technology can accurately guess whether people are gay or straight based on photographs of their faces. And while it’s an astonishing AI advancement, it raises the question: could this tech ever be used for good?

Tell us more…

The study, conducted at Stanford University, found that a computer algorithm could distinguish between gay and straight faces. Michal Kosinski, the psychologist who led the study, and Stanford computer scientist Yilun Wang used 35,326 photographs of people taken from dating websites and found the machine was able to identify their sexual orientation with a high degree of accuracy.

Presented with two pictures – one of a gay person and one of a straight person – the AI correctly identified the sexuality of male faces 81% of the time, and of female faces 74% of the time. Human judges, by comparison, picked correctly for men and women just 61% and 54% of the time, respectively.
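For the technically minded, those 81% and 74% figures describe a forced-choice test – the algorithm is shown one gay and one straight face and must say which is which – rather than an accuracy for labelling single photos. A minimal sketch of how such a pairwise score is computed, using made-up placeholder scores rather than the study’s actual model, might look like this:

```python
# Illustrative sketch of a pairwise ("forced choice") evaluation.
# The model assigns each face a score; for every gay/straight pair it
# "wins" if the gay face receives the higher score. Scores below are
# random placeholders, not outputs of the study's actual classifier.
import random

def pairwise_accuracy(gay_scores, straight_scores):
    """Fraction of (gay, straight) pairs ranked correctly.

    Counting ties as half-wins makes this the same quantity as the
    area under the ROC curve (AUC), which is how forced-choice
    accuracy figures like 81% / 74% are usually read.
    """
    wins = ties = 0
    total = len(gay_scores) * len(straight_scores)
    for g in gay_scores:
        for s in straight_scores:
            if g > s:
                wins += 1
            elif g == s:
                ties += 1
    return (wins + 0.5 * ties) / total

# Toy data: two score distributions nudged apart to mimic a weak signal.
random.seed(0)
gay_scores = [random.gauss(0.6, 0.2) for _ in range(100)]
straight_scores = [random.gauss(0.4, 0.2) for _ in range(100)]
print(f"pairwise accuracy: {pairwise_accuracy(gay_scores, straight_scores):.2f}")
```

Read this way, the numbers mean the model ranks pairs better than chance – not that it can reliably label any one individual’s sexuality from a single photograph.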

Why is this so dangerous?

The study has sparked debate about the ethics of this kind of facial-detection technology, particularly among LGBTQ+ rights activists who are concerned it could be used against, or to harm, queer people. Both GLAAD and the Human Rights Campaign (HRC) have criticised the study as “dangerous and flawed” and warned that it could be used to ‘out’ gay people across the world, putting them at significant risk.

Ashland Johnson, HRC’s Director of Public Education and Research, addressed the dangers of the new AI in a joint statement with GLAAD, urging Stanford University to cut ties with Kosinski: “Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay. Stanford should distance itself from such junk science rather than lending its name and credibility to research that is dangerously flawed and leaves the world – and in this case, millions of people’s lives – worse and less safe than before.”

But Kosinski maintains that the research actually supports LGBTQ+ rights, arguing that it provides further evidence that sexual orientation is biological and not a choice.

Who is Michal Kosinski?

Kosinski’s career revolves around groundbreaking research into new technologies; not only AI, but also the art of mass persuasion – according to the Guardian, it was the latter that inspired the creation of the disgraced (and now defunct) data analytics firm Cambridge Analytica. Kosinski showed that activity on Facebook could be used to infer personality traits – a discovery that was exploited by Cambridge Analytica and helped make Donald Trump US president.

In this case, Kosinski said he didn’t set out to create technology that could predict sexuality, but rather that it was something he “stumbled upon”. He claims he originally had no intention of publishing the study: “I had a very good life without this paper being out. But then a colleague asked me if I would be able to look myself in the mirror if, one day, a company or a government deployed a similar technique to hurt people… There is a kind of moral question here.”

But despite his self-professed moral dilemma, there have been doubts about where Kosinski plans to take this research. In a Guardian profile, we meet the 36-year-old as he gives a talk to officials in Russia – a country with some of the harshest anti-LGBTQ+ laws in the world – on how AI is changing society. Writer Paul Lewis noticed a change in atmosphere when the Russia talk came up: “He becomes prickly when I press him on Russia… Did he talk about using facial-recognition technology to detect sexuality? Yes, he says – but this talk was no different from other presentations in which he discussed the same research.”

For most, the moral question that plagued Kosinski is clear-cut; a no-brainer. Nick Rule, Associate Professor of Psychology at the University of Toronto, said the technology could most definitely be used for corrupt purposes in the future. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad,” he said. “We should all be collectively concerned.”
