‘Junk Science’: LGBT Groups Attack Scientific Researchers Who Created Gay-Detecting AI

LOS ANGELES, CA - JUNE 13: A defiant fist is raised at a vigil for the worst mass shooting in United States history on June 13, 2016 in Los Angeles, United States. A gunman killed 49 people and wounded 53 others at a gay nightclub in Orlando, Florida early yesterday.
David McNew/Getty Images

Popular LGBT groups GLAAD and The Human Rights Campaign are attacking the scientific researchers who created A.I. that can detect whether a person is gay or straight from photos.

The study from Stanford University revealed “that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women.” However, the experiments did not include people who are transgender or bisexual, and black people were also excluded.

According to the New York Post, “Advocates called the research ‘junk science,’ claiming that not only could the technology out people, but it could put their lives at risk – especially in brutal regimes that view homosexuality as a punishable offense.”

In an open letter, GLAAD and The Human Rights Campaign called upon the Stanford University researchers and the media to backtrack on their reports.

“GLAAD, the world’s largest LGBTQ media advocacy organization, and the Human Rights Campaign, the nation’s largest LGBTQ civil rights organization, today called on Stanford University and responsible media outlets to expose dangerous and flawed research that could cause harm to LGBTQ people around the world,” they declared. “A professor affiliated with Stanford University has published a research study that resulted in several media outlets wrongfully suggesting that artificial intelligence (AI) can be used to detect sexual orientation. Further, GLAAD and HRC today urged all media who either covered the study or plan to in future coverage to include the myriad flaws in the study’s methodology — including that it made inaccurate assumptions, categorically left out any non-white subjects, has not been peer reviewed, and many other issues enumerated below.”

Despite the fact that the A.I. demonstrated a success rate ranging from 74 to 81 percent in determining whether people were gay or straight, GLAAD’s Chief Digital Officer, Jim Halloran, claimed, “Technology cannot identify someone’s sexual orientation.”

“What their technology can recognize is a pattern that found a small subset of out white gay and lesbian people on dating sites who look similar. Those two findings should not be conflated,” he continued. “This research isn’t science or news, but it’s a description of beauty standards on dating sites that ignores huge segments of the LGBTQ community, including people of color, transgender people, older individuals, and other LGBTQ people who don’t want to post photos on dating sites.”

“At a time where minority groups are being targeted, these reckless findings could serve as a weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous,” Halloran concluded.

Ashland Johnson, the director of public education for the Human Rights Campaign, also claimed, “This is dangerously bad information that will likely be taken out of context, is based on flawed assumptions, and threatens the safety and privacy of LGBTQ and non-LGBTQ people alike.”

“Imagine for a moment the potential consequences if this flawed research were used to support a brutal regime’s efforts to identify and/or persecute people they believed to be gay,” Johnson proclaimed. “Stanford should distance itself from such junk science rather than lending its name and credibility to research that is dangerously flawed and leaves the world — and in this case, millions of people’s lives — worse and less safe than before.”

Charlie Nash covers technology and LGBT news for Breitbart News. You can follow him on Twitter @MrNashington and Gab @Nash, or like his page at Facebook.
