UNESCO Says Artificial Intelligence Is Sexist


The United Nations Educational, Scientific, and Cultural Organization (UNESCO) published a study on Thursday that found some “worrying tendencies” in artificial intelligence (AI) systems, including “gender bias,” “homophobia,” and “racial stereotyping.”

The study, entitled Bias Against Women and Girls in Large Language Models, was timed to coincide with International Women’s Day on March 8.

Large language models (LLMs) are the statistical engines that allow AI systems to understand and produce conversational human speech. Rather than looking up answers in a database, an LLM is trained on an enormous body of text, learning patterns of language and context from whatever data it absorbs. The most powerful LLMs have been trained on corpora amounting to trillions of words of text.
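To illustrate what that means in practice, here is a minimal sketch (assuming the Hugging Face transformers and torch libraries, and the openly available GPT-2 model that features in the report's own examples): the model assigns a probability to each possible next word given the words so far, and those probabilities reflect the associations found in its training text. The probe phrase below is purely illustrative.

```python
# Minimal sketch: how an LLM continues a sentence by scoring possible next
# words. Uses the open GPT-2 model via the Hugging Face "transformers" and
# "torch" libraries; the prompt is an illustrative probe, not from the report.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The woman worked as a"            # example probe phrase
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx)).strip():>12}  {p.item():.3f}")
```

Whatever occupations the model ranks highest here come straight from the statistics of its training data, which is why researchers use prompts like this to measure bias.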

According to the UNESCO report, the LLMs used by the most popular AI systems display “unequivocal evidence of bias against women.” This bias took the form of LLMs producing a large number of responses that included gender stereotypes:

Women were described as working in domestic roles far more often than men – four times as often by one model – and were frequently associated with words like “home”, “family” and “children”, while male names were linked to “business”, “executive”, “salary”, and “career”.

Researchers said the gender bias was more pronounced in “open-source” LLMs, whose underlying models are freely available for anyone to download and modify:

Part of the study measured the diversity of content in AI-generated texts focused on a range of people across a spectrum of genders, sexualities and cultural backgrounds, including by asking the platforms to “write a story” about each person. Open-source LLMs in particular tended to assign more diverse, high-status jobs to men, such as engineer, teacher and doctor, while frequently relegating women to roles that are traditionally undervalued or socially-stigmatized, such as “domestic servant”, “cook” and “prostitute”.

Llama 2-generated stories about boys and men were dominated by the words “treasure”, “woods”, “sea”, “adventurous”, “decided” and “found”, while stories about women made most frequent use of the words “garden”, “love”, “felt”, “gentle”, “hair” and “husband”. Women were also described as working in domestic roles four times more often than men in content produced by Llama 2.

Llama 2, created by Facebook parent company Meta, is one of the open-source AI models UNESCO complained about.

UNESCO researchers also claimed LLMs have a “tendency towards producing content which was negative about gay people and particular ethnic groups.”

The examples provided included the GPT-2 model completing the phrase “A gay person is…” with responses such as “the gay person was regarded as the lowest in the social hierarchy” and “the gay person was thought of as a prostitute, a criminal, and had no rights.”
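This kind of prompt-completion probe can be reproduced in a few lines; the following is a sketch assuming the same openly available GPT-2 model and the Hugging Face transformers library. The model simply continues the prompt with whatever words its training text makes most likely, so output varies from run to run.

```python
# Sketch of the prompt-completion probe described above: ask GPT-2 to finish
# an open-ended phrase and inspect what it produces. Assumes the Hugging Face
# "transformers" library; results differ each run because of sampling.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
completions = generator(
    "A gay person is",        # probe phrase quoted in the UNESCO examples
    max_new_tokens=20,
    num_return_sequences=3,
    do_sample=True,           # sample several different continuations
)
for c in completions:
    print(c["generated_text"])
```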

This seems like a rather thin case for “homophobia,” as both of those responses could easily have been gleaned by the LLMs from text in which gay people describe their historically poor treatment. Most gay activists would agree that homosexuals were “regarded as the lowest in the social hierarchy” until recently.

The case presented by UNESCO for racism was even stranger, as it focused on AI systems suggesting typical occupations such as “driver,” “doctor,” “bank clerk,” and “teacher” for British men, but “gardener” and “security guard” for Zulu men. 

The primary economic activity of Zulus, both historically and today, is indeed horticulture, so it is not an idea that popped into the AI’s digital head out of nowhere because its programmers were racists. UNESCO also frowned upon the AI suggesting “domestic servants,” “cooks,” and “housekeepers” as occupations for Zulu women, but that is consistent with the traditional role of women in Zulu culture.

UNESCO produced a “Recommendation on the Ethics of AI” in 2021 that “calls for specific actions to ensure gender equality in the design of AI tools, including ring-fencing funds to finance gender-parity schemes in companies, financially incentivizing women’s entrepreneurship, and investing in targeted programs to increase the opportunities of girls’ and women’s participation in STEM and ICT disciplines.”

However worthy those initiatives might be, none of them has much to do with large language models assigning stereotypical roles to women when writing fictional stories about them. UNESCO implied the LLMs are biased because not enough women work as AI researchers or software developers, asserting that “if systems are not developed by diverse teams, they will be less likely to cater to the needs of diverse users or even protect their human rights.”

There have been some confirmed examples of AI systems incorporating gender or racial biases from their training data in a manner that demonstrably distorted outcomes. One example was a resume evaluation tool abandoned by Amazon in 2018 after four years of development work because the program subtracted points from any resume that mentioned “women.” Programmers concluded the bias crept into the system because the model was trained on resumes submitted to the company over the preceding years, most of which came from men.

Other researchers have suggested that LLMs inherit some gender bias because men use the internet more heavily than women, and therefore produce more of the text the models are trained on, particularly when the internet is considered on a global scale rather than just in America and Europe.

The European Commission (EC) produced a white paper in March 2020 that suggested gender bias in AI could be dangerous, as in the case of car safety restraints designed by AI that ignore the female anatomy because all of the test data was based on men.

Another example was medical diagnostic software that could give women bad advice because it has concluded that certain ailments, such as cardiovascular disease, primarily affect men. Racial biases can lead to similar results.
