The dangers posed by facial recognition, such as mass surveillance and mistaken identity, have been widely discussed in recent years. But digital rights groups say an equally insidious use case is currently sneaking under the radar: using the same technology to predict someone’s gender. Now, a new campaign has been launched to ban these gender recognition technologies, with more than 60 NGOs sending a letter to the European Commission asking it to outlaw their use.
“Trying to predict someone’s gender from digitized clues is fundamentally flawed,” says Os Keyes, a researcher who’s written extensively on the topic. This technology tends to reduce gender to a simplistic binary and, as a result, is often harmful to trans people who might not fit into these narrow categories.
When the resulting systems are used for things like limiting entry to physical spaces or verifying someone’s identity for an online service, they lead to discrimination.
Commercial facial recognition systems, including those sold by big tech companies like Amazon and Microsoft, frequently offer gender classification as a standard feature.
With facial recognition tech, if someone has short hair, they’re categorized as male; if they’re wearing makeup, they’re female. Similar assumptions are made based on biometric data like bone structure and face shape. The result is that people who don’t fit easily into these two categories — like many trans individuals — face discrimination.
“These systems don’t just fail to recognize that trans people exist. They literally can’t recognize that trans people exist,” says Keyes.
Current applications of this gender recognition tech include digital billboards that analyze passersby to serve them targeted advertisements; digital spaces like “girls-only” social app Giggle, which admits people by guessing their gender from selfies; and marketing stunts, like a campaign to celebrate Equal Pay Day in Berlin that offered discounted subway tickets to women identified by facial scans. Researchers have also discussed potentially far more troubling use cases, like deploying the technology to limit entry to gendered areas such as bathrooms and locker rooms.
Being rejected by a machine in such a scenario has the potential to be not only humiliating and inconvenient but also to trigger an even more severe reaction from others present.
Ultimately, technology that tries to reduce the world to binary classifications based on simple heuristics will always fail when faced with the variety and complexity of human expression, inevitably hurting those who are already marginalized by society.
Source: The Verge