Unlike a lot of email signatures these days, Gmail doesn't specify its preferred pronoun.
To avoid perpetuating gender bias, Gmail stopped its "Smart Compose" text prediction feature — which suggests likely sentence endings and other phrases as Gmail users compose emails — from suggesting pronouns, Reuters reported Tuesday.
Google told Mashable that Smart Compose launched in May with that bias-averting policy already in place. However, Gmail product manager Paul Lambert only recently revealed this intentional move in interviews with Reuters.
Apparently, during product testing, a company researcher noticed that Smart Compose was assigning gendered pronouns in a way that mirrored some real-world gender bias: It automatically ascribed a "him" pronoun to a person only previously described as an "investor." In other words, it assumed that the investor — a role in a largely male-dominated field — was a man.
Studies show that in language, gender bias — or assuming someone's gender based on stereotypes or tendencies associated with men or women — has the power to both "perpetuate and reproduce" bias in the way people treat each other, and the way we think of ourselves.
"Gender-biased language is harmful because it limits all of us," Toni Van Pelt, the president of the National Organization for Women (NOW) said. "If a woman is using AI, and it refers to an engineer as a 'him,' it may get in her brain that only men make good engineers. It limits our scope of dreaming. That’s why it sets us back so far."
Gmail reportedly attempted several fixes for its own subtle gender bias, but none of them were perfect. So the Smart Compose architects decided the best solution was to remove pronoun suggestions altogether.
"At Google, we are actively researching unintended bias and mitigation strategies because we are committed to making products that work well for everyone," a Google spokesperson told Mashable over email. "We noticed the pronoun bias in January 2018 and took measures to counter it (as reported by Reuters) before launching Smart Compose to users in May 2018."
But an inherently sexist AI is not to blame for the gender bias within the algorithm. As with other AI tools, the bias at the root of Google's pronoun problem is a human one.
"Algorithms are reproducing the biases that we already have in our language," Calvin Lai, a Washington University in St. Louis professor and research director for the implicit bias research center Project Implicit told Mashable. "The algorithm doesn’t have a sense of what’s socially or morally acceptable."
Both Lai and Saska Mojsilovic, IBM's AI Science fellow specializing in algorithmic bias, explained that bias usually enters algorithms through the data they learn from, also known as "training data."
Mojsilovic said, "Training data can reflect bias in some way, shape, or form, because as a society, this is what we generate."
A Natural Language Generator (NLG) like Smart Compose learns how to "speak" by reading and replicating the words of humans. So if data contains overt or subconscious bias, expressed in language, then AI learning from that data will reproduce those tendencies.
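The mechanism Lai and Mojsilovic describe can be seen in a toy next-word predictor. The sketch below is a minimal illustration of the general idea, not Google's actual system; the corpus and function names are invented for the example. It counts trigram continuations in a small, deliberately skewed corpus and then suggests the most frequent continuation — so the skew in the data becomes a skew in the suggestions:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for biased training data (invented for
# illustration). Male pronouns follow "investor" more often than
# female ones, mirroring the skew Reuters described.
corpus = [
    "the investor said he would invest",
    "the investor said he was interested",
    "the investor said she would invest",
    "the nurse said she was on shift",
]

# Count which word follows each two-word context (a trigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b, c in zip(words, words[1:], words[2:]):
        follows[(a, b)][c] += 1

def suggest(context):
    """Suggest the most frequent continuation seen in training data."""
    counts = follows[context]
    return counts.most_common(1)[0][0] if counts else None

# The model simply mirrors its data: a skewed corpus yields a skewed
# pronoun suggestion after "investor said".
print(suggest(("investor", "said")))  # -> "he"
print(suggest(("nurse", "said")))     # -> "she"
```

A production model like Smart Compose is vastly more sophisticated, but the underlying dynamic is the same: with no notion of what is socially acceptable, the model reproduces whatever statistical regularities — including biased ones — its training text contains.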
Another sticking point is that bias in text generation is often difficult to articulate, and very dependent on context. And because the idea of bias and gender can be more interpretive or subjective, it can be harder to teach a machine to recognize and eradicate it.
"For us, as scientists and researchers, text is a more difficult category to master than other data types," Mojsilovic said. "Because text is fluid, and it's very hard to define what it means to be biased."
"A lot of times we think about gender bias in an old-school explicit way," Lai said. "But a lot of it happens much more subtly, on the basic assumptions that we have of other people."
Google is aware of the challenges that arise from training data. The company confirmed that it tests its algorithm training data for bias before deploying it. This is a continual process.
"As language understanding models use billions of common phrases and sentences to automatically learn about the world, it can also reflect human cognitive biases by default," a Google spokesperson told Mashable over email. "Being aware of this is a good start, and the conversation around how to handle it is ongoing."
Moreover, Gmail's Smart Compose presents its own set of challenges beyond other NLG tools. At the launch of Smart Compose's predecessor, Smart Reply, Google wrote that its NLG tools learn from and tailor their suggestions to individual Gmail users. So even if the algorithm was trained on data tested for bias, the very real and flawed humans it continues to learn from may have prejudices that they subconsciously express through text.
"They’re ultimately based on how people are using the language," Lai said. "And sometimes that might reflect something accurate about the world. And sometimes it might not."
At this point, removing pronoun suggestions may be the best option to avoid gender bias, or to avoid prescribing a pronoun that doesn't match someone's gender identity. NOW's Toni Van Pelt applauds the decision, and sees sensitivity around pronouns as an admirable move for an industry leader like Google.
"I think it’s really important that they were aware of their prejudice, they were aware of their bias, and did the right thing in being conservative in eliminating this," Van Pelt said. "They are leading by example for the other AI companies."
But it's also a temporary fix to the pervasive problem of making sure AI doesn't reflect and enhance our own biases.
"It leaves it up to the user to make up their own minds, rather than put the responsibility on the algorithm’s shoulders," Lai said. "That seems to be one way to absolve or remain a neutral party."
This is a problem Google is proactively working on. The company has released multiple studies, tools, and other initiatives to help developers eradicate bias. And it's working to define criteria for "fairness," which is a prerequisite for removing bias from AI NLG tools in the first place.
Other researchers are also leading the way. IBM has built a tool anyone can use to assess training data. Lai's consortium Project Implicit studies the phenomenon of, and potential preventions for, implicit bias. And, crucially, hiring a diverse workforce — one that reflects the real world — is paramount to creating equitable and moral AI.
"We hold these algorithms, perhaps rightfully so, to a higher standard than we hold every day people," Lai said. "There is a vested interest in terms of our society’s values and morals to be gender neutral in many of these cases."
The silver lining: The development of AI is bringing to the fore just how deeply these biases are ingrained in our collective language. Recognizing bias as we build these tools provides the opportunity to help correct it.
"We are living in a world that is full of biases, the biases we created as humans," Mojsilovic said. "If we are really diligent about it, think about the outcome that we can end up with the technology that can actually be better than us, or help us be better, because it will teach us or point out what we ourselves might have missed."