Is AI Censoring Us?

Partner content from our friends at Emory Goizueta.

Artificial intelligence has been hogging headlines around the world in recent months. In late March 2023, an unprecedented coalition of tech leaders and researchers signed an open letter calling for a six-month moratorium on large-scale AI training. The race to build ever more powerful artificial minds should be paused, argued signatories (including Elon Musk), to give humanity time to review and reassess the risks of developing “human-competitive intelligence” that “no one – not even their creators – can understand, predict, or reliably control.”

Concerns about the unchecked rise of AI are not new, and global media are increasingly sounding the alarm, citing risks that range from invasion of privacy to an existential threat to humanity itself.

Weighing in on this debate with compelling new evidence about the “unintended consequences” of AI is research by Goizueta’s Ramnath Chellappa and Information Systems PhD candidate Jonathan Gomez Martinez.

Uncovering the Threat 

Ramnath K. Chellappa

Their paper, Content Moderation and AI: Impact on Minority Communities, takes a hard look at how the use of AI in social media could disadvantage LGBTQ+ users. And what they find is worrying.  

Chellappa, who is Goizueta Foundation Term Professor of Information Systems & Operations Management, explains that he and Gomez Martinez homed in on Twitter to explore how unchecked AI-based language moderation might mistakenly censor “otherwise toxic” language by failing to understand the context and nuance of the LGBTQ+ lexicon. A prime example is “reclaimed language”: verbiage that would be a slur in other contexts but becomes prosocial when used by the community it originally targeted.

“This is a community that has ‘reclaimed’ certain words and expressions that might be considered offensive in other contexts. Terms like ‘queer’ are used within the community both in jest and as a marker of identity and belonging. But if used by those outside the community, this kind of language could be deemed inflammatory or offensive.” 

Jonathan Gomez Martinez

Gomez Martinez adds: “We wanted to measure the extent to which AI’s lack of a nuanced understanding of what is ‘acceptable’ affects minority users’ online interactions. As humans, we understand that marginalized communities have long used ‘reclaimed words’ both in jest and as a kind of rallying cry. Our intuition was that the machine simply wouldn’t understand this without context—context that is more immediately apparent to people.” 
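To make that intuition concrete, here is a minimal sketch of what context-blind moderation looks like in code. It is purely illustrative: the article doesn’t name the study’s actual tooling, and the model used here (unitary/toxic-bert, a publicly available toxicity classifier) and the example sentences are assumptions chosen for convenience.

```python
# Illustrative only: the study's moderation tooling isn't named in the
# article, and "unitary/toxic-bert" (a public Detoxify checkpoint) is an
# assumption, not the researchers' actual tool.
from transformers import pipeline

classifier = pipeline(
    "text-classification", model="unitary/toxic-bert", top_k=None
)

examples = [
    "So proud of my queer friends marching at Pride this weekend!",  # reclaimed, prosocial
    "The weather in Atlanta is lovely today.",                       # neutral baseline
]

# With top_k=None the pipeline returns every label's score for each input.
for text, result in zip(examples, classifier(examples)):
    scores = {item["label"]: item["score"] for item in result}
    print(f"{text!r} -> toxic={scores.get('toxic', 0.0):.3f}")

# The model sees only the surface string: it has no way to know whether the
# speaker belongs to the community reclaiming the word, which is exactly the
# false-positive failure mode the researchers describe.
```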

Determining the Impact of AI-Based Moderation 

To test this, he and Chellappa looked at data from the social media behemoth Twitter. During the pandemic in 2020, the platform made a significant shift to AI-based content moderation to accommodate stay-at-home measures. Twitter’s Academic Research API gave Gomez Martinez and Chellappa access to a complete listing of historical tweets and replies. Together they analyzed a total of 3.8 million interactions (1.8 million tweets and 2.0 million replies) from a panel of 2,751 users, of whom 1,224 self-identified as LGBTQ+ in their Twitter bios. Their study ran from January to May 2020: before, during, and after the switch to machine-based moderation.
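For readers curious what that data collection might look like in practice, the sketch below is a hypothetical reconstruction rather than the authors’ code. Twitter’s Academic Research track has since been discontinued; the query, user handle, and field choices are stand-ins.

```python
# A hypothetical reconstruction of the data pull, not the authors' code.
# Twitter's Academic Research track (since discontinued) exposed full-archive
# search; tweepy wraps it as search_all_tweets. The query, handle, and fields
# below are illustrative assumptions.
import tweepy

client = tweepy.Client(bearer_token="YOUR_ACADEMIC_BEARER_TOKEN")

# Pull one panel member's tweets and replies across the study window.
for page in tweepy.Paginator(
    client.search_all_tweets,
    query="from:example_user",           # stand-in for one of the 2,751 users
    start_time="2020-01-01T00:00:00Z",
    end_time="2020-05-31T23:59:59Z",
    tweet_fields=["created_at", "in_reply_to_user_id"],
    max_results=500,
):
    for tweet in page.data or []:
        kind = "reply" if tweet.in_reply_to_user_id else "tweet"
        print(tweet.created_at, kind, tweet.text)
```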

Using the same tools that Twitter deploys for content moderation, Gomez Martinez and Chellappa measured increases and decreases in prosocial, in-group teasing and in toxic language among LGBTQ+ users. Terms such as “bitch” or “queer” can function as ritualized insults, a practice the community dubs “reading,” which research shows can appear inappropriate or incoherent to outsiders, says Chellappa.

“Analyzing the language, we find a notable reduction in the use of terms that could be considered toxic. When the AI moderation is in effect, you see these users’ language become more vanilla,” he adds. Quantifiably so, in fact.  

Chellappa and Gomez Martinez find a 27 percent reduction in the use of reclaimed language among LGBTQ+ users. And while that may not sound like much, it’s significant for the community, says Gomez Martinez.

“Using in-group language and reading each other is one way for this marginalized group to create a sense of community and social status. Not just that: we know from research that LGBTQ+ people use slurs and insults as a way of preparing themselves emotionally and psychologically for hostile interactions with heterosexual individuals. This kind of teasing and play helps build resilience, so any reduction in it is significant.”

Jonathan Gomez Martinez
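The article doesn’t describe how the 27 percent figure was estimated, but a panel with a treated group (LGBTQ+ users) and a platform-wide policy change lends itself naturally to a difference-in-differences design. The sketch below is one plausible skeleton, with hypothetical column names and data file, not the paper’s actual specification.

```python
# The estimation strategy isn't spelled out in the article; a panel with a
# treated group and a platform-wide policy change naturally suggests a
# difference-in-differences design. Column names, the data file, and the
# clustering choice are all assumptions for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# One row per tweet: the author's id, whether their bio self-identifies as
# LGBTQ+, whether the tweet postdates the switch to AI moderation, and
# whether it contains reclaimed language.
df = pd.read_csv("tweet_panel.csv")  # hypothetical file

model = smf.ols(
    "reclaimed_language ~ lgbtq * post_ai_moderation", data=df
).fit(cov_type="cluster", cov_kwds={"groups": df["user_id"]})

# The interaction coefficient is the DiD estimate: the change in reclaimed-
# language use among LGBTQ+ users after the switch, relative to the change
# among everyone else.
print(model.summary().tables[1])
```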

Good Intentions May Breed Unexpected Consequences 

So what does this mean for social media, and for the LGBTQ+ community or any other marginalized group that might be prone to automated censorship? And how does this play out in the context of broader concerns around AI?

For Chellappa and Gomez Martinez, there is a major hazard in granting technology any degree of control over how human beings interact, and it’s rooted in the mismatch between good intentions and unexpected consequences. Their paper, one of the first to dig into AI’s impact on business and society, lays bare some of the real-world effects the technology has already had on marginalized people. While this study looks at the LGBTQ+ community, its findings could equally apply to any group prone to bias or exclusion, such as racial minorities or other underrepresented demographics.

“Wherever you have user-generated content, you are likely to find communities with their own, unique way of interacting. We looked at LGBTQ+ Twitter users, but you could also look at the African American community, for instance.”

Ramnath K. Chellappa

At a time when social media platforms rival traditional news outlets in their influence, this is a real concern. Censoring certain demographics might earn Twitter et al. an unwanted reputation for being anti-LGBTQ+ or racist, he adds. But the stakes here are bigger than bad publicity.

“Twitter has long aspired to be a kind of global town square,” says Gomez Martinez. “But you end up pretty far from that scenario if only some voices are truly heard, or if you start reinforcing biases because you are using a time-saving technology that is not equipped yet to understand the complexity and nuance of human interaction.” 

AI isn’t there yet, say Chellappa and Gomez Martinez. They caution against using it indiscriminately to expedite or streamline processes that shape human communication and interchange. Left unchecked, their research shows, AI has the potential to start dictating norms and nudging people into normative behavior, effectively homogenizing us. And that’s a problem.

Goizueta faculty apply their expertise and knowledge to solving problems that society—and the world—face. Learn more about faculty research at Goizueta. 

Ready to learn more about AI or level up your career? Learn more about Emory’s full-time MS in Business Analytics—now offering an AI in Business track—for early career professionals, or Emory’s advanced part-time MS in Business Analytics for Working Professionals. 


Photos from Emory Goizueta / Emory Business
