“Platforms like Facebook, YouTube, and Twitter are banking on developing artificial intelligence technology to help stop the spread of hateful speech on their networks. The idea is that complex algorithms that use natural language processing will flag racist or violent speech faster and better than human beings possibly can. Doing this effectively is more urgent than ever in light of recent mass shootings and violence linked to hate speech online.
But two new studies show that AI trained to identify hate speech may actually end up amplifying racial bias. [Sounds like liberal snowflakes melting in the heat of… facts.] In one study, researchers found that leading AI models for processing hate speech were one-and-a-half times more likely to flag tweets as offensive or hateful when they were written by African Americans, and 2.2 times more likely to flag tweets written in African American English (which is commonly spoken by black people in the US).
Another study found similar widespread evidence of
racial bias against [racism in] black speech in five widely used academic data sets for studying hate speech that totaled around 155,800 Twitter posts.”
Are these liberal twits suggesting that a silicon-chip-based artificial intelligence is racially biased? Or are liberals just allergic to facts? They did a study, and it didn’t support their anti-white worldview. It turned out that the AI found a higher percentage of hate speech in the texts of blacks than in the texts of whites.
Their only possible legitimate excuse for denying the obvious conclusion here is that this was based on tweets found on Twitter. Twitter itself may be to blame for allowing and ignoring (promoting?) so much anti-white hate speech from blacks, while enforcing its own policies more often when whites tweet hate speech. Twitter’s own racist bias – its leniency toward and tolerance of anti-white hate speech – may therefore falsely make it look like tweets from blacks are inherently more likely to include hate speech, because Twitter seems to allow proportionately more hate speech against whites to remain on the platform.
Sadly, the vox.com article is titled:
Of course, you could use such flawed logic to justify any conclusion. Example:
Polls show most people agree the worst acts of anti-Semitism were committed by Nazis.
Liberal illogical conclusion: polls are biased against Nazis.
It reminds me of this Biden quote:
Every day, the media and coastal elites tell us how horrible “hate speech” is on the internet and how something must be done to stop it. The Supreme Court of the United States has ruled that “hate speech,” however you define it, is First Amendment-protected speech in America.
Some researchers from Cornell University decided to build an artificial intelligence to identify “hate speech” and “offensive content.” It turns out that the remarks from white people were “substantially” less hateful than the comments purportedly made by minorities in the study. What is most interesting here is that the data was sourced from Twitter, which allegedly bans “hate speech” – unless, of course, that hate comes from minorities.
Of course, now that the data isn’t matching the expectations of researchers and journalists, they are making excuses. The AI must be racist or something.
From Campus Reform
A new study out of Cornell reveals that the machine learning practices behind AI, which are designed to flag offensive online content, may actually “discriminate against the groups who are often the targets of the abuse we are trying to detect,” according to the study abstract.
The study involved researchers training a system to flag tweets containing “hate speech,” in much the same way that other universities are developing systems for eventual online use, by using several databases of tweets, some of which had been flagged by human evaluators for offensive content.
“The results show evidence of systematic racial bias in all datasets, as classifiers trained on them tend to predict that tweets written in African-American English are abusive at substantially higher rates. If these abusive language detection systems are used in the field they will, therefore, have a disproportionate negative impact on African-American social media users,” the abstract continues.
…Cornell’s machine learning added another variable not used by other universities. Using a combination of census data, tweet location data, and demographic-specific language, they also trained the system to classify tweets as “black-aligned” or “white-aligned.” The researchers used five different databases of potential “hate speech” tweets. All five yielded the same results: tweets likely from African Americans were much more likely to be flagged as offensive than those likely to be from whites.”
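To make the kind of audit described above concrete: the core measurement is simply the rate at which a classifier flags posts from each group, and the ratio between those rates. Here is a minimal sketch of that calculation, assuming entirely made-up toy data – the function name, predictions, and group labels are hypothetical, not the study’s actual data or code.

```python
# Hypothetical sketch of a per-group flag-rate disparity check, in the
# spirit of the audits described above. All data here is toy/illustrative.

from collections import defaultdict

def flag_rates(predictions, groups):
    """Fraction of posts flagged as abusive (True) within each group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for pred, grp in zip(predictions, groups):
        total[grp] += 1
        flagged[grp] += int(pred)
    return {g: flagged[g] / total[g] for g in total}

# Toy flag decisions from some hypothetical abusive-language classifier.
preds  = [True, True, True, False, True, False, False, False]
groups = ["A",  "A",  "A",  "A",   "B",  "B",   "B",   "B"]

rates = flag_rates(preds, groups)    # {'A': 0.75, 'B': 0.25}
disparity = rates["A"] / rates["B"]  # 3.0: group A flagged 3x as often
```

A disparity well above 1.0 is what the quoted abstract calls “systematic racial bias”: the same classifier flags one group’s posts far more often than another’s. Note this measurement alone cannot say *why* – biased training labels, biased platform moderation, or genuine differences in content would all produce the same number.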
Just remember, Twitter is fast to eliminate some things and slow on others. Here’s one example of a tweet that was tolerated/ignored on Twitter for years, from the article:
For the moment, I’m willing to assume the fault rests more with Twitter and less with artificial intelligence.