
View Full Version : Berkeley scientists developing artificial intelligence tool to combat ‘hate speech’



End Times
17th December 2018, 11:18 AM
https://www.thecollegefix.com/berkeley-scientists-developing-artificial-intelligence-tool-to-combat-hate-speech-on-social-media/

‘Ten students of diverse backgrounds’ helped develop algorithm

Scientists at the University of California, Berkeley, are developing a tool that uses artificial intelligence to identify “hate speech” on social media, a program that researchers hope will out-perform human beings in identifying bigoted comments on Twitter, Reddit and other online platforms.

Scientists at Berkeley’s D-Lab “are working in cooperation with the [Anti-Defamation League] on a ‘scalable detection’ system—the Online Hate Index (OHI)—to identify hate speech,” the Cal Alumni Association reports.

In addition to artificial intelligence, the program will use several different techniques to detect offensive speech online, including “machine learning, natural language processing, and good old human brains.” Researchers aim to have “major social media platforms” one day utilizing the technology to detect “hate speech” and eliminate it, and the users who spread it, from their networks.
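Neither article describes the OHI's internals, but as a rough sketch of the kind of machine-learning/NLP classifier being described, something like the following would be a bare-bones starting point. The toy data, the TF-IDF features, and the logistic-regression model here are illustrative assumptions only, not the D-Lab's actual pipeline:

```python
# Minimal sketch of a supervised text classifier of the sort the article
# describes (machine learning + natural language processing). The tiny
# inline dataset and the TF-IDF + logistic regression model are
# illustrative assumptions, not the Online Hate Index's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled comments: 1 = flagged by human annotators, 0 = not.
comments = [
    "you people don't belong here",
    "great game last night, well played",
    "go back to where you came from",
    "anyone want to grab lunch tomorrow?",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score new posts; a platform would act on posts above some threshold.
for post in ["well played everyone", "you people don't belong"]:
    prob = model.predict_proba([post])[0][1]
    print(f"{prob:.2f}  {post}")
```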

End Times
17th December 2018, 11:21 AM
Profs develop tool for flagging 'social media prejudice'

https://www.campusreform.org/?ID=11423

Amid backlash against social media platforms accused of squelching conservative voices, two university professors are promoting a new system designed for "automatically detecting prejudice in social media posts."

The program can flag certain posts as “having the potential to spread misinformation and ill will,” according to the University of Buffalo. The system is the result of a recent study by Haimonti Dutta, an assistant professor in the university’s Department of Management Science and Systems, and K. Hazel Kwon, an assistant professor at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication.

“In social media, users often express prejudice without thinking about how members of the other group would perceive their comments,” Dutta said, according to the university’s website. Dutta further asserted that “this not only alienates the targeted group members, but also encourages the development of dissent and negative behavior toward that group.”

The study analyzed “intergroup prejudice” using Twitter data collected immediately after the 2013 Boston Marathon bombing. From this data, the researchers built a system to detect such prejudice with artificial intelligence and machine learning; messages the system identifies are then automatically flagged.
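The article doesn't say which model the study trained; the automatic-flagging step it describes might look roughly like the sketch below, where `score_prejudice` is a stand-in name (not from the paper) for whatever classifier the researchers actually used, and the threshold and example tweets are made up for illustration:

```python
# Sketch of the automatic-flagging step described above: run each collected
# message through a trained model and flag those scoring above a threshold.
# `score_prejudice` is a placeholder for the study's real classifier.
from typing import Callable, List, Tuple

def flag_messages(
    tweets: List[str],
    score_prejudice: Callable[[str], float],
    threshold: float = 0.8,
) -> List[Tuple[str, float]]:
    """Return (tweet, score) pairs whose score meets or exceeds the threshold."""
    flagged = []
    for tweet in tweets:
        score = score_prejudice(tweet)
        if score >= threshold:
            flagged.append((tweet, score))
    return flagged

# Example with a dummy scorer (a keyword heuristic standing in for a real model).
dummy_scorer = lambda t: 0.9 if "those people" in t.lower() else 0.1
print(flag_messages(["Those people did this", "Thoughts with Boston today"], dummy_scorer))
```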

“Terms like ‘prejudice’ are well defined in (((social psychology))) literature and we adopted the same,” Dutta told Campus Reform. “Our paper cites several references, the most prominent being Gordon Allport’s Nature of Prejudice, a 1954 publication.”

According to Dutta, detection of “prejudiced” social media content is an “increasingly important” but daunting task, one with which the university hopes the new system will assist.

Dutta explains that social media monitoring is critical because prejudiced messages have the ability to spread “far more rapidly and broadly” on social media than through person-to-person interactions.

“We would like to have a system like this integrated into browsers on the client side so that users can use them to tag social media content that causes hate, aversion and prejudice,” Dutta told Campus Reform.

Ares
17th December 2018, 11:22 AM

Until /Pol comes out with a new "hate speech" code specifically targeting this algorithm... They learned nothing from when /Pol came up with Skype, Googles, and other symbolic company names and logos to stand in for "hate speech." Speech is fluid and always changing.

Can you imagine how many false positives will be detected when normies start wanting to "Skype" with their friends over Twitter?? LOL
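For what it's worth, the false-positive worry is easy to demonstrate against a naive keyword blocklist (an assumed approach here; neither article claims these tools work by simple keyword matching):

```python
# Toy illustration of the false-positive problem with naive keyword matching:
# once an ordinary word doubles as a code word, a blocklist flags innocent
# posts too. This is a deliberately simplistic strawman, not how the
# Berkeley or Buffalo systems are described as working.
CODED_TERMS = {"skype", "google"}  # hypothetical blocklist

def naive_flag(post: str) -> bool:
    """Flag a post if any of its words appear on the blocklist."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & CODED_TERMS)

posts = [
    "Anyone want to Skype with me tonight?",      # benign use -> false positive
    "Just Google the directions before you go.",  # benign use -> false positive
]
for p in posts:
    print(naive_flag(p), p)
```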