Artificial intelligence is being developed that will allow advisory ‘quarantining’ of hate speech in a manner akin to malware filters – offering users a way to control exposure to ‘hateful content’ without resorting to censorship.

The spread of hate speech via social media could be tackled using the same ‘quarantine’ approach deployed to combat malicious software, according to University of Cambridge researchers.

Definitions of hate speech vary depending on nation, law and platform, and simply blocking keywords is ineffectual: graphic descriptions of violence need not contain obvious ethnic slurs to constitute racist death threats, for example. As such, hate speech is difficult to detect automatically. It has to be reported by those exposed to it, after the intended “psychological harm” is inflicted, with armies of moderators required to judge every case.

This is the new front line of an ancient debate: freedom of speech versus poisonous language. Now, an ...
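The advisory-quarantine idea described above can be illustrated with a minimal sketch. Note the assumptions: the `severity` scorer below is a hypothetical keyword-based stand-in, not the researchers' actual model (which, as the article notes, keyword matching alone cannot replicate); the point of the sketch is only the quarantine flow itself, where flagged messages are held with a warning and a score rather than deleted, leaving the choice to view with the recipient.

```python
from dataclasses import dataclass

# Hypothetical watchlist for illustration only; a real system would use a
# trained classifier, since keyword matching alone is ineffectual.
WATCHLIST = {"threat", "attack"}


@dataclass
class Message:
    sender: str
    text: str


def severity(msg: Message) -> float:
    """Crude stand-in score in [0, 1]: fraction of words on the watchlist."""
    words = msg.text.lower().split()
    if not words:
        return 0.0
    return sum(w in WATCHLIST for w in words) / len(words)


def quarantine(msg: Message, threshold: float = 0.2) -> str:
    """Advisory quarantine: warn rather than censor when the score is high.

    The message is never deleted; above the threshold the reader sees a
    warning with the severity estimate and can still choose to open it.
    """
    score = severity(msg)
    if score >= threshold:
        return (f"[quarantined] Message from {msg.sender} may be harmful "
                f"(score {score:.2f}). View anyway?")
    return msg.text


print(quarantine(Message("alice", "lunch at noon?")))
print(quarantine(Message("troll", "this is a threat of attack")))
```

Unlike a keyword block, this design is advisory: the benign message passes through untouched, while the flagged one is wrapped in a warning the user can click through, mirroring how malware filters quarantine rather than silently destroy suspicious files.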