Abstract

Social networks sometimes become a medium for threats, insults, and other forms of cyberbullying. Because a large number of people participate in online social networks, protecting users from anti-social behavior is a critical activity [19]. One significant task within that activity is the detection of toxic language. Abusive/toxic language in user-generated online content has become an issue of increasing importance in recent years. Most current commercial methods use blacklists and regular expressions; however, these measures fall short when contending with more subtle, lesser-known examples of hate speech, profanity, or swearing [6]. Abusive language classification has become an active research field with many recently proposed approaches. However, while these approaches address some of the task's challenges, others remain unsolved, and directions for further research are needed. In this work, we intend to build a robust text classifier to detect hate speech online. Our analysis explores a two-step approach to abusive language classification: first detecting abusive language, and second applying the model to classify it. We will explore various aspects of abusive language classification, examine their correlation with toxicity, and leverage the results to verify the toxicity of the language. Additionally, we will perturb the dataset to test whether the system can be circumvented. Apart from toxicity detection, model efficiency is equally important. We will review existing models and test a cascade/singular model that achieves high throughput in the average case while maintaining high accuracy by combining multiple approaches.
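The cascade idea sketched above can be illustrated with a minimal example. Everything here is hypothetical and not from the paper: the toy blacklist, the `slow_model_score` stand-in for an expensive classifier, and the threshold are all illustrative assumptions; the point is only the control flow of a two-stage cascade, where a cheap filter handles easy cases and only the rest reach the slower model.

```python
import re

# Hypothetical stage-1 filter: a tiny blacklist standing in for the
# regex/blacklist methods the abstract mentions. Real systems use far
# larger curated lists.
BLACKLIST = re.compile(r"\b(idiot|moron|scum)\b", re.IGNORECASE)


def slow_model_score(text: str) -> float:
    """Stand-in for an expensive classifier (e.g., a neural model).

    Toy heuristic for illustration only: the fraction of words
    written entirely in uppercase ("shouting").
    """
    words = text.split()
    if not words:
        return 0.0
    shouting = sum(1 for w in words if len(w) > 2 and w.isupper())
    return shouting / len(words)


def classify(text: str, threshold: float = 0.5) -> bool:
    """Return True if the text is flagged as abusive.

    Stage 1 (fast): blacklist lookup; a hit short-circuits the cascade,
    so the average-case cost stays low. Stage 2 (slow): only texts that
    pass the filter are scored by the expensive model.
    """
    if BLACKLIST.search(text):
        return True
    return slow_model_score(text) >= threshold
```

The design choice is the one the abstract hints at: the cascade keeps average-case throughput high because most inputs are resolved by the cheap stage, while the slower, more accurate stage preserves overall accuracy on the ambiguous remainder.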

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License.

Included in

Data Science Commons
