Call for Papers: The Use of Artificial Intelligence to Address Online Bullying and Abuse

DEADLINES:
ABSTRACTS - SEPTEMBER 15, 2020
FULL MANUSCRIPTS - DECEMBER 1, 2020

Cyberbullying, along with other forms of online harassment and abuse, poses significant challenges for online platforms. The use of natural language processing (NLP), various forms of machine learning (such as supervised learning and deep learning), and artificial intelligence (AI) is becoming more prevalent in moderating these behaviours on social media platforms and content-sharing apps. A number of social media companies point to their increasing reliance on AI to moderate various forms of abusive behaviour, indicating relative success in identifying such behaviour proactively. Nonetheless, companies reveal little about how such moderation is applied in practice or about the details of their algorithm design, and they rarely release datasets that would allow researchers outside the social media industry to understand this process. While some industry experts and scholars place significant hope in deep learning to solve the problem of online abuse, others point to the limitations of this approach, including the relative scarcity of training datasets, the misinterpretation of contextual cues and relational history, and the danger of systematic bias inadvertently slipping into such models. Furthermore, there is relatively little insight from sociology, psychology, and other social science disciplines into how users (both adults and children) perceive such interventions and whether they consider them desirable. For instance, how do users weigh the right to safety on the one hand against privacy and freedom of expression on the other when proactive moderation tools are applied?

For this special issue, we invite submissions from a range of disciplines that examine various aspects of applying AI to address online abuse. Relevant disciplines include, but are not limited to: communication, education, psychology, sociology, philosophy, computer science and engineering, human-computer interaction, and science and technology studies.

The goals of this special issue are to:

  • Outline various approaches to applying NLP, machine learning, and AI to address cyberbullying, harassment, and specific forms of cyberaggression

  • Outline the state of the field today, assessing the strengths and limitations of the solutions currently available

  • Solicit articles that not only report on current approaches to the use of AI in moderation but also critique the methods currently applied by social media platforms

  • Identify insights from the technical sciences and social science research that would inform computational scholars in the design and deployment of moderation tools

  • Facilitate interdisciplinarity by translating some of the work undertaken in computer science and engineering into language that is more accessible to scholars in the social sciences and humanities

  • Draw the attention of scholars in technical fields to work being done in the social sciences and humanities on this topic that can further inform their research

Abstracts (max. 500 words) should be submitted by September 15, 2020 to Tijana Milosevic at tijana.milosevic@dcu.ie. Full manuscripts (typically between 6,000 and 9,000 words; please seek permission in advance if you need to submit a shorter or longer manuscript) should be submitted by December 1, 2020. The issue is planned for June 2021. Please do not hesitate to contact us if you have any questions.

NB: We are interested in a wide range of topics and would also consider submissions that address the moderation of issues that do not necessarily fall under online bullying, such as online grooming. Nonetheless, please note that if you are contemplating a topic that lies somewhat outside the core scope of the special issue, it is important to tie the discussion to cyberbullying in some way, for example by contextualising cybergrooming as a form of online bullying.

We thank you in advance for considering our special issue as a venue for your work.

Sincerely,

Tijana Milosevic, Kathleen Van Royen, and Brian Davis

Guest Editors