The world is an ever-changing vortex, for better and for worse, and technological development often mirrors this phenomenon. One only has to think of all the latest forms of online communication: from social networks to instant chats, from video calls to virtual reality, to the metaverse.

It is a world of opportunities that cancels out distance by placing users in the same virtual environment, whether that is an online video game, a video conference on Zoom, or an Instagram or Facebook wall.

Along with so many positive aspects, however, every technology also brings with it a dark side, which, in the overwhelming majority of cases, is more related to misuse by users than to a “fault” of the technology itself.
Thus, for example, the ability to communicate instantly at a distance while “hiding behind” a nickname has also given rise to negative phenomena such as harassment and fraud, even creating “new” problems (or rather, new permutations of old ones) such as cyberbullying, a term introduced into the Italian legal system only a handful of years ago, with Law no. 71 of 29 May 2017.

The anti-bully invention

It is from this (long) premise that Sony Interactive Entertainment Inc.’s new patent originates: the company has designed a system for the PlayStation 5® (PS5) console that is capable of detecting harassment or cyberbullying through the biometric analysis of players combined with artificial intelligence.

But how exactly does it work?

Sony’s inventors envisage a PS5 equipped with an input unit configured to receive biometric data from all users participating in the same gaming session, paired with an artificial intelligence (AI) capable of translating the collected data into “emotions”, thus interpreting the mood of the players.
As soon as the system recognises a possible incident of cyber-violence, Sony’s idea is to first identify the victim (based on his or her reactions) and then the harasser, immediately “moving” them into two separate game environments, in order to then analyse the case and take action against the bully.
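The flow described above can be sketched in purely illustrative terms. Everything in the snippet below — the biometric fields, the thresholds, the emotion labels, and the “room” names — is an assumption invented for this sketch, not detail taken from Sony’s patent:

```python
from dataclasses import dataclass

@dataclass
class BiometricSample:
    """One reading from a player's biometric sensors (hypothetical fields)."""
    player_id: str
    heart_rate: int          # beats per minute
    skin_conductance: float  # microsiemens

def infer_emotion(sample: BiometricSample) -> str:
    """Stand-in for the AI that maps biometric data to an emotion label.
    The real system would use a trained model; these thresholds are invented."""
    if sample.heart_rate > 110 and sample.skin_conductance > 8.0:
        return "distressed"
    if sample.heart_rate > 100:
        return "agitated"
    return "calm"

def moderate_session(samples, interactions):
    """If a player appears distressed, identify the most recent player who
    targeted them and move the pair into separate game environments.

    interactions: list of (actor, target) in-game events, oldest first.
    Returns a mapping of player_id -> environment, empty if no incident."""
    for sample in samples:
        if infer_emotion(sample) == "distressed":
            victim = sample.player_id
            # Walk the event log backwards to find who last targeted the victim.
            suspects = [actor for actor, target in reversed(interactions)
                        if target == victim]
            if suspects:
                return {victim: "safe_room", suspects[0]: "review_room"}
    return {}
```

Under these assumptions, a session where one player’s readings cross the distress thresholds and the event log shows who targeted them would yield two separate environments, matching the “move, then review” sequence the patent describes.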

The current context

In gaming, current methods of detecting harassment generally require the victim to send a “complaint” to a moderator (human or virtual) reporting their harasser, the means the harasser employed, and any evidence against them (screenshots of an offending chat, session IDs of the video game they were playing, etc.). After review by the moderator, the harasser may be banned (temporarily or permanently) from the shared environment, or any further interaction with the victim within the game may be inhibited (again, temporarily or permanently).
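The report-then-review flow above can be summarised in a few lines of code. The class and function names, and the outcome labels, are hypothetical; this is only a schematic of the complaint-driven moderation the text describes:

```python
from dataclasses import dataclass

@dataclass
class HarassmentReport:
    """A victim's complaint to a moderator (fields are illustrative)."""
    reporter: str
    accused: str
    evidence: list  # e.g. chat screenshots, game session IDs

def review(report: HarassmentReport, upheld: bool,
           permanent: bool = False) -> str:
    """A human or virtual moderator reviews the report and, if it is
    upheld, bans the accused temporarily or permanently."""
    if not upheld:
        return "no_action"
    return "permanent_ban" if permanent else "temporary_ban"
```

The key weakness noted in the article is visible in the structure itself: nothing happens unless a `HarassmentReport` is filed in the first place.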

Often, however, the main stumbling block lies in the initial complaint, which for various reasons (fear, shame or even superficiality, in some cases) may never be made, leaving the field free for these individuals to harass other users, thus ruining the gaming experience and, above all, the mental and physical well-being of the players.

Hence Sony’s idea to “anticipate” the report by detecting users’ distress in real time, with the aim of reducing the proportion of unreported harassment incidents while making these shared environments safer.

Limits and opportunities

The system undoubtedly appears very complex and still has several grey areas, especially with regard to the interpretative capabilities of the AI. Yet it also promises a more serene future for online gaming enthusiasts who seek in video games a moment of escape and well-being, and who all too often find themselves confronted instead with new and unsettling forms of violence and bullying.

Will Sony succeed in turning its PlayStation into an oasis of well-being for its users?
Time will tell.