The first conference of the Hatedemics project, entitled “AI Against Hate and Misinformation”, took place on Thursday 13 March in Brussels. This one-day event brought together all of the project partners, along with leading experts, policymakers, civil society organisations, and technology professionals, to discuss hate speech and misinformation and to present the project’s first results on the Hatedemics Platform. The platform is a tool suite that will integrate innovative AI tools and materials for combating intolerance and discrimination online, support the design and deployment of interactive training and educational paths, raise awareness of the role fake news plays in fuelling hate speech, and empower action by engaging young people as activists. Alongside the project partners’ presentations, external speakers were invited to share their expertise during the discussions, including Ron Salaj (ImpactSkills), Hana Kojakovic (Media Diversity Institute), Lydia El Khouri (Textgain – AI for Good), Stella Meyer (H/Advisors Brussels), Gemma Cortada (Diputació de Barcelona) and Charlotte Weber (Make.org).
The conference opened with a keynote speech by Ron Salaj on the weaponisation of speech. He stressed the importance of free speech, which is endangered when it is monopolised by the wrong actors: speech belongs to those who control the media and the channels of communication. These actors therefore hold the power and use free speech “as a constellation to dismantle democratic institutions everywhere in the world”.
To give the audience context, the speakers attempted to define the key concepts underlying the project, exposing a problem: there is a lack of consensus on operational definitions of these concepts. The definition of hate speech, for instance, varies with each country’s legislation and depends on how the intention to harm is evaluated, an assessment that is subjective and strongly context-dependent. Still, hate speech can broadly be defined as a mechanism of dehumanising the other. As AI technology advances, concerns are emerging about online tools being used with malicious intent: “AI is like a hammer: it can be used to build something valuable or cause significant harm”.
With the rise of generative AI, the risks have grown, raising important questions about who gets to decide what we see online. A clear line needs to be drawn between free speech and illegal speech, and the entire system must be considered. Users should take a more active role rather than being seen merely as victims. Educating users, combined with laws and regulation, is essential. This is where the Hatedemics project comes in.
The Hatedemics platform is a tool to counter hate speech and misinformation. Designed with the expertise of fact-checkers, NGO operators and civil society members, the platform’s co-creation process was one of the key aspects discussed at the conference. The first step of the co-creation consisted in analysing and exploring hate speech on social media, more specifically on Telegram channels, to collect data for developing the platform’s algorithms. Through co-creation, the Hatedemics project aims to design something different by understanding the gaps in the field and increasing the platform’s value to users, thereby boosting its final uptake and impact. Co-creation also enabled the project partners to work across five languages (Italian, Spanish, English, Polish and Maltese). Initial feedback was positive and highlighted the aspects that need to be improved before the Platform’s release.
The project’s first results made it possible to define key educational goals to help learners address misinformation and hate speech. Indeed, the main objective of the project is to train the platform’s users, mostly students and young people, to respond to hate speech with counterspeech. Counterspeech, a key concept in the Hatedemics project, is the use of fact-based information to refute hateful claims and counter hate. As highlighted during the conference, fact-checking is essential in the fight against hate and misinformation online. For this reason, part of the Platform is devoted to a chatbot that will generate possible responses, grounded in fact-based arguments, to counter hate speech and misinformation.
The last part of the conference was dedicated to panel discussions on key issues related to the project. The first panel focused on the use of AI in combating hate speech and misinformation. The panellists explained that AI can be a useful tool for handling the hate generated online, since manual intervention alone cannot cope with the volume of content produced every day, even though AI moderation has its own limitations, such as the risk of censorship and of hindering freedom of speech.
The second panel discussion delved into the legal and ethical framework around the use of AI, explaining that since AI imitates human behaviour, it requires a specific literacy. AI is also becoming increasingly present in our lives, raising privacy concerns that call for a complete re-evaluation of privacy as we know it. Another problem mentioned was the difficulty legal officers face in keeping pace with the development of AI. Finally, the third panel discussion addressed the future of fact-checking on social media, especially in the context of Meta’s recent decision to abandon its fact-checking programme. The speakers emphasised how important fact-checkers are in ensuring accurate information while respecting free speech.
These discussions, and the conference as a whole, led to a general conclusion on using AI against hate and misinformation: to fight them effectively, the effort must be made collectively, across every aspect involved.